ZFS Importing a ZFS-on-Linux pool to FreeBSD

I've got a large raidz2 pool created under ZFS On Linux that I'd like to import to FreeBSD. I've been told that it is possible as long as the enabled options from ZOL are available under FreeBSD. I don't really know what is available though, and I couldn't find anything concise enough to convince me to risk the changeover so I thought I'd ask you guys.

I've pasted the output of # zpool get all below. With this information, can you tell if it would be possible to import this pool into FreeBSD without issue?

Thanks
Code:
NAME  PROPERTY                    VALUE                       SOURCE
data  size                        14.5T                       -
data  capacity                    51%                         -
data  altroot                     -                           default
data  health                      ONLINE                      -
data  guid                        8711605967801388191         default
data  version                     -                           default
data  bootfs                      -                           default
data  delegation                  on                          default
data  autoreplace                 off                         default
data  cachefile                   -                           default
data  failmode                    wait                        default
data  listsnapshots               off                         default
data  autoexpand                  off                         default
data  dedupditto                  0                           default
data  dedupratio                  1.00x                       -
data  free                        6.99T                       -
data  allocated                   7.51T                       -
data  readonly                    off                         -
data  ashift                      0                           default
data  comment                     -                           default
data  expandsize                  -                           -
data  freeing                     0                           default
data  fragmentation               11%                         -
data  leaked                      0                           default
data  feature@async_destroy       enabled                     local
data  feature@empty_bpobj         enabled                     local
data  feature@lz4_compress        active                      local
data  feature@spacemap_histogram  active                      local
data  feature@enabled_txg         active                      local
data  feature@hole_birth          active                      local
data  feature@extensible_dataset  enabled                     local
data  feature@embedded_data       active                      local
data  feature@bookmarks           enabled                     local
data  feature@filesystem_limits   enabled                     local
data  feature@large_blocks        enabled                     local
 
This is the feature list from an 11.0-RELEASE machine, which seems to have the same features, so you'd hope it would import, but cross-OS ZFS compatibility seems to have become a bit of a minefield recently.
Code:
sys      feature@async_destroy          enabled                        local
sys      feature@empty_bpobj            active                         local
sys      feature@lz4_compress           active                         local
sys      feature@multi_vdev_crash_dump  enabled                        local
sys      feature@spacemap_histogram     active                         local
sys      feature@enabled_txg            active                         local
sys      feature@hole_birth             active                         local
sys      feature@extensible_dataset     enabled                        local
sys      feature@embedded_data          active                         local
sys      feature@bookmarks              enabled                        local
sys      feature@filesystem_limits      enabled                        local
sys      feature@large_blocks           enabled                        local
sys      feature@sha512                 enabled                        local
sys      feature@skein                  enabled                        local
 
Maybe it would be worth creating a pool on a usb stick on the Linux machine, then seeing what happens if you try and import it on FreeBSD. If a small pool can be moved successfully from the same ZFSOnLinux version to FreeBSD, then it'd be reasonable to expect the bigger pool should work.
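For anyone wanting to try that, a rough sketch of the round trip might look like this (the pool name and /dev/sdX are just placeholders, adjust for your system):
Code:
# On the Linux box: create a small throwaway pool on the stick
zpool create testpool /dev/sdX

# Copy a few files onto it, then export it cleanly
zpool export testpool

# On the FreeBSD box: list importable pools, then import it
zpool import
zpool import testpool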
 
Maybe it would be worth creating a pool on a usb stick on the Linux machine, then seeing what happens if you try and import it on FreeBSD. If a small pool can be moved successfully from the same ZFSOnLinux version to FreeBSD, then it'd be reasonable to expect the bigger pool should work.

That's a good idea. I'll try that later tonight or tomorrow and report back on the result. It'd probably be useful information for others as well.
 
... cross-OS ZFS compatibility seems to be a bit of a minefield recently
Oh, in what way?
I've never had problems with ZFS pools that I created on FreeBSD and then used on Linux (Debian, Ubuntu, openSUSE all worked fine).
 
Personally I only really use FreeBSD so never have a problem.

I'm on the forums/mailing lists quite a lot and have seen numerous posts from people who have had problems moving between operating systems. Feature flags were added to allow different companies to work on OpenZFS at the same time without breaking cross-OS support, and they do achieve that. However, a lot of flags have been added over the past few years, and pools are designed to enable all flags by default, so it's very easy for features to become enabled without realising. If you move to another OS, it has to support all of those active features in order to import the pool. With different features being developed on different platforms, users have found that some feature flag has been enabled on their pool that isn't yet supported by the OS they want to import on.
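If anyone wants to check this before physically moving disks, the comparison is easy enough to do by hand (pool name data taken from the first post):
Code:
# On the target FreeBSD system: list every feature flag this ZFS version supports
zpool upgrade -v

# On the source system: show which flags are enabled/active on the pool
zpool get all data | grep feature@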

It seems the addition of new feature flags has slowed down recently, so things have stabilized a bit. I just wish the native encryption support from ZFSOnLinux would get ported to FreeBSD, along with something similar to the two-stage resilver on Oracle ZFS.
 
I just sat through a talk on ZFS.

You "should" be able to import the ZFS pool from the Linux machine onto FreeBSD with no problems. BUT going the other way, FreeBSD -> Linux, may not work at all, since the feature set on FreeBSD is more "mature" than the Linux implementation.
 
I'm using ZFS on 2 USB-flashdrives between several FreeBSD, TrueOS, smartOS and debian/devuan linux machines without any problems. Syncing the filesystems and properly exporting the pool is crucial, or some newly written files will be missing. I've never had a completely unusable pool though, even when unplugging the drive while writing.

Comparing the available features upfront might be a good idea, so you can disable flags or create the pool on the system with the least advanced featureset.
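If you do end up recreating a pool, zpool create also takes -d to start with all feature flags disabled, so you can enable only the ones both systems support. A rough sketch (pool and device names are made up):
Code:
# Start with no features enabled, then switch on only what you need
zpool create -d \
    -o feature@async_destroy=enabled \
    -o feature@lz4_compress=enabled \
    newpool mirror /dev/ada1 /dev/ada2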

The pool on my home storage server was created on a Debian system and has been running on FreeBSD for ~3 months. Even though the pool was working without a problem, I migrated the datasets into a newly created pool to get rid of the ext2 boot partitions and the remains of grub on the disks. Migration was done by splitting the mirrored pool, purging one half of the disks and creating a new pool, then send/receive of the datasets into the new pool.
 
I'm using ZFS on 2 USB-flashdrives between several FreeBSD, TrueOS, smartOS and debian/devuan linux machines without any problems.
Cool!
OT: It would be great to know whether flash drives live longer with ZFS or with UFS...
 
I've only had 2 flash drives die so far - back in the days when they had sizes in the range of 2-digit MB. Since then they've always been replaced with bigger or faster ones, or because I (or my desk...) lost them, way before they got really old.
In terms of wear by the filesystem, I'd rather have ZFS telling me the drive is about to go belly up with increasing errors than a filesystem that makes the drive last a while longer but then just fails without any warning or returns false data.

However, I wore out 2 dirt-cheap SSDs with ZFS within ~5 months by using them as L2ARC and ZIL in a test pool. So if you want to kill your USB pendrives really fast, use them as ZIL devices ;)
 
In terms of wear by the filesystem, I'd rather have ZFS telling me the drive is about to go belly up with increasing errors than a filesystem that makes the drive last a while longer but then just fails without any warning or returns false data.
This is an interesting point because, as I understand it from various docs, in the case of a single error your drive will be put offline, and your data will only be available if redundancy is available.
Are you using your two USB flash drives as a mirrored pool? Or am I misunderstanding the documentation?
 
ZFS will only put the pool offline if it thinks the disk is faulted or vital metadata is corrupted. I think it may offline a disk if it gets too many errors but I'm not 100% on that, or what the limit is if it does. If you only have a few bad blocks that have corrupted file data, a scrub should find and identify the files (allowing you to remove them) without the pool going offline. I guess that's better than files being silently corrupted.
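As a concrete example, this is roughly how you'd surface those damaged files (using the pool name data from the first post):
Code:
# Verify every block's checksum in the background
zpool scrub data

# Once the scrub completes, -v lists files with permanent (unrepairable) errors
zpool status -v data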

You can also increase the copies setting so that data is stored more than once, which can provide some fault tolerance on a single disk. This is no replacement for actual pool redundancy though, and I'd advise anyone with data on ZFS they want to keep safe to also back it up to another pool. ZFS can be very quick to decide a pool is broken if it wants to and data recovery is basically a non-starter.
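For reference, copies is a per-dataset property and only applies to data written after it's set, so it's best set when the dataset is created (dataset name is just an example):
Code:
# Keep two copies of every block written to this dataset from now on
zfs create -o copies=2 data/important

# Or raise it on an existing dataset; blocks already on disk keep their old copy count
zfs set copies=2 data/important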
 
If you only have a few bad blocks that have corrupted file data, a scrub should find and identify the files (allowing you to remove them) without the pool going offline.
That appears to be an incorrect belief to me.

from zpool(8):
"Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data corruption. All metadata and data is checksummed, and ZFS automatically repairs bad data from a good copy when corruption is detected.

In order to take advantage of these features, a pool must make use of some form of redundancy, using either mirrored or raidz groups. While ZFS supports running in a non-redundant configuration, where each root vdev is simply a disk or file, this is strongly discouraged. A single case of bit corruption can render some or all of your data unavailable."
 
A single case of bit corruption can render some or all of your data unavailable.
That just confirms what I said as far as I can see. Corruption can render *some* of your data unavailable, i.e. the data that's actually corrupted. It can also cause the entire pool to fail if the corruption affects metadata vital to the pool, as I also said.

Just to add, the quote says you have to have redundancy to take advantage of "these features". Checksumming and scrub work regardless. It's only actual device failure and automatic recovery that don't work, for obvious reasons. However I'm sure repair is possible if you have copies > 1, otherwise that feature would be completely useless.
 
From my testing on USB drives I found that ZFS quickly refuses to import a pool if it finds an inconsistency in its transactions (e.g. after pulling out the drive during a write). Anyhow, I managed to import all of those forcefully faulted pools with zpool import -fFn or -fFX, losing only the latest transactions, meaning files that were still being written were lost and changed files were back at their last consistent state. After scrubbing, these pools worked just fine.
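For anyone in the same situation, the usual approach is to dry-run the rewind first and only then do it for real (pool name is just an example):
Code:
# -F discards the last few transactions to recover; -n makes it a dry run that reports what would happen
zpool import -f -F -n mypool

# If the dry run looks sane, perform the actual rewind import
zpool import -f -F mypool

# Last resort: -X allows a much more aggressive (and slower) search for a valid txg
zpool import -f -F -X mypool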

I also tried formatting a drive into two partitions of identical size and using these partitions for mirroring. Of course there are many scenarios in which this approach won't help, but when it comes to simple bitflips/bitrot it should provide just enough of a safety net to not lose the data.
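On FreeBSD, that single-drive mirror setup would look roughly like this (da0 and the partition sizes are hypothetical, adjust to your stick):
Code:
# Split the stick into two equal GPT partitions and mirror ZFS across them
gpart create -s gpt da0
gpart add -t freebsd-zfs -s 7G da0
gpart add -t freebsd-zfs -s 7G da0
zpool create stickpool mirror /dev/da0p1 /dev/da0p2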

ZFS, as already mentioned, doesn't fault the whole pool on single errors. As long as the metadata of the pool itself is consistent, the pool will continue to work (more or less) normally, but ZFS will refuse to let you read the affected files if it can't repair them (e.g. no redundancy). These files will just vanish from your sight, although in one of the recent BSDNow episodes Allan Jude mentioned there are some experimental, undocumented subcommands that might bring back at least what's remaining of the data.


I don't and won't use USB flash drives as permanent storage or as a single backup target. I mainly use them as temporary storage to transport or migrate data between systems, so the data is always on at least one system, if not already within my backups. So I mostly use the flash drives without redundancy to get more data on them. "Redundancy with an airgap" is also resilient to me losing one of these tiny pendrives on my desk or in my laptop bag ;)
 
Perhaps I didn't explain myself appropriately, but my doubts remain about detecting "increasing errors" in the USB flash drive context.

usdmatt, I was mainly referring to a single-drive pool, as would be typical for a USB flash device.
 