ZFS: When is it safe to do a zpool upgrade, and when is it not?

Simple question, isn't it? For the moment I don't dare ...
How will it work when I import the pool on Void Linux or Alpine Linux?
What if I switch from the base zfs kmod to the openzfs kmod?
 
I don't mix my ZFS pools with Linux so I can't comment on that, other than to say you should be careful when doing it. Don't blindly upgrade your pools, as you run the risk of making them unusable on other systems.

For upgrades within FreeBSD itself, I only upgrade the pools when I'm dead certain I won't have to roll back a version or 'dual-boot' between FreeBSD 12 and 13, for example. The same goes for switching between FreeBSD's ZFS and OpenZFS: don't upgrade if you want or need to switch back.
 
The zfs kmod is more mature than the openzfs-kmod; it has been tested longer. On the other hand, the openzfs-kmod will receive bug fixes more quickly because it does not rely on the historical Solaris source. What is your personal view on this? Should users who have a zpool made with the zfs-kmod change to the openzfs-kmod? If I'm correct, the openzfs-kmod is the future.
 
A very general statement is:
Don't zpool upgrade if you think you may need to import the pool on an older version of ZFS.

The why:
New versions may add features/flags/stuff that the older versions don't understand. An older version will almost always refuse to import a newer pool read/write. It may be possible to import the newer pool read-only, which means you could pull data off it but could not modify it in any way. If the new features/flags/stuff are not actually in use on the pool, the older version may allow a read/write import.
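
As a rough sketch of what that fallback looks like (the pool name tank is only an example):
Code:
# on the system with the older ZFS, try a read-only import
zpool import -o readonly=on tank
# or just list importable pools and their reported status first
zpool import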

This becomes important with Boot Environments and loaders (gptzfsboot in particular). Newer versions of the bootcode should be able to boot older pools because they maintain backwards compatibility, but older bootcode may not be able to boot newer zpools.

I tend to update the bootcode whenever I zpool upgrade my boot pool, and I tend not to zpool upgrade my boot pool until I've destroyed all Boot Environments from the older version. Example:
A system running FreeBSD 12.x. Did freebsd-update to FreeBSD-13.0-RELEASE chrooted into a new BE. Updated gptzfsboot on the boot devices with the one from the FreeBSD-13.0-RELEASE BE, activated the new BE, then rebooted.
Made sure everything was working (waited a week, rebooted a few times, etc.), blew away the FreeBSD 12 BEs, then ran zpool upgrade on the boot pool.
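
For reference, updating gptzfsboot on a GPT-partitioned boot disk usually looks something like the following; the device name ada0 and the partition index 1 are assumptions, so check yours with gpart show first:
Code:
# find which partition holds the freebsd-boot code
gpart show ada0
# write the new gptzfsboot (here, the one from the 13.0 BE) to that partition
gpart bootcode -p /boot/gptzfsboot -i 1 ada0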

Edit:
It should be obvious, but I'm just going to state that I tend to be conservative regarding upgrades/updates. It's easier in the long run if you slow down and do things methodically. I learned that the hard way, and it only took twice to make it stick.
 
I haven't received a clear response about the choice between zfs-kmod and openzfs-kmod. Maybe providing a clear response is not easy. That's OK, but it would be nice to have some general guidelines applicable to most cases, let's say 90% of them.
Sometimes you go for the newer stuff and that carries a risk with it. As long as you are aware of it, there is no problem.
 
Should users who have a zpool made with the zfs-kmod change to the openzfs-kmod?
I haven't received a clear response about the choice between zfs-kmod and openzfs-kmod.
Stick to the version that comes with your version of FreeBSD. That means FreeBSD's own ZFS on anything below 13.0, and OpenZFS on 13.0 and later.

If I'm correct, the openzfs-kmod is the future.
13-STABLE recently imported OpenZFS 2.1, which means 13.1-RELEASE will have OpenZFS 2.1.

 
I haven't received a clear response about the choice between zfs-kmod and openzfs-kmod. Maybe providing a clear response is not easy. That's OK, but it would be nice to have some general guidelines applicable to most cases.

As of FreeBSD-13.0-RELEASE, the zfs kmod in the base system is OpenZFS. Whatever local changes FreeBSD makes are pushed back upstream to the OpenZFS project.
The sysutils/openzfs-kmod port is basically a development version of OpenZFS, so yes, it tracks upstream more closely; the current version in ports is roughly from June 2021.

I'm not sure of the exact process used to pull OpenZFS into base, but I can imagine that at some point the port winds up in base while the port itself keeps close to upstream.

All that said, if you are running FreeBSD-13.0-RELEASE or later, the only reasons to use the OpenZFS port are that you are interested in features not yet in base, or you are helping with the development of OpenZFS and its changes for FreeBSD.

If you are running FreeBSD-12.0, the port is useful if you are interested in migrating to OpenZFS as an early test before upgrading your systems to FreeBSD 13.
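
A rough sketch of that early-test setup on 12.x, based on how the port is normally wired up; treat the exact loader variable as an assumption and check the port's pkg-message on your system:
Code:
pkg install openzfs openzfs-kmod
# load the port's module at boot instead of the base one (per the port's notes)
echo 'openzfs_load="YES"' >> /boot/loader.conf
# the usual ZFS service setting still applies
sysrc zfs_enable="YES"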
 
… How will it work when I import the pool on Void Linux or Alpine Linux?
What if I switch from the base zfs kmod to the openzfs kmod?

If you're unsure about using physical machines to discover the pros and cons, give yourself four virtual machines (I use VirtualBox, YMMV). In addition to the boot disk for each machine:
  • create just one additional virtual disk, to be attached to one machine at a time
  • experiment: use this disk to discover how ZFS, or OpenZFS, responds when presented with a pool with a full set, or a subset, of features active.

zpool-features(7)

That's not methodical advice, because I'm happily ignorant of the features of Alpine and Void, but experiments with OpenZFS are enlightening.
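
For example, on the scratch disk you could create a pool with all feature flags disabled and then enable them one at a time, re-testing the import on each system; the device da1, the pool name, and the chosen feature are only examples:
Code:
# create a pool with no feature flags enabled
zpool create -d scratch da1
# enable one feature at a time, then try importing on the other OS
zpool set feature@lz4_compress=enabled scratch
# see which features are enabled/active/disabled
zpool get all scratch | grep feature@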
 
I am using sysutils/openzfs and sysutils/openzfs-kmod on a few 12.2 hosts that cannot be upgraded to 13.0 because of hardware support that was stripped out of 13.0. I found that OpenZFS on 12.2 doesn't run properly straight from the package; some work is needed to deal with missing rc scripts and to replace the original zfs programs in /sbin with symlinks to the replacements in /usr/local/sbin.

I have noticed recently that freebsd-update wants to replace my symlink for zpool with a binary that is different from the original 12.2 zpool. I have also noticed that if the machine is booted with this new zpool binary, it can correctly mount datasets created by OpenZFS, which is something the original 12.2 zpool could not do. The new binary is smaller than the sysutils/openzfs packaged version.

I can't find an errata notice or advisory for zpool that mentions this new ability to mount OpenZFS datasets. Have you seen one that I have missed? What I would like to know is: can I safely use the freebsd-update-installed zpool binary that I am seeing on machines already at 12.2-p8, or should I keep replacing it with a symlink to the OpenZFS packaged version in /usr/local/sbin?
 
The sysutils/openzfs and sysutils/openzfs-kmod I am using report the following versions:

Code:
zfs-2.0.0-rc1
zfs-kmod-v2021030100-zfs_2e160dee9

I will try to get version information on a non-production host where I will let freebsd-update replace zpool. I don't have anything running it at the moment to do a comparison with.
 
I don't know the answer to your question, but did you actually create new pools with the openzfs kmod, or were the pools originally created with the native ZFS in FreeBSD 12?
If the pools were created with FreeBSD 12, did you run zpool upgrade after starting to use OpenZFS?

If the pools were created with FreeBSD 12 and were NOT upgraded after switching to OpenZFS, I think you are safe because the zpools are still technically FreeBSD 12 pools.

If you did zpool upgrade after starting with OpenZFS, it may cause problems. If you have not created any new pools or enabled any OpenZFS features, you may be OK.

Basically, I really don't know.
 
I am using sysutils/openzfs and sysutils/openzfs-kmod on a few 12.2 hosts that cannot be upgraded to 13.0 because of hardware support that was stripped out of 13.0. I found that OpenZFS on 12.2 doesn't run properly straight from the package; some work is needed to deal with missing rc scripts and to replace the original zfs programs in /sbin with symlinks to the replacements in /usr/local/sbin.

I have noticed recently that freebsd-update wants to replace my symlink for zpool with a binary that is different from the original 12.2 zpool. I have also noticed that if the machine is booted with this new zpool binary, it can correctly mount datasets created by OpenZFS, which is something the original 12.2 zpool could not do. The new binary is smaller than the sysutils/openzfs packaged version.

I can't find an errata notice or advisory for zpool that mentions this new ability to mount OpenZFS datasets. Have you seen one that I have missed? What I would like to know is: can I safely use the freebsd-update-installed zpool binary that I am seeing on machines already at 12.2-p8, or should I keep replacing it with a symlink to the OpenZFS packaged version in /usr/local/sbin?
You must set your PATH so that /usr/local/sbin comes before /usr/sbin and /sbin, because the binaries in those directories have the same names. Then you normally don't need to use symlinks.
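
For the record, that ordering would look something like this in ~/.profile or /etc/profile (sh syntax); the exact file you put it in is up to you:
Code:
# put the ports binaries ahead of the base system ones
PATH=/usr/local/sbin:/usr/local/bin:$PATH
export PATH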
 
I don't know the answer to your question, but did you actually create new pools with the openzfs kmod, or were the pools originally created with the native ZFS in FreeBSD 12?

All 12.2 pools other than 'zroot' were created in OpenZFS 2.0 with the encryption feature enabled, but without encrypted datasets (yet). I am using encrypted datasets on some of the 13.0 hosts.

As far as I can remember, a non-root pool created with 12.2 ZFS (base) and then upgraded under OpenZFS cannot be mounted after a reboot.
 
freebsd-update wants to replace my symlink for zpool with a binary that is different from the original 12.2 zpool. I have also noticed that if the machine is booted with this new zpool binary, it can correctly mount datasets created by OpenZFS, which is something the original 12.2 zpool could not do. …

You'd need to look at the features of the pool.

man 7 zpool-features

Initial support for feature flags was committed nine years ago: <https://cgit.freebsd.org/src/commit/?id=2d9cf57e18654edda53bcb460ca66641ba69ed75>
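
Checking is quick; the pool name tank is only an example:
Code:
# list the feature flags and their state (enabled/active/disabled) on the pool
zpool get all tank | grep feature@
# show which features the installed ZFS tools know about
zpool upgrade -v
# status will also note whether the pool can be upgraded
zpool status tank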
 
You must set your PATH so that /usr/local/sbin comes before /usr/sbin and /sbin, because the binaries in those directories have the same names. Then you normally don't need to use symlinks.
From a penetration standpoint, my first point of attack on a system is the PATH variable. It enables me to leave a similarly named script masquerading as a system binary in an executable directory that I have write access to. I only need to wait for the PATH variable to be reordered the way I want it in order to seize control of the host. I can daisy-chain to the real command so that the end user is none the wiser and the breach can continue undetected indefinitely. I can always switch the PATH back and forth between what you think it should be and what I want it to be to avoid detection.

From my own experience, my preference is to NEVER RELY ON THE PATH VARIABLE being true. Always hard-code full path names for binaries into your scripts, including the startup scripts. Symlinks are far safer than the PATH.

For OpenZFS 2.0 on 12.2, I renamed the original z* binaries to z*.orig and chmod'ed them to 444.
I then created symlinks for each deactivated binary in /sbin pointing back to the replacements in /usr/local/sbin. It works just fine for me, up until freebsd-update.
 
So if I'm correct, after each upgrade you must redo the "I renamed the original z* binaries to z*.orig and chmod them to 444" step.
Certainly not a bad idea to rename the originals.
So after each upgrade, just:
Code:
mv /sbin/zpool /sbin/zpool.orig
mv /sbin/zfs /sbin/zfs.orig
 
Yes, that's effectively what I am doing. Actually, I have an Ansible playbook that does this: it detects the new binary from freebsd-update and switches everything back to using the packaged versions of OpenZFS. I foolishly had not envisioned that the base zpool, or any other z* binary, might be replaced by freebsd-update before these hosts got upgraded to 13.0. I need to establish whether the new zpool binary is feature-compatible with the packaged version so that I can change my playbook to accept its presence in /sbin. I have a plan now, thanks to grahamperrin.
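
Not the actual playbook, but a rough shell equivalent of the check it performs; the paths match the earlier posts, and everything else is an assumption:
Code:
# if freebsd-update has put a regular file back in /sbin, move it aside
# and restore the symlink to the packaged OpenZFS zpool
if [ ! -L /sbin/zpool ]; then
    mv /sbin/zpool /sbin/zpool.orig
    chmod 444 /sbin/zpool.orig
    ln -s /usr/local/sbin/zpool /sbin/zpool
fi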
 
This is my ZFS base move script. The replacement files are provided by openzfs:
Code:
mv /usr/sbin/zdb             /sbinbackup
mv     /sbin/zfs             /sbinbackup
mv  /usr/bin/zinject         /sbinbackup
mv     /sbin/zpool           /sbinbackup
mv  /usr/bin/zstream         /sbinbackup
mv  /usr/bin/zstreamdump     /sbinbackup
mv  /usr/bin/ztest           /sbinbackup
cd /sbin
ln -s /usr/local/bin/zdb
ln -s /usr/local/bin/zfs
ln -s /usr/local/bin/zinject
ln -s /usr/local/bin/zpool 
ln -s /usr/local/bin/zstream
ln -s /usr/local/bin/zstreamdump
ln -s /usr/local/bin/ztest
 
From a penetration standpoint, my first point of attack on a system is the PATH variable. It enables me to leave a similarly named script masquerading as a system binary in an executable directory that I have write access to.
Theoretically correct. In practice, anyone who has write access to /usr/local/bin and /usr/local/sbin likely has root-like powers, and can therefore also modify /usr/bin and /usr/sbin. I know that in theory they could be protected differently, but that would be very unusual.

From my own experience, my preference is to NEVER RELY ON THE PATH VARIABLE being true. Always hard-code full path names for binaries into your scripts, including the startup scripts.
Good advice. Or make the scripts self-contained: right at the beginning, set the PATH to what it should be, then rely on it. But the problem with this technique is that you have now introduced coupling between where files are stored and the scripts that use them. So from now on, when you change one side, you have to update the other.

Symlinks are far safer than the PATH.
Except that you have to store your symlinks in places like /usr/bin, which programs like freebsd-update think they have the freedom to modify during upgrades. So after every upgrade you have to recheck the links.

By the way, as much as I point out the problems with this approach, I use it too. For example, to make life easier, I have Python programs that can be used on multiple platforms. The python executable is stored in different places on different platforms, yet I need to know its location for the shebang line of the script. So I now have the convention that every machine I control (and where these scripts need to run) will have /usr/local/bin/python3, which can be a soft link or the real executable, but it will work.
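
On a machine where the interpreter lives elsewhere, that convention reduces to something like this (the source path is only an example):
Code:
# e.g. on a Linux host where the interpreter is /usr/bin/python3
ln -s /usr/bin/python3 /usr/local/bin/python3
# every script then starts with:  #!/usr/local/bin/python3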
 