OpenZFS 2.3 on 14.3-STABLE: boot fails with "ZFS: unsupported feature: org.openzfs:raidz_expansion" after `zpool upgrade`

Issue Description​

Booting a ZFS root using gptzfsboot or loader.efi from either 14.3-STABLE or 15.0-PRERELEASE fails with:

Code:
ZFS: unsupported feature: org.openzfs:raidz_expansion

Background​

I wanted to expand the raidz1 zpool where the root volume is, on 14.3-STABLE:

- installed the openzfs 2.3 package
- configured system to use openzfs
- upgraded and expanded the raidz1 zpool (roughly as sketched below)
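
Roughly, the upgrade and expansion commands were of this form (a sketch; `zroot`, `raidz1-0` and `da3` are placeholders for my actual pool, vdev and new disk names):

Code:
# enable the new feature flags on the pool (this is what later broke booting)
zpool upgrade zroot
# attach a new disk to the existing raidz1 vdev, triggering raidz expansion
zpool attach zroot raidz1-0 da3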

The next boot failed with the boot loader showing the error.

My situation looks the same as the one described in this post:


Attempts to Resolve​

- reinstalled the bootcode from FreeBSD 15 memstick
- installed FreeBSD 15 into a new ZFS volume, chrooted and installed the bootcode from there
- removed the freebsd-boot partition, created an EFI partition, installed loader.efi (commands sketched below)
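
For reference, the bootcode steps were along these lines (a sketch; the disk name and partition indices are placeholders for my actual layout):

Code:
# legacy BIOS boot: rewrite pmbr + gptzfsboot into the freebsd-boot partition
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
# EFI attempt: format the efi partition and copy in the loader
newfs_msdos -F 32 /dev/da0p1
mount -t msdosfs /dev/da0p1 /mnt
mkdir -p /mnt/efi/boot
cp /boot/loader.efi /mnt/efi/boot/bootx64.efi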


any ideas? thanks :)
 
Thanks for the reply. Do I need to build it myself, or are other builds available for download?

I got the snapshots from https://download.freebsd.org/snapshots/amd64/15.0-PRERELEASE/

Date is 2025-Aug-22, REVISION is at 027be99b1f33.




In any case I'm afraid those changes will not help in my case.

I'm trying to figure out which zpool features are supported by gptzfsboot.

The docs are outdated; the latest file in /usr/share/zfs/compatibility.d is freebsd-13.2, which corresponds to zfs 2.1.
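
One way to check directly is to look at the whitelist in the loader source (a sketch; the path assumes a checked-out FreeBSD src tree at /usr/src):

Code:
# the read-feature whitelist used by gptzfsboot/loader.efi lives in libsa
grep -A 25 'features_for_read' /usr/src/stand/libsa/zfs/zfsimpl.c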


I think it's these:



C:
static const char *features_for_read[] = {
    "com.datto:bookmark_v2",
    "com.datto:encryption",
    "com.delphix:bookmark_written",
    "com.delphix:device_removal",
    "com.delphix:embedded_data",
    "com.delphix:extensible_dataset",
    "com.delphix:head_errlog",
    "com.delphix:hole_birth",
    "com.joyent:multi_vdev_crash_dump",
    "com.klarasystems:vdev_zaps_v2",
    "org.freebsd:zstd_compress",
    "org.illumos:lz4_compress",
    "org.illumos:sha512",
    "org.illumos:skein",
    "org.open-zfs:large_blocks",
    "org.openzfs:blake3",
    "org.zfsonlinux:large_dnode",
    NULL
};

So if this is correct, even in the latest commit there's no support for booting from a zpool with the `org.openzfs:raidz_expansion` feature enabled.
 
What is the expectation for the case "I installed 14-RELEASE with the default version of OpenZFS for that release, but then pkg installed a dev version of OpenZFS and ran zpool upgrade"?
Does installing openzfs from packages also provide a new loader, or does the quarterly openzfs pkg maintain compatibility with the -RELEASE loader?
 
Hmm, I have a slightly older 15.0-CURRENT (it's still -CURRENT, not -PRERELEASE) than the 15.0-PRERELEASE snapshot you used.

Code:
dice@chibacity:~ % zfs -V
zfs-2.3.99-443-FreeBSD_g69ee01aa4
zfs-kmod-2.3.99-443-FreeBSD_g69ee01aa4
According to zpool-features(7) raidz_expansion is supported.

Code:
     raidz_expansion
             GUID                  org.openzfs:raidz_expansion
             DEPENDENCIES          none
             READ-ONLY COMPATIBLE  no

             This feature enables the zpool attach subcommand to attach a new
             device to a RAID-Z group, expanding the total amount usable space
             in the pool.  See zpool-attach(8).

So, I kind of expect the boot loader from your 15.0-PRERELEASE to support it. Can't test it though.
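
One way to check without actually booting from it might be to look for the feature name inside the boot binaries themselves, since the whitelist is compiled in as plain strings (a sketch, assuming the strings are not compressed away):

Code:
# if the loader whitelists the feature, its GUID string should show up here
strings /boot/gptzfsboot | grep raidz_expansion
strings /boot/loader.efi | grep raidz_expansion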
 
FreeBSD-CURRENT, as a development version and being main, is 'constantly on the move'.
FreeBSD-STABLE to a lesser extent, even though it is also a development version.

[...] Booting a ZFS root using gptzfsboot or loader.efi from either 14.3-STABLE or 15.0-PRERELEASE fails with:
Code:
ZFS: unsupported feature: org.openzfs:raidz_expansion
[...]
I wanted to expand the raidz1 zpool where the root volume is, on 14.3-STABLE:

- installed the openzfs 2.3 package
- configured system to use openzfs
- upgraded and expanded raidz1 zpool
[...]

Installing packages containing kernel modules, such as filesystems/openzfs-kmod, must be matched to your running kernel; for stable/14 (using v. 1403505), that means:
Code:
# pkg rquery -x '%R %o %n %v' 'openzfs' | column -t
FreeBSD        filesystems/openzfs       openzfs       2.3.3,1
FreeBSD        filesystems/openzfs-kmod  openzfs-kmod  2.3.3.1402000,1
FreeBSD-kmods  filesystems/openzfs-kmod  openzfs-kmod  2.3.3.1403504,1
You should have the last one installed with the version number containing the string 1403504; similar when using -CURRENT. Please verify your pkg configuration setup, especially for the FreeBSD-kmods repository; this may help.
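
A quick way to verify that (a sketch; the repository names shown in the output are the defaults):

Code:
# show which package repositories are configured and enabled
pkg -vv
# confirm which repository the installed openzfs-kmod came from
pkg query '%R %n-%v' openzfs-kmod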

Furthermore, you may have followed a particular installation sequence of current and stable that led to a root-on-ZFS installation where the root pool has already incorporated the relevant feature flag org.openzfs:raidz_expansion, which may clash with, for example, -STABLE. The table at Feature flags implementation per OS shows:
Rich (BB code):
Feature Flag                [...] OpenZFS (Linux, FreeBSD 13+)
                            [...] 2.2.8 2.3.4  master
org.openzfs:raidz_expansion [...] no    yes    yes
This particular feature flag is listed for OpenZFS 2.3.4 and master, versions that are not present in filesystems/openzfs-kmod.

Based on the latest repository for -CURRENT, you'd get:
Code:
 # pkg rquery -x '%R %o %n %v' 'openzfs' | column -t
FreeBSD-kmods  filesystems/openzfs-kmod  openzfs-kmod  2.3.3.1500053,1
However, as mentioned:
Try a more recent version of 15.0-CURRENT. OpenZFS 2.4.0 rc1 was imported quite recently.
which should support this feature flag fully.

Alternatively, for example for stable/14, you could get the suitable OpenZFS version from OpenZFS directly; however, I'm unsure whether that will give you a gptzfsboot(8) with the expected behaviour.
 
-current has 2.3 and the raidexpand feature. But that isn't the issue.

The issue is that the boot code seems to be behind and hasn't been told what to do with raidexpand.
 
FreeBSD-CURRENT, as a development version and being main, is 'constantly on the move'.
FreeBSD-STABLE to a lesser extent, even though it is also a development version.
[...]
Alternatively, for example for stable/14, you could get the suitable OpenZFS version from OpenZFS directly.

Thanks for the answer Erichans,

openzfs-2.3 and the switch from zfs-2.2 worked great on 14.3-STABLE; the raidz expansion feature is indeed supported and the expansion itself went smoothly.

The problem is booting from the zpool with the upgraded features from zfs 2.3, specifically the boot loader complains about raidz_expansion.

So ZFS itself works fine within the OS; the boot code probably needs to support some of the newly introduced features, or rather whitelist them.

This seems to be a problem with both 14.3-STABLE and 15.0.

I don't know what the implications of whitelisting a feature are, but I'm gonna take covacat's advice and compile gptzfsboot with the whitelisted feature, will keep you posted.
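
The plan, roughly (a sketch, assuming /usr/src matches the running system and has been through a recent buildworld; the disk name and partition index are placeholders):

Code:
# add "org.openzfs:raidz_expansion" to features_for_read in
# stand/libsa/zfs/zfsimpl.c, then rebuild the boot bits
cd /usr/src/stand
make obj && make && make install
# re-install the updated bootcode on the boot disk
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0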
 
I'm not sure whitelisting of the feature in the bootcode is a fully correct solution.

The raid expand code is very complicated and involves on-disk structures. Think about a reboot that happens in the middle of an expansion. Both the kernel and the bootcode need to handle that.

If the machine is not important or sufficiently backed up, and you don't actually use the raidexpand feature, then chances are that just ignoring this flag will bring the machine up.
 
Code:
diff --git a/stand/libsa/zfs/zfsimpl.c b/stand/libsa/zfs/zfsimpl.c
index f15d9b016068..7add9a9be509 100644
--- a/stand/libsa/zfs/zfsimpl.c
+++ b/stand/libsa/zfs/zfsimpl.c
@@ -127,6 +127,7 @@ static const char *features_for_read[] = {
        "org.illumos:skein",
        "org.open-zfs:large_blocks",
        "org.openzfs:blake3",
+       "org.openzfs:raidz_expansion",
        "org.zfsonlinux:large_dnode",
        NULL
 };

Why is openzfs spelled with a dash in one entry and without in the other?
 
I'm not sure whitelisting of the feature in the bootcode is a fully correct solution.

The raid expand code is very complicated and involves on-disk structures. Think about a reboot that happens in the middle of an expansion. Both the kernel and the bootcode need to handle that.

If the machine is not important or sufficiently backed up, and you don't actually use the raidexpand feature, then chances are that just ignoring this flag will bring the machine up.

I also think there must be a reason why only some are supported; however, I couldn't find more info about it so far. If you find anything, please share!


I would expect that any zpool created by a 15.0 installation, which comes with zfs-2.3 (well, 2.4 now :D), would have those features enabled by default:

From https://man.freebsd.org/cgi/man.cgi?zpool-create(8)

Code:
By default all supported features are enabled on the new pool.

Does that mean that 15.0 can't boot from the zpool it creates without explicitly disabling the feature?
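
If so, I guess one would have to disable it explicitly at creation time, something like this (a sketch; the pool name, layout and disks are placeholders):

Code:
# create the pool with raidz_expansion left disabled so the loader can read it
zpool create -o feature@raidz_expansion=disabled zroot raidz1 da0 da1 da2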
 
Does that mean that 15.0 can't boot from the zpool it creates without explicitly disabling the feature?

That's possible. Not everybody running -current runs ZFS, fewer create new pools regularly on existing systems and even fewer boot from them.
 
Does that mean that 15.0 can't boot from the zpool it creates without explicitly disabling the feature?
In my view, probably not; that is: it can boot from the ZFS pool it creates, however:
- upgraded and expanded raidz1 zpool
you have actually used this feature, so now it is active; that's also the reason for the concern cracauer@ mentioned when booting with a white-listed addition.
Edit: a supported feature flag can be set to disabled at pool creation time. In addition, when applying zpool-upgrade(8) to an existing ZFS pool, a previously unsupported feature flag can be prevented from becoming enabled by using zpool-features(7) compatibility feature sets.
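
As an illustration (a sketch; `zroot` is a placeholder, and the compatibility set named is just an example of the files under /usr/share/zfs/compatibility.d):

Code:
# restrict which feature flags the pool is allowed to enable
zpool set compatibility=openzfs-2.2 zroot
# a subsequent upgrade then only enables features permitted by that set
zpool upgrade zroot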

ZFS feature flags are tri-state: disabled, enabled, or active. I don't know if current has this one enabled by default or not[1]. When only enabled, a transition to disabled should be possible[2]. When active it stays that way[3], unless you happen to have set a checkpoint on the pool before it became active. That checkpoint could then be used to rewind the pool using zpool-import(8); note, however, that this means losing all changes made to the pool after the checkpoint was set.
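
For completeness, the checkpoint/rewind mechanics look roughly like this (a sketch; `zroot` is a placeholder, and for a root pool the rewind would have to be done from a rescue environment):

Code:
# before upgrading/enabling features: set a checkpoint to rewind to
zpool checkpoint zroot
# later, to discard everything done after the checkpoint:
zpool export zroot
zpool import --rewind-to-checkpoint zroot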

I've detailed some further info on feature flags here.

___
[1] Edit:
They seem to be enabled by default when supported; zpool-create(8) (my emphasis):
Rich (BB code):
DESCRIPTION
   [...]
       By default all supported features are enabled on the new pool.  The -d
       option and the -o compatibility property (e.g. -o compatibility=2020)
       can be used to restrict the features that are enabled, so that the pool
       can be imported on other releases of ZFS.

Furthermore, zpool-features(7) (my emphasis):
Rich (BB code):
   Feature states
       Features can be in one of three states:
   [...]
       disabled  This feature's on-disk format changes have not been made and
                 will not be made unless an administrator moves the feature to
                 the enabled state.  Features cannot be disabled once they
                 have been enabled.

[2],[3] Edit:
That should have been: when active, it might be possible to return to enabled; zpool-features(7) (my emphasis):
Rich (BB code):
   Feature states
       Features can be in one of three states:
   [...]
       enabled   An administrator has marked this feature as enabled on the
                 pool, but the feature's on-disk format changes have not been
                 made yet.  The pool can still be imported by software that
                 does not support this feature, but changes may be made to the
                 on-disk format at any time which will move the feature to the
                 active state.  Some features may support returning to the
                 enabled state after becoming active.  See feature-specific
                 documentation for details.
To be clear, for obvious reasons org.openzfs:raidz_expansion does not fall into that category. One documented case where a transition from active back to enabled is possible, from zpool-features(7):
Code:
       zstd_compress
               GUID                  org.freebsd:zstd_compress
               DEPENDENCIES          extensible_dataset
               READ-ONLY COMPATIBLE  no
[...]
               When the zstd feature is set to enabled, the administrator can
               turn on zstd compression of any dataset using zfs set
               compress=zstd dset (see zfs-set(8)).  This feature becomes
               active once a compress property has been set to zstd, and will
               return to being enabled once all filesystems that have ever had
               their compress property set to zstd are destroyed.
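
In concrete terms, that zstd_compress life cycle looks like this (a sketch; the dataset name is a placeholder):

Code:
# setting compress=zstd on any dataset moves zstd_compress from enabled to active
zfs set compress=zstd zroot/data
# the feature only returns to enabled once every dataset that ever used zstd is destroyed
zfs destroy -r zroot/data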
 
I don't know if current has this one enabled by default or not.
According to zpool-create(8), they are:
Code:
DESCRIPTION
[...]
       By default all supported features are enabled on the new pool.  The -d
       option and the -o compatibility property (e.g. -o compatibility=2020)
       can be used to restrict the features that are enabled, so that the pool
       can be imported on other releases of ZFS.

So if this is correct, even in the latest commit there's no support for booting from a zpool with the `org.openzfs:raidz_expansion` feature enabled.
Rich (BB code):
     raidz_expansion
             GUID                  org.openzfs:raidz_expansion
             DEPENDENCIES          none
             READ-ONLY COMPATIBLE  no
[...]
So, I kind of expect the boot loader from your 15.0-PRERELEASE to support it. Can't test it though.
zpool-features(7):
Code:
DESCRIPTION
   [...]
Read-only compatibility
       Some features may make on-disk format changes that do not interfere
       with other software's ability to read from the pool.  These features
       are referred to as "read-only compatible".  If all unsupported features
       on a pool are read-only compatible, the pool can be imported in read-
       only mode by setting the readonly property during import (see
       zpool-import(8) for details on importing pools).
Given the zpool-features(7) description quoted above, the "READ-ONLY COMPATIBLE no" doesn't bode well for being able to boot when:
  1. using stable/14, which comes with a gptzfsboot(8) matched to its releng/14.3 - OpenZFS version 2.2.7
  2. using -CURRENT without an adapted ZFS-on-root bootable infrastructure
If the latter can be confirmed not to be bootable, then that should be reported on the freebsd-current ML or a PR should be submitted.
 
Thanks for opening that review.

It seems that OpenZFS 2.3 - release:
Code:
Supported Platforms:

    Linux kernels 4.18 - 6.12,
    FreeBSD releases 13.3, 14.0 - 14.2.
would probably have benefitted from an extra asterisk: when used in a root pool with org.openzfs:raidz_expansion enabled or active, the OS should provide a bootable infrastructure capable of booting from it.

There are far more feature flags that have READ-ONLY COMPATIBLE no, and some might also affect the ZFS on-disk format in an irreversible way, so I'm also wondering whether there are additional similar problems around the corner when trying to boot from a pool that has those feature flags active or enabled.
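
A quick way to see where an existing pool stands in that respect (a sketch; `zroot` is a placeholder):

Code:
# list all feature flags and their state (disabled/enabled/active) on the pool
zpool get all zroot | grep feature@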

D52174#1191961:
Code:
So this is an installer bug? We should fail here?
I think* blocking it in the installer does not help when you have an older ZFS-on-root with a RAIDZx pool as root and then progress to a new ZFS version that incorporates org.openzfs:raidz_expansion; that will affect the root pool when a zpool-upgrade(8) enables the feature. So, indeed, as mentioned there, more smarts seem required.

___
* I don't have a Phabricator account.
 
Well, if you have a freshly created zpool with an unwanted feature set at creation time there is nothing to roll back to.
If you've created the pool 'as is', then I agree; this is the typical use.

At zpool-upgrade time, you have the option of not enabling supported feature flags that had previously been unsupported; at pool creation time you also have the option of not enabling supported feature flags. By setting a checkpoint while a feature flag is still disabled, you have something to rewind to, although this is very easy to overlook or skip in a standard install; I assume you have to 'escape' to a manual installation. In general, using a checkpoint for anything other than a quick return to a recent previous state seems to me a (big) exception. Such an exception may be a necessary rewind to a state where your pool was certain to be free of virus infections; that might provide a quicker clean restart, where unaffected lost data could be restored from backups.
 
Following up on the question Mer asked above (which went unanswered), may I ask a stupid background question?

This whole problem only arises when one installs an OpenZFS package which is very up-to-date, meaning one is running the root file system using kernel code that is not -RELEASE?
 
Following up on the question Mer asked above (which went unanswered), may I ask a stupid background question?

This whole problem only arises when one installs an OpenZFS package which is very up-to-date, meaning one is running the root file system using kernel code that is not -RELEASE?

15-PRERELEASE has kernel code that supports the feature but the bootloader does not.
 
Following up on the question Mer asked above (which went unanswered), may I ask a stupid background question?

This whole problem only arises when one installs an OpenZFS package which is very up-to-date, meaning one is running the root file system using kernel code that is not -RELEASE?

I upgraded ZFS on 14.3-STABLE by installing the openzfs-2.3 pkg and enabling it.

I then activated the raidz_expansion feature by upgrading the zpool and expanding the raidz array.

From there on, 14.3-STABLE was unable to boot.

So I booted the 15.0-PRERELEASE memstick and manually installed 15.0-PRERELEASE on a new zfs volume as an attempt to fix the issue.

In my case, I had also initially installed 14.3-STABLE manually, from a Linux installation on the same zpool, and then installed the boot code from the 14.3-STABLE memstick.
 