ZFS on 14.1 suddenly not importable

Yesterday, my FreeBSD 14.1 again failed to boot from ZFS (on a single NVMe SSD partition).
After loading the system, mounting root failed with error 2 (whatever that may be).
After some annoying attempts to access the system either from a different disk or even from Linux, I found that the pool is importable on 14.1, but only with readonly=on.

Is there a way to repair the (probably minor) problem?
If there is no better solution than "destroy and recreate", then I will have to return to UFS, as this is the second time this year.

Here are the outputs (from Linux and FreeBSD):
Code:
root@wbk1:~# zpool import
   pool: zroot
     id: 13989299616962548941
  state: UNAVAIL
status: The pool uses the following feature(s) not supported on this system:
    com.klarasystems:vdev_zaps_v2
action: The pool cannot be imported. Access the pool on a system that supports
    the required feature(s), or recreate the pool from backup.
 config:

    zroot       UNAVAIL  unsupported feature(s)
      nvme0n1   ONLINE
Code:
root@wbk1:~ # zpool import
   pool: zroot
     id: 13989299616962548941
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        zroot       ONLINE
          nda0p4    ONLINE
root@wbk1:~ # zpool import -f zroot
cannot import 'zroot': no such pool or dataset
        Destroy and re-create the pool from
        a backup source.
root@wbk1:~ #
 
You can try with the numeric id. That said, you must provide an altroot for mounting that pool, e.g.:
zpool import -f -R /mnt 13989299616962548941

"Mount failed with error 2" is ENOENT ("no such file or directory"); at mountroot time that usually means the root file system could not be found, for example because zfs.ko was not loaded. Check whether zfs.ko is loaded at startup: look in /boot/loader.conf for the line zfs_load="YES".
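If it helps, here is a minimal sequence combining both checks (the altroot path /mnt is just an example):
Code:
# import by numeric id under an altroot (id taken from the 'zpool import' listing)
zpool import -f -R /mnt 13989299616962548941
# confirm the ZFS module is configured to load at boot
grep zfs_load /boot/loader.conf
# confirm the module is loaded right now
kldstat | grep zfs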
 
How are you booting, UEFI or BIOS? I ran into something similar; the fix was to boot from live media and update the bootcode to the latest version. For BIOS that's gptzfsboot; I'm not sure of the correct one for EFI booting.
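For reference, a rough sketch of both update paths; the device name nda0, the partition indices, and the ESP layout are assumptions, so check gpart show first:
Code:
# BIOS/legacy boot: rewrite the protective MBR and gptzfsboot
# (assumes the freebsd-boot partition is index 1 on nda0)
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 nda0
# UEFI boot: copy the current loader onto the EFI system partition
# (assumes the ESP is nda0p1 and uses the removable-media path)
mount_msdosfs /dev/nda0p1 /mnt
cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
umount /mnt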
 
After some annoying attempts to access the system either from a different disk or even from Linux, I found that the pool is importable on 14.1, but only with readonly=on.

Here are the outputs (from Linux and FreeBSD):
[...]
>>> EDIT: this is on FreeBSD, I assume
Code:
root@wbk1:~ # zpool import
   pool: zroot
     id: 13989299616962548941
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        zroot       ONLINE
          nda0p4    ONLINE
root@wbk1:~ # zpool import -f zroot
cannot import 'zroot': no such pool or dataset
        Destroy and re-create the pool from
        a backup source.
root@wbk1:~ #
Do you happen to have the output from mounting the pool read-only on FreeBSD?

P.S.
# zpool import -f zroot
I'd be hesitant to use -f, especially when it is not even suggested; as far as I can tell you're using your root pool (zroot) without any redundancy.
 
I will have to return to UFS, as this is the second time this year.
Check the health of the disk: sysutils/smartmontools
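For example (the NVMe device name is an assumption; smartctl --scan lists what is actually present):
Code:
# list devices smartmontools can see
smartctl --scan
# full SMART/health report for the NVMe drive
smartctl -a /dev/nvme0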

I had a similar problem with Root-on-ZFS on a Samsung SSD 840 Pro (Power_On_Hours relatively low) and a WD HD (Power_On_Hours high). The pool frequently became inaccessible due to data corruption.

At the time I didn't have another disk to move the data from the Samsung SSD to, so it was reinstalled on UFS; one day the disk died (before that, a replacement disk had been put into service).

The WD HD is still working on UFS, but with no critical data on it.
 
status: The pool uses the following feature(s) not supported on this system: com.klarasystems:vdev_zaps_v2
To me this indicates "ran zpool upgrade and forgot to update bootcode". I could be wrong, but if the pool can be imported read-only, what is the output of "zpool history"? That will tell us definitively whether zpool upgrade was run.
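A minimal way to check, assuming the read-only import works as described (the altroot /mnt is just an example):
Code:
# import read-only under an altroot so nothing is written to the pool
zpool import -f -o readonly=on -R /mnt zroot
# show the most recent administrative actions; look for 'zpool upgrade'
zpool history zroot | tail -n 20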
 
To me this indicates "ran zpool upgrade and forgot to update bootcode".
What I understand from the quote below is that that was on Linux ...
(Personally, I'd be hesitant to do a zpool upgrade from Linux on a pool, especially on one without redundancy.)
Here are the outputs (from Linux and FreeBSD):
Code:
root@wbk1:~# zpool import
   pool: zroot
     id: 13989299616962548941
  state: UNAVAIL
status: The pool uses the following feature(s) not supported on this system:
    com.klarasystems:vdev_zaps_v2
action: The pool cannot be imported. Access the pool on a system that supports
    the required feature(s), or recreate the pool from backup.
 config:

    zroot       UNAVAIL  unsupported feature(s)
      nvme0n1   ONLINE
 
What I understand from the quote below is that that was on Linux ...
(Personally, I'd be hesitant to do a zpool upgrade from Linux on a pool, especially on one without redundancy.)
Well, "which is which" :) I'm not assuming which system the output is from. I ran into a similar situation once, I think a different feature, but similar output, just on FreeBSD. Mine was stupidity on my part, mirror boot pool, I did the update of boot code on 1 of the mirror, forgot to on the other, zpool upgrade the mirror, systemboot decided to use the boot device I forgot to update.
 
You can try with the numeric id. That said, you must provide an altroot for mounting that pool, e.g.:
zpool import -f -R /mnt 13989299616962548941

"Mount failed with error 2" is ENOENT ("no such file or directory"); at mountroot time that usually means the root file system could not be found, for example because zfs.ko was not loaded. Check whether zfs.ko is loaded at startup: look in /boot/loader.conf for the line zfs_load="YES".
I tried the numeric id. No difference.
The line zfs_load="YES" is present in /boot/loader.conf.
How are you booting, UEFI or BIOS? I ran into something similar; the fix was to boot from live media and update the bootcode to the latest version. For BIOS that's gptzfsboot; I'm not sure of the correct one for EFI booting.
I am booting UEFI. In the meantime I updated loader.efi; this made no difference. I am quite sure that the problem is inside the pool, because I can import it (with -f -R /altroot), but only with -o readonly=on.

The good thing is: no data loss.

I just mentioned my Linux tries because in January this saved my pool data. As I don't use Linux much, I haven't updated its ZFS since then.

Here is the tail of zpool history. As you can see, the last entries are more than 5 days old, and the system has been successfully booted in between.
Code:
2024-01-06.14:48:07 zfs create zroot/bhyve
2024-01-06.14:48:59 zfs set recordsize=64k zroot/bhyve
2024-01-06.14:49:31 zfs create zroot/bhyve/.templates
2024-01-06.14:50:43 zfs create zroot/bhyve/9front
2024-01-06.14:52:09 zfs receive -F zroot/bhyve
2024-01-06.14:52:38 zfs receive -F zroot/bhyve/.templates
2024-01-06.14:53:56 zfs receive -F zroot/bhyve/9front
2024-01-06.14:25:16 zfs receive -F zroot/usr/src
2024-01-06.14:29:27 zfs receive -F zroot/usr/ports
2024-02-01.18:27:32 zfs set -u mountpoint=none zroot/tmp
2024-02-23.20:29:42 zfs create zroot/bhyve/plan9
2024-03-22.13:59:57 zfs create zroot/bhyve/Debian
2024-04-14.16:03:47 zfs snapshot -r zroot/bhyve/plan9@d033f6b6
2024-04-14.16:03:47 zfs clone zroot/bhyve/plan9@d033f6b6 zroot/bhyve/386plan9
2024-07-20.20:45:22 zfs create zroot/bhyve/freebsd15
2024-07-21.18:36:18 zfs destroy zroot/ROOT/14.0-RELEASE-p1_2024-01-06_133817
2024-07-21.18:37:07 zfs destroy zroot/ROOT/14.0-RELEASE-p4_2024-03-22_210204
 
[...] because I can import it (with -f -R /altroot), but only with -o readonly=on.
What is the output from this?
What is the exact version of FreeBSD you're running when issuing this import command?

I suggest you try exporting it explicitly with zpool-export(8) after having successfully imported the pool. Then try importing it again without readonly.
 
What is the output from this?
What is the exact version of FreeBSD you're running when issuing this import command?

I suggest you try exporting it explicitly with zpool-export(8) after having successfully imported the pool. Then try importing it again without readonly.
I append several zpool actions on this pool with their outcomes. Presumably, exporting a read-only imported pool does not change it, so the outcome of the last command is expected.
Code:
root@wbk1:~ # zpool import -R /althome
   pool: zroot
     id: 13989299616962548941
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

    zroot       ONLINE
      nda0p4    ONLINE
root@wbk1:~ # zpool import -R /althome zroot
cannot import 'zroot': pool was previously in use from another system.
Last accessed by wbk1 (hostid=0) at Fri Jul 26 21:59:31 2024
The pool can be imported, use 'zpool import -f' to import the pool.
root@wbk1:~ # zpool import -f -R /althome zroot
cannot import 'zroot': no such pool or dataset
    Destroy and re-create the pool from
    a backup source.
root@wbk1:~ # zpool import -f -o readonly=on -R /althome zroot
root@wbk1:~ # zpool export
missing pool argument
usage:
    export [-af] <pool> ...
root@wbk1:~ # zpool export zroot
root@wbk1:~ # zpool import -R /althome zroot
cannot import 'zroot': pool was previously in use from another system.
Last accessed by wbk1 (hostid=0) at Fri Jul 26 21:59:31 2024
The pool can be imported, use 'zpool import -f' to import the pool.
root@wbk1:~ #
Accessing the files after importing read-only works fine. For instance, I needed the amdgpu modules from the other system to get X11 working on my alternative system.

Code:
root@wbk1:~ # uname -a
FreeBSD wbk1 14.1-RELEASE FreeBSD 14.1-RELEASE releng/14.1-n267679-10e31f0946d8 GENERIC amd64
root@wbk1:~ #
 
Perhaps somebody is able to help when I add the following info on my pool:
Code:
root@wbk1:~ # zdb -l /dev/nda0p4
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'zroot'
    state: 0
    txg: 1360294
    pool_guid: 13989299616962548941
    errata: 0
    hostname: 'wbk1'
    top_guid: 10004561385745542149
    guid: 10004561385745542149
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 10004561385745542149
        path: '/dev/nda0p4'
        whole_disk: 1
        metaslab_array: 256
        metaslab_shift: 31
        ashift: 12
        asize: 253634281472
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 0 1 2 3

Is there a way to find out why the pool can be imported readonly but not rw?
 
For me, this thread may be closed. I capitulated, destroyed the pool and recreated it from backup.
Even destroying the pool was not easy, because zpool-destroy insisted that it did not even exist.

I hope that the new setup will last longer than the 7 months since last time.
 
For future reference:

If the disk is accessed directly from multiple systems, make sure they all have ZFS upgraded to the same version, with the same supported features, before upgrading a pool. Otherwise enable a proper compatibility mode (`man 7 zpool-features`, section 'Compatibility feature sets'), enable features one by one, or enable them from the system that has no features beyond what all the other systems support. Setting the compatibility property on the pool avoids accidentally upgrading beyond what the desired systems support.
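A sketch of the compatibility approach; the feature-set name openzfs-2.1-freebsd is one of the files shipped in /usr/share/zfs/compatibility.d, and the pool/vdev names here are assumptions:
Code:
# pin an existing pool to a named feature set; a later 'zpool upgrade'
# will then only enable features listed in that compatibility file
zpool set compatibility=openzfs-2.1-freebsd zroot
# or create a new pool already pinned to a feature set
zpool create -o compatibility=openzfs-2.1-freebsd tank /dev/nda1p1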

If you messed up and activated a feature that is unsupported by a system, some features can be returned to a non-active state after disabling their use and removing/replacing any data structures that use them; this is not available for all features, and some can never be undone once enabled. Some features are only needed for read+write access, so you 'may' still have read-only access if you mess up and are lucky.
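To see where each feature stands on a pool (pool name assumed), something like:
Code:
# 'enabled' features do not yet block import by older software;
# 'active' features have on-disk structures and do block it,
# unless the feature is read-only compatible
zpool get all zroot | grep feature@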

Once messed up, you fix such a compatibility issue by upgrading the ZFS code on the system that cannot read the pool, by rolling back the pool to a previously created checkpoint, or by restoring from backup. On FreeBSD you can either upgrade to a newer version of FreeBSD that includes that code, or install sysutils/openzfs (it is most reliable to build your own from ports, as the kernel module sysutils/openzfs-kmod is involved) once it is updated enough to support the needed features.
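A sketch of the ports/packages route (the port names are real; check the port's pkg-message for the exact loader knob on your release):
Code:
# install the newer OpenZFS userland and kernel module
pkg install openzfs openzfs-kmod
# load the ports module instead of the base zfs.ko at boot,
# per the openzfs-kmod instructions
echo 'openzfs_load="YES"' >> /boot/loader.conf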

If you boot from a ZFS pool, your boot loader will also need to be updated when pool features are added that the old boot loader did not know about. Using a compatible boot loader does not upgrade the compatibility of the operating system's kernel module used while the system is running; they are two separate pieces and should be thought about as such.

Keep in mind if you want to roll back to an older OS/ZFS version (boot environments, load a backup, etc.) that the older system needs compatibility with the pool's upgraded features. Don't activate new features if you think you will need to use the pool with any ZFS code that is not yet compatible.

If you know your next move may break something, create a checkpoint; see `man 8 zpool-checkpoint` for details on how to use it. A checkpoint marks the pool's state at a point in time in a way that can be undone. It's kind of like a snapshot, but it covers the entire pool and includes its data structures, such as those that get altered by zpool-upgrade. Once you are satisfied that things work as expected, you will likely want to remove the checkpoint, as all changes to the pool's data after the checkpoint are stored as additions (so free space is negatively impacted) and several zpool subcommands aren't available while a checkpoint exists. Rewinding to a checkpoint undoes 'all' new changes to the pool, so new data will be lost just as new settings, pool features, etc. will be lost; make sure you have a backup of data altered after the checkpoint if you will still need it. Only one checkpoint can exist on a pool at a time, so to make a new one you have to discard the old one first.
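A short sketch of that workflow (pool name assumed):
Code:
# create a checkpoint before a risky operation such as 'zpool upgrade'
zpool checkpoint zroot
# ...perform the risky change and test the result...
# satisfied: discard the checkpoint to reclaim space
zpool checkpoint -d zroot
# not satisfied: export, then rewind to the checkpointed state
# (this discards ALL changes made after the checkpoint)
zpool export zroot
zpool import --rewind-to-checkpoint zroot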
 