Solved: Why do I see diskid/DISK-WD instead of my labels for a pool?

I created a raidz pool using GPT labels. The geoms /dev/ada1pX, /dev/ada2pX, and /dev/ada3pX, as well as /dev/gpt/$label, all existed before the reboot.

It was like this:

Code:
  pool: hddPool_x3_RAIDZ
 state: ONLINE
config:

        NAME                               STATE     READ WRITE CKSUM
        hddPool_x3_RAIDZ                   ONLINE       0     0     0
          raidz1-0                         ONLINE       0     0     0
            gpt/hdd_for_x3_RAIDZ_1         ONLINE       0     0     0
            gpt/hdd_for_x3_RAIDZ_2         ONLINE       0     0     0
            gpt/hdd_for_x3_RAIDZ_3         ONLINE       0     0     0

However, after reboot I see the following:

Code:
  pool: hddPool_x3_RAIDZ
 state: ONLINE
config:

        NAME                               STATE     READ WRITE CKSUM
        hddPool_x3_RAIDZ                   ONLINE       0     0     0
          raidz1-0                         ONLINE       0     0     0
            gpt/hdd_for_x3_RAIDZ_1         ONLINE       0     0     0
            diskid/DISK-WD-WX22D51DKT1Dp8  ONLINE       0     0     0
            diskid/DISK-WD-WX22D51JJAERp6  ONLINE       0     0     0

I don't see any partition geoms for ada2 and ada3:

Code:
# ls /dev/ada
ada1     ada1p11  ada1p4   ada1p7   ada2
ada1p1   ada1p2   ada1p5   ada1p8   ada3
ada1p10  ada1p3   ada1p6   ada1p9

# gpart list ada2
gpart: Class 'PART' does not have an instance named 'ada2'.

Why did this happen after a reboot, how can I recover, and how can I prevent it in the future?

FreeBSD 13.2
 
Did you modify your "loader.conf"?
I.e., the kern.geom.label.* tunables: gptid / gpt / disk_ident.
Have a look at "zpool history hddPool_x3_RAIDZ" to see which labels you used when creating the zpool.
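
For reference, a quick way to see which label providers are currently enabled is to dump the kern.geom.label sysctl subtree (a minimal check; the exact set of knobs depends on your kernel):

Code:
# sysctl kern.geom.label
# sysctl kern.geom.label.gptid.enable kern.geom.label.disk_ident.enable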
 
I just had this happen to me with a root pool while installing 13.2-RELEASE. I tried zpool import -d ... <poolname> as suggested here, but it didn't work.
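
For anyone trying the same thing, the usual shape of that trick is sketched below (pool name taken from this thread; the -d path assumes the labels live under /dev/gpt, and a mounted root pool would have to be handled from a live/rescue environment):

Code:
# zpool export hddPool_x3_RAIDZ
# zpool import -d /dev/gpt hddPool_x3_RAIDZ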

Interestingly, the ZFS install script disables kern.geom.label.disk_ident.enable and kern.geom.label.gptid.enable. However, fiddling with those knobs didn't solve the problem for me.
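
For reference, the two lines in question look like this in /boot/loader.conf (a sketch of what the installer writes; verify against your own file):

Code:
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"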

Maybe it had something to do with leaving the USB install stick plugged in when I first rebooted after the install? Maybe I should have waited until the partitioning step to drop into a shell, instead of dropping to one at the start? I haven't been able to reproduce it. I found this interesting:
 
It happened on one of the reboots. I don't use any scripts; I manually created the partitions with labels and then the pools. The label-related sysctls are not affected. Obviously, a pool export/import will not restore the geoms. Looks like a bug to me, a kind of partition table corruption, but the cause is unknown. gpart recover will not work:

Code:
# gpart recover ada2
gpart: arg0 'ada2': Invalid argument
# gpart recover ada3
gpart: arg0 'ada3': Invalid argument
# gpart recover ada1
ada1 recovering is not needed

Should I understand that the only solution is to start from scratch?
 
I had similar problems. The device nodes would disappear when I imported the pool, and gpart show foo0 would error out with Invalid argument, just like it does for you. The nodes would reappear if I exported the pool. Strangely, fdisk(8) reported the correct slices and partitions.

I didn't use any scripts either. Those sysctls are supposed to prevent the use of the DISK-WD-... names you see.
 
Yeah, I suspect this is a bug, but it's very hard for me to reproduce. I hope someone who creates ZFS pools and datasets more often than I do will come up with a way to repro it.

I wound up rebuilding the pool from scratch. I managed to fix the path under vdev_tree in the ZFS label using the zpool import -d trick, but the machine still failed to boot, with the error: Mounting from zfs:zroot/ROOT/default failed with error 6.
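
For anyone digging into the same spot, zdb(8) can dump the vdev labels to show which path each child has recorded; the device node below is just an example from this thread:

Code:
# zdb -l /dev/gpt/hdd_for_x3_RAIDZ_1 | grep path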
 