ZFS Restoring GPT Labels in ZFS Pools

I have a 3-way mirrored ZFS pool configured for backup purposes, alongside a ZFS-on-root pool that my system utilizes. All hard drives in the setup have GPT labels. Recently, I encountered an issue where ada0p1 was missing its label when I ran the zpool status command. Fortunately, I managed to recover the label using the following commands:
Code:
# zpool export tank
# gpart modify -i 1 -l wdc0 ada0  # (Not sure if this step was necessary)
# zpool import -d /dev/gpt tank

After executing these commands, the label has been successfully restored, and the output from zpool status is as follows:
Code:
# zpool status
  pool: tank
 state: ONLINE
  scan: resilvered 1.32M in 00:00:01 with 0 errors on Mon Sep 30 18:00:05 2024
remove: Removal of vdev 2 copied 1.63G in 0h0m, completed on Sun Sep 15 20:54:16 2024
        5.53K memory used for removed device mappings
config:

        NAME              STATE     READ WRITE CKSUM
        tank              ONLINE       0     0     0
          mirror-0        ONLINE       0     0     0
            gpt/wdc0      ONLINE       0     0     0
            gpt/toshiba0  ONLINE       0     0     0
            gpt/hgst0     ONLINE       0     0     0
As you can see, ada0p1 is now recognized as gpt/wdc0.

I would like to perform a similar operation for my zroot pool, which currently shows the following status:
Code:
# zpool status zroot
  pool: zroot
 state: ONLINE
config:

        NAME          STATE     READ WRITE CKSUM
        zroot         ONLINE       0     0     0
          nda0p3.eli  ONLINE       0     0     0

The partition nda0p3 has a label of gpt/zfs0, but it's missing from the /dev/gpt directory:

Code:
# gpart show -l nda0
=>       40  500118112  nda0  GPT  (238G)
         40     204800     1  efiboot0   (100M)
     204840   67108864     2  swap0      (32G)
   67313704  432804448     3  zfs0       (206G)

My question is: how can I import the zroot pool with the -d parameter while the system is running? I would like to see the gpt/zfs0 label instead of the partition name in the output of zpool status. For reference, I am using GELI for the zroot pool.

Here's what's currently listed in my /dev/gpt/ directory:
Code:
# ls /dev/gpt/
efiboot0 hgst0    toshiba0 wdc0

Thank you for your help!
 
I'm speculating here because I don't GELI my stuff at home.
Your zroot pool is on nda0p3.eli, so I think you're in a sticky position.
What you would have needed was to do the GELI stuff on /dev/gpt/zfs0, then use zfs0.eli to create the zpool.
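Roughly something like this, I think (untested sketch on my part, since I don't GELI anything; the flags are only illustrative):
Code:
# sketch only -- flags are illustrative, adjust to your own geli settings
geli init -b -s 4096 -l 256 /dev/gpt/zfs0   # init GELI on the label, not on nda0p3
geli attach /dev/gpt/zfs0                   # gives /dev/gpt/zfs0.eli
zpool create zroot /dev/gpt/zfs0.eli        # build the pool on the labelled .eli device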
So how to recover from that? I'm not sure.
I've done similar without GELI, but with a mirrored boot device. That lets you detach one, reattach it using the GPT label, wait for the resilver, then detach the other and reattach it using the GPT label.
You have a single device for your zroot, so maybe adding a device and mirroring would let you do what you want; something like the sketch below.
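Without GELI in the picture, the dance looks roughly like this (device and label names here are made up):
Code:
# example only -- device and label names are placeholders
zpool attach zroot ada0p3 gpt/zfs1    # attach the new disk by its GPT label, wait for resilver
zpool detach zroot ada0p3             # detach the device-named side
zpool attach zroot gpt/zfs1 gpt/zfs0  # reattach the original disk by its GPT label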

As for what you have listed in /dev/gpt: that's because, by using nda0p3.eli, the gpt/zfs0 label is "withered".
 
Thank you for your comment! I want to clarify that I did indeed use the command # geli init -bgd -s4096 -l256 gpt/zfs0 when initializing GELI. Additionally, I have a GELI backup file stored on my hard drives.
Code:
% file -s /mnt/tank/gpt_zfs0.eli
/mnt/tank/gpt_zfs0.eli: regular file, no read permission
Interestingly, my mirrored pool did not require a resilver in this case.

I suspect the issue arose because the GPT labels were not used while mounting the pools. When I imported the zpool by including the /dev/gpt directory, it correctly recognized the GPT label.
 
I attached one more drive to my zroot pool and made it a 2-way mirror, to test whether I could detach one at a time and import again with the GPT label, but it didn't last after a reboot. :(

I'll try without GELI.
 
I don't use GELI now, and the zroot devices are now using GPT labels.
Code:
% zpool status zroot
  pool: zroot
 state: ONLINE
  scan: resilvered 79.7G in 00:10:38 with 0 errors on Thu Oct  3 13:26:50 2024
config:

        NAME          STATE     READ WRITE CKSUM
        zroot         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            gpt/zfs1  ONLINE       0     0     0
            gpt/zfs0  ONLINE       0     0     0

errors: No known data errors
 
Interesting. I think GELI is generating the labels/devices, and by default they are "partition".eli. Not sure if there is a way to change that.
 
I wonder, if you used /dev/gpt/whateverlabel when you do geli init/attach, whether it would give you a whateverlabel.eli that you could use. You could probably test that with a small USB drive or something. I'm speculating because I don't "GELI" anything.
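A throwaway test could look roughly like this, I imagine (using a memory disk instead of a USB stick; the label name is made up):
Code:
# throwaway test -- md device and label name are made up
mdconfig -a -t swap -s 64m                # creates md0 (or the next free md unit)
gpart create -s gpt md0
gpart add -t freebsd-zfs -l gelitest md0
geli init -s 4096 /dev/gpt/gelitest       # prompts for a passphrase
geli attach /dev/gpt/gelitest             # should show up as /dev/gpt/gelitest.eli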
 
I wonder, if you used /dev/gpt/whateverlabel when you do geli init/attach, whether it would give you a whateverlabel.eli that you could use.
Yes, in the installer's shell environment I initialized geli with the GPT label and it gave me a label.eli device, but it didn't last long. After installation, on system launch, zroot was using the disk name instead of GPT labels.
You could probably test that with a small USB drive or something. I'm speculating because I don't "GELI" anything.
I don't use GELI anymore, so I don't want to test. I think it's a downside not to be able to re-import zroot with -d /dev/whatever, because when I ran it for the non-system tank zpool with the -d /dev/gpt parameter, the GPT labels were used.

I switched to diskids instead of GPT labels.
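(For anyone wanting to do the same on a non-root pool, it's the same export/import dance as with GPT labels, just pointed at /dev/diskid; sketch below, assuming diskid labels are enabled, which is the default.)
Code:
# sketch -- same export/import dance, pointed at the diskid device nodes
sysctl kern.geom.label.disk_ident.enable   # 1 = diskid labels available (default)
zpool export tank
zpool import -d /dev/diskid tank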

Everything is fine on my side now.

Code:
% zpool status zroot
  pool: zroot
 state: ONLINE
config:

        NAME                                    STATE     READ WRITE CKSUM
        zroot                                   ONLINE       0     0     0
          mirror-0                              ONLINE       0     0     0
            diskid/DISK-AA000000034556002022p3  ONLINE       0     0     0
            diskid/DISK-1642312009005194p3      ONLINE       0     0     0

errors: No known data errors

% zpool status tank
  pool: tank
 state: ONLINE
  scan: resilvered 1.32M in 00:00:01 with 0 errors on Mon Sep 30 18:00:05 2024
remove: Removal of vdev 2 copied 1.63G in 0h0m, completed on Sun Sep 15 20:54:16 2024
        5.53K memory used for removed device mappings
config:

        NAME              STATE     READ WRITE CKSUM
        tank              ONLINE       0     0     0
          mirror-0        ONLINE       0     0     0
            gpt/wdc0      ONLINE       0     0     0
            gpt/toshiba0  ONLINE       0     0     0
            gpt/hgst0     ONLINE       0     0     0

errors: No known data errors
 
I attached one more drive to my zroot pool and made it a 2-way mirror, to test whether I could detach one at a time and import again with the GPT label, but it didn't last after a reboot. :(

I'll try without GELI.
Interesting. I think GELI is generating the labels/devices, and by default they are "partition".eli. Not sure if there is a way to change that.
I wonder, if you used /dev/gpt/whateverlabel when you do geli init/attach, whether it would give you a whateverlabel.eli that you could use. You could probably test that with a small USB drive or something. I'm speculating because I don't "GELI" anything.
Even when GPT labels are used to initialize the geli(8) providers and the ZFS pool is created based on their <gpt>.eli labels, using the -g or -b options will attach the geli(8) providers based on their device names, not their GPT labels, and the ZFS pool will be imported based on those device names.
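This can be checked after a reboot; geli status lists the name each provider is attached under:
Code:
# geli status shows the attached provider names; with -b/-g set, the name will be
# the device name (e.g. nda0p3.eli), not the GPT label
geli status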

Perhaps the geli kernel module (geom_eli.ko) is hard-coded to use device names. I couldn't find where the device attach happens in the source code (I looked mostly in src/lib/geom/eli/geom_eli.c).

I don't use GELI anymore so I don't want to test. I think it's a downside to not be able to re-import zroot with -d /dev/whatever
The only option I can think of, without tampering with the source code (assuming one knows which source code exactly that is), to get a geli(8)-encrypted root-on-ZFS pool to display device GPT labels, is to set the pool path on every system reboot, i.e.:

/etc/rc.local
Code:
zpool  set  path=/dev/gpt/zfs0.eli  zroot  nda0p3.eli
For thread readers not familiar with zpool set path=..., a permanent setting of the zpool path does not survive a system reboot with geli(8) encrypted providers. See also Thread from-device-name-to-gpt-name.87147.

On the other hand, geli(8) non-root-on-ZFS pools can be geli-attached by their device GPT labels, and ultimately the pool is imported by those labels with settings in /etc/rc.conf. The device attach happens in this scenario via the /etc/rc.d/geli script.

Here is an example with a mirrored pool of three disks; the geli(8) providers are initialized with the same passphrase. The prerequisite is that the providers are not initialized with the -b or -g option. If they are, the flags can be removed with geli configure -B [-G] <provider>.

/etc/rc.conf
Code:
geli_groups="zfsmirror0"
geli_zfsmirror0_devices="gpt/wdc0 gpt/toshiba0 gpt/hgst0"
This will ask for a passphrase mid-boot.

If entering the passphrase is inconvenient, those encrypted providers can be alternatively initialized and attached with key file(s). The key file(s) can be on the encrypted root-on-ZFS file system (see /etc/defaults/rc.conf for geli example use and comments).
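For example, roughly (provider name and key file path are placeholders; see geli(8) for -P/-K and -p/-k, and the commented geli examples in /etc/defaults/rc.conf):
Code:
# placeholders only -- one provider shown, repeat for the others
geli init -P -K /root/geli/wdc0.key -s 4096 /dev/gpt/wdc0   # key file, no passphrase

# /etc/rc.conf
geli_devices="gpt/wdc0"
geli_default_flags="-p -k /root/geli/wdc0.key"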
 
Interesting discussion. Isn't there a loader.conf setting if you are booting from GELI encryption? If so, perhaps that's what grabs the partition-based devnode and causes the rest to wither.
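For reference, I believe the installer puts this in /boot/loader.conf for a GELI root-on-ZFS install (whether this is what pins the device name, I don't know):
Code:
geom_eli_load="YES"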
 