Solved ZFS not respecting labels across reboots

I've got FreeBSD 10.1 installed on a ZFS array. I intended to use labels that reference the position of the disks in the chassis. When I built the array the labels worked, but after a reboot they reverted to gptid names. I've tried failing, replacing and removing disks, but ZFS insists on reverting to gptid.

I've formatted and labelled the disks as follows. The commands were cobbled together from various sources when I installed my 10.0 server last year, so I just copied them from the reference manual I wrote for myself:
Code:
atlas# gpart destroy -F /dev/ada1                               
ada1 destroyed
atlas#  gpart create -s gpt ada1                          
ada1 created
atlas# gpart add -s 100 -a 4k -t freebsd-boot -l boot0 ada1
ada1p1 added
atlas# gpart add -a 4k -t freebsd-zfs -l disk0 ada1             
ada1p2 added
atlas# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
bootcode written to ada1
atlas# gnop create -S 4096 /dev/gpt/disk0                       
atlas# glabel label R1D1 /dev/gpt/disk0.nop

Which I've then added to the array (in this case a replacement disk): atlas# zpool replace system /dev/gptid-bla label/R1D1. There's the usual resilver, but the end result is:
Code:
  NAME                                            STATE     READ WRITE CKSUM
  system                                          ONLINE       0     0     0
    raidz1-0                                      ONLINE       0     0     0
      label/R1D1                                  ONLINE       0     0     0
      gptid/91d29b30-6509-11e4-aad4-00259086be9a  ONLINE       0     0     0

And after a reboot:
Code:
  NAME                                            STATE     READ WRITE CKSUM
  system                                          ONLINE       0     0     0
    raidz1-0                                      ONLINE       0     0     0
      gptid/8d5ac7df-7d21-11e4-bf6a-00259086be9a  ONLINE       0     0     0
      gptid/91d29b30-6509-11e4-aad4-00259086be9a  ONLINE       0     0     0

I don't understand why it is doing this. For this pool it is just a cosmetic problem, but I'm about to set up a new storage array, and I'd like persistent labels there.

Can someone tell me what I'm doing wrong, and how to avoid it for the new array? I know it is possible, as a similar (evidently not identical) setup works like a charm on my older server.
 
Usually, this is fixed by the export/import trick. First, do a zpool export mypool. Then do a zpool import -d /dev/label mypool.
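For the pool in this thread that would be something along the lines of (assuming the pool is still named system and the glabels live under /dev/label):
Code:
# zpool export system
# zpool import -d /dev/label system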
 
Disable GPTIDs and UFSIDs in /boot/loader.conf:
Code:
kern.geom.label.gptid.enable="0"                # Disable the auto-generated GPT UUIDs for disks
kern.geom.label.ufsid.enable="0"                # Disable the auto-generated UFS UUIDs for filesystems

Then reboot to pick up the changes, and you'll stop having issues with GEOM labelling. :)
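If you want to double-check after the reboot, those loader tunables should be readable with sysctl(8), something like:
Code:
# sysctl kern.geom.label.gptid.enable
# sysctl kern.geom.label.ufsid.enable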
 

That is a neat trick. It reverts to the GPT labels set with gpart -l, though, not the glabels I've used. Even so, it is an improvement over the gptid names. Knowing this, I should've just set descriptive GPT labels. Something to keep in mind when I start swapping out disks.
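For the new array, I suppose that just means putting the chassis position straight into the GPT label when the ZFS partition is created, e.g. (hypothetical disk and label, following my earlier commands):
Code:
atlas# gpart add -a 4k -t freebsd-zfs -l R1D1 ada1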
 
Using glabel on top of GPT is pointless. GPT already has partition labels, so just use them.
 
Using glabel on top of GPT is pointless. GPT already has partition labels, so just use them.
I believe I initially used glabel because my root disks are partitioned with a separate /boot, so I got to label the disk rather than the partition. In hindsight that might not have been the cleanest approach.

Oh well. I've got human-readable labels and identifiable disks. That was the point.
I'll outgrow the glabels eventually. I'll stick to these GPT labels and just change the stickers on the housing rather than do the whole dance again.
 
Or, just remove the glabels completely, and then redo your GPT labels to make them the same as the old glabels.

IOW, since you only have one partition in use in the pool, just make that the "disk label" that tells you where in the system that disk physically is.
 
I had assumed that it would be unwise to go mess with the partitions, but if you could point me to a safe-(ish) way to do that, I'd be grateful.
 
# man gpart
Look at the modify section. You'd want to use something like (untested, going from memory, should probably do this while booted from a rescue CD or mfsBSD or similar):
Code:
# glabel destroy R1D1
# gpart modify -i 2 -l R1D1 ada1
Once you've removed all the glabels and modified all the GPT labels, then you can try to import the pool using the new names:
Code:
# zpool import -d /dev/gpt system
After that, everything should Just Work across reboots.
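Before the import you can sanity-check the new labels with something like (assuming ada1 as in the first post):
Code:
# gpart show -l ada1
# ls /dev/gpt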
 
Be careful. glabel(8) takes one block for metadata at the end of the device, so the providers the pool was built on are one block smaller than the raw partitions. ZFS does not keep some mysterious amount of data at the very end of a partition, but after removing the glabels those partitions will be one block different in size from what the pool was created on. Will it work? Maybe. Should it be done without a full backup? No.
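If you want to see that difference for yourself, diskinfo(8) should show the glabel provider being one sector smaller than the partition it sits on, e.g. (assuming the device nodes from the first post are still present):
Code:
# diskinfo -v /dev/gpt/disk0
# diskinfo -v /dev/label/R1D1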
 
Thank you for the warning. I'll leave things as is. It isn't critical data, but screwing this up would be a pain.
I'll fix the labels as I outgrow the disks. Easy enough to do.
 