ZFS Warning about leftover labels

Be careful when re-using disks that have had a ZFS pool on them. I used a disk that was part of a single-disk bootable ZFS pool, with a partition layout like this:

Code:
% gpart show ada0
=>       34  312581741  ada0  GPT  (149G)
         34       1024     1  freebsd-boot  (512K)
       1058       3038        - free -  (1.5M)
       4096   16777216     2  freebsd-swap  (8.0G)
   16781312  134217728     3  freebsd-ufs  (64G)
  150999040  161582735     4  freebsd-ufs  (77G)

The third partition was previously of type freebsd-zfs and contained a ZFS pool. I didn't think anything of it: I just recreated the partition as freebsd-ufs, ran newfs(8) to create a UFS filesystem on it, and installed a 10.1-RELEASE system on it.

For testing I then added a second disk to the system and tried to create a ZFS pool on it. Much to my surprise, the ada0p3 partition still contained ZFS labels that were detected on probe. I tried just about everything non-destructive to get rid of the labels, but the only thing that worked was zpool labelclear -f ada0p3. Needless to say, this destroyed the UFS filesystem on the partition. I was cautious enough to make a full backup before proceeding and restored it without problems.

The proper way to reuse the disk would have been to destroy the pool with zpool destroy and then run zpool labelclear -f, to be doubly sure, before recreating the partition with the new UFS filesystem.
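Concretely, the sequence would look something like this (the pool name tank and partition index 3 are placeholders for this example; gpart modify changes the partition type in place, though deleting and re-adding the partition works just as well):

```shell
# Destroy the pool while it is still importable.
zpool destroy tank

# Clear the leftover vdev labels from the partition, before
# anything else is written to it.
zpool labelclear -f /dev/ada0p3

# Only now change the partition type and create the new filesystem.
gpart modify -i 3 -t freebsd-ufs ada0
newfs -U /dev/ada0p3
```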
 
This is kind of a bummer for users with both UFS and ZFS partitions on a single disk. As you alluded to already, zpool labelclear -f destroys all types of label data, because the data all resides in the same space on the disk. This sometimes makes it impossible to wipe a disk and create a new pool on it without backing up first (which of course should be done anyway...). I wish this were mentioned in the zpool(8) man page. There is a past discussion on the mailing list that explains it in more detail, if anyone is interested.
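To make concrete why the label data and the filesystem data collide: ZFS writes four identical vdev labels of 256 KiB each, two at the very start of the device and two at the very end. The first label region overlaps the area where a UFS2 superblock (at offset 64 KiB) lives, which is why clearing the labels clobbers the filesystem; conversely, newfs(8) most likely only damaged the front labels, leaving the two trailing labels intact to be found on probe. A minimal sketch of the offsets (the constants come from the ZFS on-disk format; the helper name is mine):

```python
# Where ZFS places its four vdev labels on a device.
# Each label is 256 KiB; L0/L1 sit at the front, L2/L3 at the end.
LABEL_SIZE = 256 * 1024

def vdev_label_offsets(mediasize: int) -> list[int]:
    """Byte offsets of labels L0..L3 for a vdev of the given size."""
    # The trailing labels are aligned down to a 256 KiB boundary.
    psize = mediasize - (mediasize % LABEL_SIZE)
    return [0, LABEL_SIZE, psize - 2 * LABEL_SIZE, psize - LABEL_SIZE]

# The 64 GiB ada0p3 partition from the gpart output above:
offsets = vdev_label_offsets(64 * 1024**3)
```

Nothing short of overwriting all four of those regions makes the pool undetectable, which is exactly what zpool labelclear does.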
 
The problem is not limited to ZFS; it applies to any kind of metadata (gmirror(8), for example) that is written to pre-chosen areas of a disk or partition. The metadata is not accounted for anywhere in the partition tables; it just sits there as a completely disconnected piece of data until a device driver probes for it.
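For instance, gmirror(8) and similar GEOM classes keep their metadata in a single sector at the very end of the underlying provider, which a partition table says nothing about. A trivial sketch (function name is mine; the last-sector placement is documented in gmirror(8)):

```python
# GEOM classes such as gmirror store their metadata in the last
# sector of the provider they are configured on.
def geom_metadata_offset(mediasize: int, sectorsize: int = 512) -> int:
    """Byte offset of the GEOM metadata sector."""
    return mediasize - sectorsize

# An 8 GiB provider with 512-byte sectors:
offset = geom_metadata_offset(8 * 1024**3)
```

The supported way to remove that sector is gmirror clear on the provider, not guessing at offsets with dd(1).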

My solution to this problem would be annotations that could be attached to GPT partitions (MBR is a lost cause, no use trying to fix it). One such annotation could state, in some formal syntax, "this partition has ZFS metadata on it", and the annotations would be added and cleared at the appropriate times by the tools and device drivers that operate on the metadata. The problem is that GPT partitioning isn't designed to hold much extra information; these annotations wouldn't fit in the current GPT specification.
 
I forgot to mention that the way I verified the existence of the ZFS labels on the partition was zdb -l /dev/ada0p3. For other types of metadata you might not be so lucky as to have such handy diagnostic tools.
 