ZFS and other FS labeling schemes

Just out of curiosity, does ZFS in FreeBSD leave some buffer space at the beginning and end of devices, even when given whole devices? I would imagine a little slop could go a long way toward using drives with very slightly different sizes in the same raidz vdev. I've also noticed that when creating a zpool on drives that used to have an ext3 partition, giving ZFS the whole device doesn't seem to clear the old ext label. FreeBSD still picks up on it; should I be worried about this? I'm not at the moment, since I don't care if the label info gets overwritten at some point and I'm never going to accidentally mount the drive as ext3.
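One way to see what FreeBSD is still detecting on such a drive (ada1 here is just an example device name):

    glabel status        # lists any filesystem labels GEOM has detected, including leftover ext2fs ones
    file -s /dev/ada1    # reads the raw device; should still identify the old ext3 superblock if it's intact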
 
ZFS uses a modified GPT on the disk to store metadata about which vdev(s) and pool(s) the disk belongs to. The zpool import command reads this metadata to determine the pool layout, even if the disk's device node name/number changes.
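For example, you can inspect that on-disk metadata yourself; "tank" and ada1 below are just placeholder names:

    zdb -l /dev/ada1     # dump the ZFS labels on the disk: pool name, GUIDs, vdev tree
    zpool import         # scan all disks' labels and list importable pools, regardless of device numbering
    zpool import tank    # import the pool by name using that on-disk metadata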

ZFSv15 (or thereabouts) added a feature where the vdev metadata leaves about 1 MB of slack space at the end of the device. This allows drives that are the same "size" but have slightly different numbers of sectors/blocks to be used in the same vdev. Prior to this, disks with different absolute sector counts could not be added to the same vdev.
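As a rough illustration (ada1/ada2 are placeholder device names), two drives sold at the same nominal capacity can report slightly different sector counts, and that small difference is what the slack space is there to absorb:

    diskinfo -v ada1 ada2 | grep mediasize
    # compare the "mediasize in sectors" lines for the two drives;
    # a small mismatch no longer prevents them from joining the same vdev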

Ideally, you would delete any existing MBR/GPT on the disk before adding it to a ZFS vdev. gpart(8)'s "destroy" command (with the -F flag to force it) will do this; see the sketch below.
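A minimal sketch of what that looks like when building a raidz vdev from whole disks (pool and device names are placeholders):

    gpart destroy -F ada1    # force-destroy any existing MBR/GPT scheme, even if partitions are still defined
    gpart destroy -F ada2
    gpart destroy -F ada3
    zpool create tank raidz ada1 ada2 ada3    # then hand the whole devices to ZFS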
 