glabel

Hi,

My understanding is that when using glabel the label is written to the last sector of the disk.

Code:
glabel label zdisk1 /dev/ada1
glabel label zdisk2 /dev/ada2
glabel label zdisk3 /dev/ada3

Assuming now that I create a ZFS pool from those disks with zpool create tank raidz label/zdisk1 label/zdisk2 label/zdisk3, what will happen when the disks fill up to the last sector? Will the label be overwritten and the pool corrupted?
 
GEOM devices do not make the metadata sector available. /dev/label/zdisk1 has one less sector available than /dev/ada1. That one sector is the metadata sector. Where people get into trouble is by creating a label or other GEOM metadata, then continuing to use the parent device.
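
A quick way to see this for yourself (device names as in the question above) is diskinfo(8), which should report /dev/label/zdisk1 as one sector smaller than /dev/ada1:

Code:
# diskinfo -v /dev/ada1
# diskinfo -v /dev/label/zdisk1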

That said, it is not necessary to use labels on ZFS disks. ZFS has its own metadata, and will be able to identify the disks of a pool if they are moved around.
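
As a rough illustration (pool and device names from the question above), the pool can be exported and re-imported and ZFS will rediscover its members from its own on-disk labels; zdb(8) can show those labels directly:

Code:
# zpool export tank
# zpool import tank
# zdb -l /dev/label/zdisk1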
 
There's a potential problem if ZFS forgets that the vdev components should use the label names and starts using the raw device names instead. However, I think the pool metadata will still use the sizes originally recorded from the labeled devices, so there is no danger of overwriting the label metadata.
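
You can check which names the pool is currently using with zpool status (pool name from the question above); if it lists ada1, ada2 and ada3 instead of label/zdisk1 and friends, ZFS has fallen back to the raw devices:

Code:
# zpool status tank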
 
Personally I don't like the fact that both the original device and the label are accessible, with one being a sector smaller than the other. ZFS stores vdev labels at the start and end of the disk, so it's possible it could think the metadata is corrupt if it tries to use the raw device. It shouldn't cause any actual problems, but I prefer to use GPT labels.

Code:
# gpart create -s gpt ada0
# gpart add -t freebsd-zfs -l zdisk1 ada0

You still end up with two devices, /dev/ada0p1 and /dev/gpt/zdisk1, but they're both identical, and GPT is a standard whereas glabel is just a FreeBSD thing.
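
To illustrate (same disk and label as above), gpart show -l displays the label, and diskinfo should report the same size for both device nodes:

Code:
# gpart show -l ada0
# diskinfo /dev/ada0p1 /dev/gpt/zdisk1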
 
usdmatt said:
Code:
# gpart create -s gpt ada0
# gpart add -t freebsd-zfs -l zdisk1 ada0

You still end up with two devices, /dev/ada0p1 and /dev/gpt/zdisk1, but they're both identical, and GPT is a standard whereas glabel is just a FreeBSD thing.

Yes, but I have read in many places that ZFS will not use the HDD write cache when it is given partitions. Is this true? This seems to me like a good reason to use raw devices.

https://forums.freebsd.org/showthread.php?t=19921
 
I am lost here. I have read in many places that using GPT for ZFS disks disables the hard disk write cache. However, I have just found a post stating that this is the case only on Solaris, not on FreeBSD: https://forums.freebsd.org/showthread.php?t=19921

Also, the ZFSTuningGuide (https://wiki.freebsd.org/ZFSTuningGuide) states: "the caveat about only giving ZFS full devices is a 'solarism' that doesn't apply to FreeBSD. On Solaris write caches are disabled on drives if partitions are handed to ZFS. On FreeBSD this isn't the case."

I have seen so many different approaches for ZFS systems: GPT-partitioned disks and raw devices.

What should I do? :)
 
There is no problem with using GPT labels or whole disks. The problem of varying disk sizes may be smaller than in the past, with vendors standardizing on 1,000,000,000 bytes as a gigabyte. Later versions of ZFS disk labels are supposed to leave some space at the end of a disk, although I have not found out how much, or how this works in practice when a replacement disk has a slightly different capacity. It could be tested using partitions of slightly different sizes.
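
A rough sketch of such a test (on spare disks; the sizes, labels and pool name below are made up for the example): create two freebsd-zfs partitions that differ slightly in size, then try attaching the smaller one to a pool built on the larger:

Code:
# gpart create -s gpt ada1
# gpart create -s gpt ada2
# gpart add -t freebsd-zfs -s 930G -l test1 ada1
# gpart add -t freebsd-zfs -s 929G -l test2 ada2
# zpool create testpool gpt/test1
# zpool attach testpool gpt/test1 gpt/test2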

Recommendation: if you want to use partitions and labels, use GPT and GPT labels. If you want to use whole disks, that works also, and later versions of ZFS are supposed to be able to deal with slightly different disk sizes.
 
sakoula said:
Yes, but I have read in many places that ZFS will not use the HDD write cache when it is given partitions. Is this true? This seems to me like a good reason to use raw devices.

https://forums.freebsd.org/showthread.php?t=19921

FreeBSD enables the caches on drives no matter how they are used. This includes using partitions with ZFS. This has been the case since the initial import of ZFS into FreeBSD way back in the FreeBSD 7.x and ZFSv6 days.

It's Solaris, and only Solaris, that has the "won't enable drive caches if using partitions with ZFS" issue.
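
If you want to confirm that on your own system (assuming ATA disks on the ada(4) driver), the cache state can be inspected regardless of how the disk is partitioned:

Code:
# sysctl kern.cam.ada.write_cache
# camcontrol identify ada0 | grep "write cache"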
 