HDD-manufacturer induced ZFS limitations

Hi:

PROBLEM

I came across a very weird situation because HDD manufacturers count disk space in decimal (metric) units rather than in multiples of 1024 (2^10). This creates a mismatch between the advertised capacity and the capacity the OS and filesystems report.

Secondly, each manufacturer ships a slightly different total byte count for what they claim is the same-sized HDD.

Because of these two issues, replacing a failed HDD with one from another manufacturer becomes a nuisance: both are sold as the same size, but they differ by a few bytes, and if the new disk is the smaller one, ZFS will not accept it as a replacement.

SOLUTION

A way around this is to size the space given to ZFS at the outset according to the manufacturers' metric (power-of-10) measurement rather than powers of 1024 (2^10), i.e. create a fixed, slightly undersized partition instead of handing ZFS the whole raw disk.
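As a sketch of what I mean (device, label and pool names here are only examples, and I'm assuming GPT with 512-byte sectors):

  gpart create -s gpt ada0
  # 1 TB metric = 1,000,000,000,000 bytes = 1,953,125,000 sectors of 512 bytes;
  # -a 1m keeps the partition aligned to 1 MiB boundaries (gpart may round
  # the size down slightly to keep that alignment, which only adds margin)
  gpart add -t freebsd-zfs -a 1m -s 1953125000 -l disk0 ada0
  # build the pool on the label rather than on the raw disk
  zpool create tank gpt/disk0

Any replacement disk then only needs to be big enough to hold the same metric-sized partition, regardless of its exact byte count.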

BUT, ANOTHER PROBLEM

However, I ran into another odd situation with the wonderful bsdinstaller shipped with FreeBSD 10-BETA2**, which has an option to install root on ZFS. The bsdinstaller script works flawlessly as it is, but it does not address the disk-size mismatch described above. Any workaround would be appreciated!

It would be a nice feature if the new bsdinstaller in FreeBSD 10 let the user first create such a fixed, metric-sized partition before installing root on ZFS. Thanks!

/z

**I posted this to get some opinions and advice from expert FreeBSDers here on the topic though I know that this is not the forum for CURRENT branches. ;-)
 
Metric/binary disparities can be a serious "gotcha" if you haven't accounted for them, but they are really easy to account for. You don't need to, and shouldn't, create an arbitrarily sized ZFS partition at the beginning of the disk. The easiest way to account for metric/binary differences is to check that the disk you're buying is indeed large enough by checking its real byte count.
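For instance (ada0 is just a placeholder device name), the exact byte count is easy to read off once the drive is attached:

  # per-disk details; the "mediasize in bytes" line is the real capacity
  diskinfo -v ada0
  # or list every disk at once and compare the Mediasize fields
  geom disk list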

The recommended partitioning procedure is to align the partition to the nearest 1 MiB boundary on the disk (for performance reasons; this can be achieved with the -a flag to gpart(8)), and to size the partition so that a little empty space is left at the end of the disk.
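Something along these lines, with a made-up device name, would follow that procedure; the explicit -s is chosen a little smaller than the drive's real capacity so some space stays unused at the end:

  gpart create -s gpt ada1
  # -a 1m aligns the partition to 1 MiB; pick -s slightly below the drive's
  # real byte count (a 2 TB-class drive in this example) to leave slack at the end
  gpart add -t freebsd-zfs -a 1m -s 1862g -l disk1 ada1
  # verify the layout and the free space remaining at the end
  gpart show ada1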

If you're only accounting for slightly different block counts between drives, it should be enough to leave a few MB of free space at the end of the disk, as @Toast says. However, the difference between 1 TiB (1024^4 bytes) and 1 metric TB (1000^4 bytes) is 99,511,627,776 bytes, which is close to 92 GiB or 100 GB. Plan accordingly.
 
Savagedlight said:
Metric/binary disparities can be a serious "gotcha" if you haven't accounted for them, but they are really easy to account for. You don't need to, and shouldn't, create an arbitrarily sized ZFS partition at the beginning of the disk. The easiest way to account for metric/binary differences is to check that the disk you're buying is indeed large enough by checking its real byte count.

The recommended partitioning procedure is to align the partition to the nearest 1 MiB boundary on the disk (for performance reasons; this can be achieved with the -a flag to gpart(8)), and to size the partition so that a little empty space is left at the end of the disk.

If you're only accounting for slightly different block counts between drives, it should be enough to leave a few MB of free space at the end of the disk, as @Toast says. However, the difference between 1 TiB (1024^4 bytes) and 1 metric TB (1000^4 bytes) is 99,511,627,776 bytes, which is close to 92 GiB or 100 GB. Plan accordingly.

It is practically impossible to check the byte size of an HDD before buying it these days, as production-grade HDDs are ordered online. Shipping drives back and forth to the vendor merely to compare byte counts would be a tedious and slow logistical exercise.

Therefore, after that bad experience I started doing what I described above: I give ZFS a partition at the beginning of the disk sized in metric TB (1000^4 bytes) rather than the full disk. It has been working wonders.
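The payoff shows up at replacement time. As a sketch, with example pool, label and device names, a drive from a different vendor just gets the same metric-sized partition and slots straight in:

  # partition the new drive exactly like the old ones (1 TB metric, 512-byte sectors)
  gpart create -s gpt ada2
  gpart add -t freebsd-zfs -a 1m -s 1953125000 -l disk2 ada2
  # swap the failed member for the new, identically sized partition
  zpool replace tank gpt/disk1 gpt/disk2
  zpool status tank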

However, the new bsdinstaller that came with FreeBSD 10-BETA2, which can install root on ZFS in a mirror or mirrored stripe, with encryption if needed, does not provide an option to create such a metric-sized (1000^4) partition; instead it uses the entire disk, i.e. the full binary (1024^4-style) capacity that the HDD ships with.

The latter is what I would like to overcome with some input from forum members. Thanks!
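In the meantime, a possible manual workaround (untested against the BETA2 installer, and assuming you can drop to a shell rather than use the guided ZFS option) is to do the partitioning by hand with metric-sized partitions and create the mirror pool yourself:

  # per disk (ada0/ada1 and the boot0/zfs0 labels are just examples)
  gpart create -s gpt ada0
  gpart add -t freebsd-boot -s 512k -l boot0 ada0
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
  # 1 TB metric = 1,953,125,000 sectors of 512 bytes
  gpart add -t freebsd-zfs -a 1m -s 1953125000 -l zfs0 ada0
  # ...repeat for ada1 with boot1/zfs1...

  # mirrored root pool on the metric-sized partitions
  zpool create -o altroot=/mnt zroot mirror gpt/zfs0 gpt/zfs1

The remaining root-on-ZFS steps (setting the bootfs property, creating datasets, loader.conf entries and extracting the distribution sets) would then follow the usual manual procedure.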
 
Later versions of ZFS are supposed to leave "some" unused space at the end of a drive to allow for small variations in nominal drive size. This would only happen if the pool was created on the later version, though. (I don't know how much space is left open, and have so far been unable to verify it. Pointers welcome.)
 