[ZFS] new pool not using all disk space

I had a raidz pool consisting of 5 disks of 2TB each. The pool was created with napp-it on OpenIndiana. I chose to replace this OS with FreeBSD and imported the pool.

Code:
# df -k
Filesystem  1024-blocks        Used      Avail Capacity  Mounted on
Data8g       3469929308          48 3469929260     0%    /Data8g
Data8g/bcpk  7651738208  4181808947 3469929260    55%    /Data8g/bcpk

I bought 2 more 2TB drives, deleted the pool, and created a new raidz2 pool with 7 drives.

Code:
# df -k
Filesystem  1024-blocks        Used      Avail Capacity  Mounted on
data         8887430544         288 8887430256     0%    /data

7651738208 blocks with 4 data drives (the old 5-disk raidz) should have become 9564672760 blocks with 5 data drives (the new 7-disk raidz2). However, I only see 8887430544 blocks, which works out to roughly 1.7TB per data drive instead of the 1.8TB I expected.
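
As a rough back-of-the-envelope check (all figures are the 1K-block counts from df above):

Code:
old raidz, 4 data drives:   7651738208 / 4 = 1912934552 blocks per data drive
new raidz2, 5 data drives:  1912934552 * 5 = 9564672760 blocks expected
observed:                   8887430544 blocks
shortfall:                  9564672760 - 8887430544 = 677242216 blocks (~92 GiB per disk across 7 disks)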

I used the following commands to build the pool for each drive:

Code:
# gpart create -s gpt da1
# gpart add -t freebsd-zfs -l disk01 -b 2048 -a 4k da1
# gnop create -S 4096 /dev/gpt/disk01

I then created the pool, exported it, removed the .nop devices and imported the pool.
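
For reference, the sequence was roughly the following (pool name and labels are examples; disk02 through disk07 were prepared the same way as disk01):

Code:
# zpool create data raidz2 gpt/disk01.nop gpt/disk02.nop gpt/disk03.nop gpt/disk04.nop gpt/disk05.nop gpt/disk06.nop gpt/disk07.nop
# zpool export data
# gnop destroy /dev/gpt/disk01.nop     (and likewise for disk02 .. disk07)
# zpool import -d /dev/gpt data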

I am trying to figure out why I am missing roughly 100GB per drive.
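
In case it helps to narrow this down, these are the kinds of commands I am comparing the sizes with (da1 and the pool name data as examples):

Code:
# diskinfo -v da1      (raw disk size in bytes and sectors)
# gpart show -l da1    (partition start offset and size)
# zpool list data      (raw pool size as ZFS sees it, including parity)
# zfs list data        (usable space after raidz2 parity)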
 
I do not know, but I would guess not, as I did not specify anything in particular (I will try to find out what napp-it uses as the default). I do know that the ashift was 9, if that helps.
The drives are Samsung 2TB (mix of 203WI and 204UI).
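
For reference, the ashift can be read back from the pool configuration with something like this (pool name as an example):

Code:
# zdb -C data | grep ashift      (9 = 512-byte sectors, 12 = 4K sectors)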

On a separate note, is there any issue with taking a 512-byte sector drive and using it with a 4096-byte sector size?
 
My understanding is that you will lose some space using the larger sector size from the start. If you haven't done anything yet, you can try re-creating the pool with 512-byte sectors just to see what the result would be. The ashift=9 does mean you were using 512-byte sectors before. Are both of those drive models Advanced Format drives? If so, your procedure would be the way to go. Chances are sticking with 4096 is the way to go anyway, because with drives getting larger and larger you'll be using it in the future for sure.
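
If you want to compare, a quick sketch (only while the pool is still empty, names as in your earlier posts) would be to skip the gnop step so ZFS falls back to the drives' reported 512-byte sectors:

Code:
# zpool destroy data
# zpool create data raidz2 gpt/disk01 gpt/disk02 gpt/disk03 gpt/disk04 gpt/disk05 gpt/disk06 gpt/disk07
# zfs list data                  (compare AVAIL against the 4K/gnop version)
# zdb -C data | grep ashift      (should now report 9)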
 