ZFS Pool showing incorrect size

Since receiving a couple of disks, I have now built what should become my final file server. I created a ZFS pool by running zpool create server raidz2 /dev/da{0..3}.eli /dev/ada{0..7}.eli. The pool was created, but df -h on /server only shows it as 80T. Each disk is 10 TB, which diskinfo reports as 9.1 TiB. With 12 disks in a raidz2 configuration, shouldn't I have at least 91 TiB of free space instead of 80? Am I correct, or am I missing something? zpool status shows no errors. If there is an error here, how do I debug it?
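
For anyone checking the same thing: as far as I understand, zpool list reports raw capacity including parity, while zfs list and df report usable space, so comparing the two (plus the ashift the vdevs were created with) narrows down where the space goes. Something like:

    zpool list server             # SIZE counts every sector, parity included
    zfs list server               # USED/AVAIL is what is actually writable
    zpool list -v server          # per-vdev breakdown
    zdb -C server | grep ashift   # sector-size exponent (12 means 4 KiB sectors)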

I also tried creating raidz and raidz3 pools with the same disks; those gave me 94T and 77T respectively.
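
For comparison, this is the back-of-the-envelope number I expected at each level, converting the 10 TB disks to TiB and subtracting parity only (no padding or metadata):

    awk 'BEGIN {
      tib = 10e12 / 2^40                  # one 10 TB disk is about 9.09 TiB
      for (p = 1; p <= 3; p++)
        printf "raidz%d: %.0f TiB expected\n", p, (12 - p) * tib
    }'

That prints roughly 100, 91 and 82 TiB, so the observed 94T, 80T and 77T all come in short, with raidz2 showing the biggest gap.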
 
Bobi B.: What is the recommended tool for checking this on ZFS? zfs list shows the same value. I followed the commands to create my encrypted pool from this blog post: https://www.daveeddy.com/2015/12/04/zfs-zpool-encryption-with-geli-on-freebsd/, and none of the commands showed anything out of the ordinary.
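
If the sector size matters here: the blog post uses geli init -s 4096, which makes each .eli provider a 4K-sector device (and should therefore give the pool ashift=12). This is how I verified mine, in case it is relevant:

    geli status                   # lists all attached .eli providers
    diskinfo -v /dev/da0.eli      # sectorsize should read 4096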

SirDice: Yes, I am aware, but it still doesn't add up. A bit of reading tells me there is considerable allocation overhead in raidz2, though. Could it be because I'm using an awkward number of disks? I recall reading a year back or so that it could be more efficient, space-wise, to use a power-of-two number of disks.
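
Digging further, the overhead seems to come from raidz padding: with 4 KiB sectors a 128 KiB record is 32 data sectors, and as I understand the allocator, every raidz allocation is padded up to a multiple of (parity + 1) sectors. A rough sketch of the math for my 12-disk raidz2 (my reading of the allocation rule, so take it with a grain of salt):

    awk 'BEGIN {
      width = 12; parity = 2
      data  = 131072 / 4096                 # 128 KiB record = 32 data sectors
      rows  = int(data / (width - parity))  # 3 full rows of 10 data + 2 parity
      rem   = data % (width - parity)       # 2 leftover data sectors
      alloc = rows * width + (rem ? rem + parity : 0)   # 40 sectors so far
      while (alloc % (parity + 1)) alloc++  # pad to a multiple of 3 -> 42
      printf "%.1f%% of raw space usable (ideal would be %.1f%%)\n",
             100 * data / alloc, 100 * (width - parity) / width
    }'

That works out to about 76.2% instead of the ideal 83.3%. Applied to 12 x 9.09 TiB of raw space that is roughly 83 TiB, and minus the small reserve ZFS keeps for itself, it lands close to the 80T that df reports.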
 
I recall reading a year back or so that it could be more efficient, space-wise, to use a power-of-two number of disks.
With a bigger number of disks the size difference gets smaller, yes. It matters whether you use 4 disks (2 for fault tolerance; 2 effectively usable) or 10 disks (2 for fault tolerance; 8 usable). It doesn't have to be a power of two; it's perfectly fine to have 5 or 7 disks, for example. If I recall correctly, the "magic" number of disks in a vdev is 6 (which isn't a power of two), but that has more to do with performance than size.
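
To put numbers on it, here is a quick sweep over raidz2 widths at ashift=12 with 128 KiB records, using the same padding rule sketched above (again, just my understanding of the allocator):

    awk 'BEGIN {
      parity = 2; data = 32                 # 128 KiB record at 4 KiB sectors
      for (width = 4; width <= 12; width++) {
        rows  = int(data / (width - parity)); rem = data % (width - parity)
        alloc = rows * width + (rem ? rem + parity : 0)
        while (alloc % (parity + 1)) alloc++
        printf "width %2d: %.1f%% usable (ideal %.1f%%)\n",
               width, 100 * data / alloc, 100 * (width - parity) / width
      }
    }'

Per this model a 6-wide raidz2 hits its ideal exactly (its 4 data disks divide the 32-sector record evenly), while widths 9 through 12 all land on the same 76.2%, so past a certain point extra disks stop buying you space efficiency at the default record size.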
 