I want to make a raidz2 array from twelve 6TB HDDs, but I see strange free space behavior with raidz2:
Can someone explain why I have such a big difference with the 6TB raidz2? Is there documentation on how to determine how much space ZFS will reserve? Google does not help much: I only find notes about 1/64 (about 1.5%) of raw space, but even a plain stripe shows a 3% (2T) difference, not 1.5%.
Code:
admin@server:/usr/home/admin# zpool destroy data
admin@server:/usr/home/admin# zpool create data raidz /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk10 /dev/gpt/disk11 /dev/gpt/disk2 /dev/gpt/disk3 /dev/gpt/disk4 /dev/gpt/disk5 /dev/gpt/disk6 /dev/gpt/disk7 /dev/gpt/disk8 /dev/gpt/disk9
admin@server:/usr/home/admin# zpool list data
NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
data 65T 448K 65,0T 0% - 0% 1.00x ONLINE -
admin@server:/usr/home/admin# zfs list data
NAME USED AVAIL REFER MOUNTPOINT
data 427K 56,0T 171K /data
65T / 12 × 11 = 59.58T expected; ZFS shows 56.0T avail, a 6% difference.
Code:
admin@server:/usr/home/admin# zpool destroy data
admin@server:/usr/home/admin# zpool create data raidz2 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk10 /dev/gpt/disk11 /dev/gpt/disk2 /dev/gpt/disk3 /dev/gpt/disk4 /dev/gpt/disk5 /dev/gpt/disk6 /dev/gpt/disk7 /dev/gpt/disk8 /dev/gpt/disk9
admin@server:/usr/home/admin# zpool list data
NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
data 65T 672K 65,0T 0% - 0% 1.00x ONLINE -
admin@server:/usr/home/admin# zfs list data
NAME USED AVAIL REFER MOUNTPOINT
data 548K 48,0T 219K /data
65T / 12 × 10 = 54.17T expected; ZFS shows 48.0T avail, a difference of more than 11%.
Code:
admin@server:/usr/home/admin# zpool destroy data
admin@server:/usr/home/admin# zpool create data raidz3 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk10 /dev/gpt/disk11 /dev/gpt/disk2 /dev/gpt/disk3 /dev/gpt/disk4 /dev/gpt/disk5 /dev/gpt/disk6 /dev/gpt/disk7 /dev/gpt/disk8 /dev/gpt/disk9
admin@server:/usr/home/admin# zpool list data
NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
data 65T 896K 65,0T 0% - 0% 1.00x ONLINE -
admin@server:/usr/home/admin# zfs list data
NAME USED AVAIL REFER MOUNTPOINT
data 698K 45,8T 279K /data
65T / 12 × 9 = 48.75T expected; ZFS shows 45.8T avail, a 6% difference.
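The expectations above can be reproduced with a short script. Note this is only the naive parity-only estimate (raw pool size scaled by the fraction of data disks); it deliberately ignores whatever internal reservations and raidz allocation padding ZFS applies, which is exactly the gap being asked about:

```python
# Naive usable-space estimate for raidz: raw size times the fraction
# of non-parity disks. The avail figures are the ones zfs list reported.
RAW_TB = 65.0   # zpool list SIZE for the 12-disk pool
DISKS = 12

def naive_expected(raw_tb, disks, parity):
    """Raw capacity scaled by the data-disk fraction (no ZFS overhead)."""
    return raw_tb / disks * (disks - parity)

def percent_gap(expected_tb, avail_tb):
    """How far the reported AVAIL falls short of the naive estimate."""
    return (expected_tb - avail_tb) / expected_tb * 100

for name, parity, avail in [("raidz1", 1, 56.0),
                            ("raidz2", 2, 48.0),
                            ("raidz3", 3, 45.8)]:
    exp = naive_expected(RAW_TB, DISKS, parity)
    print(f"{name}: expected {exp:.2f}T, avail {avail}T, "
          f"gap {percent_gap(exp, avail):.1f}%")
```

Running it shows raidz1 and raidz3 both losing about 6% against the naive figure, while raidz2 loses over 11%.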
I have another system with thirteen 4TB HDDs in raidz2:
Code:
admin@server:/usr/home/admin# zpool list data
NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
data 47.2T 41.3T 5.91T - - 87% 1.00x ONLINE -
admin@server:/usr/home/admin# zfs list data
NAME USED AVAIL REFER MOUNTPOINT
data 33.9T 3.65T 374K /data
47.2T / 13 × 11 = 39.94T expected; ZFS shows 37.55T (33.9T used + 3.65T avail), a 6% difference.
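The same naive check for this pool, taking usable space as USED + AVAIL from zfs list (an assumption, since this pool is already 87% full rather than empty):

```python
raw, disks, parity = 47.2, 13, 2
expected = raw / disks * (disks - parity)   # naive data-disk fraction
usable = 33.9 + 3.65                        # USED + AVAIL from zfs list
gap = (expected - usable) / expected * 100
print(f"expected {expected:.2f}T, usable {usable:.2f}T, gap {gap:.1f}%")
# prints: expected 39.94T, usable 37.55T, gap 6.0%
```

So a 13-disk raidz2 loses the same ~6% as the raidz1 and raidz3 layouts above, not the ~11% the 12-disk raidz2 loses.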
Every pool has ashift=12.
For comparison, here is a plain stripe pool across the same twelve disks:
Code:
admin@server:/usr/home/admin# zpool create data /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk10 /dev/gpt/disk11 /dev/gpt/disk2 /dev/gpt/disk3 /dev/gpt/disk4 /dev/gpt/disk5 /dev/gpt/disk6 /dev/gpt/disk7 /dev/gpt/disk8 /dev/gpt/disk9
admin@server:/usr/home/admin# zpool list data
NAME SIZE ALLOC FREE FRAG EXPANDSZ CAP DEDUP HEALTH ALTROOT
data 65,2T 296K 65,2T 0% - 0% 1.00x ONLINE -
admin@server:/usr/home/admin# zfs list data
NAME USED AVAIL REFER MOUNTPOINT
data 240K 63,2T 96K /data
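The 1/64-of-raw figure from the notes I found can be checked directly against this stripe pool's numbers (taking the reservation as SIZE minus AVAIL, which is itself an assumption):

```python
raw = 65.2            # zpool list SIZE
avail = 63.2          # zfs list AVAIL
slop_1_64 = raw / 64  # the "1/64 of raw space" rule of thumb
gap = raw - avail     # what actually disappeared
print(f"1/64 of raw = {slop_1_64:.2f}T, observed gap = {gap:.2f}T "
      f"({gap / raw * 100:.1f}% of raw)")
```

The observed 2T gap is roughly double what the 1/64 rule predicts (about 1.02T), even with no parity involved at all.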