Hello,
I made a raidz of 6x500GB disks. As far as I know, raidz1 is very much like a RAID5 configuration in terms of capacity and performance, and it is supposed to tolerate the failure of one disk.
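It was created more or less like this (just a sketch of the command, not my exact shell history, matching the vdevs shown in the zpool status below):
Code:
# sketch only: one 6-device raidz1 vdev
zpool create zroot raidz gpt/disk ada0 ada2 ada3 ada4 ada5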
What is really strange about this array is the reported pool size. Check this out:
Code:
root@datacore:/root # zpool status
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 1h15m with 0 errors on Wed Mar 6 16:42:42 2013
config:

        NAME          STATE     READ WRITE CKSUM
        zroot         ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            gpt/disk  ONLINE       0     0     0
            ada0      ONLINE       0     0     0
            ada2      ONLINE       0     0     0
            ada3      ONLINE       0     0     0
            ada4      ONLINE       0     0     0
            ada5      ONLINE       0     0     0

errors: No known data errors
6 disks of 500GB each. The gpt/disk vdev looks like that because the boot loader is on that disk: it has a GPT table with a very small boot partition, and the rest of the disk is labelled gpt/disk for ZFS.
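For context, that disk (ada1) would have been laid out roughly like this; the partition size and the "disk" label are read back from the diskinfo output below, so treat this as a reconstruction rather than my actual history:
Code:
# sketch of the boot disk layout
gpart create -s gpt ada1
gpart add -t freebsd-boot -s 94 ada1       # tiny boot partition (ada1p1)
gpart add -t freebsd-zfs -l disk ada1      # rest of the disk (ada1p2), shows up as gpt/disk
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1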
The hard disks:
Code:
root@datacore:/root #
root@datacore:/root # camcontrol devlist
<Hitachi HUA721050KLA330 GK6OA74A> at scbus3 target 0 lun 0 (pass0,ada0)
<ST3500320AS SD15> at scbus3 target 1 lun 0 (pass1,ada1)
<Hitachi HUA721050KLA330 GK6OA74A> at scbus4 target 0 lun 0 (pass2,ada2)
<Hitachi HDS721050DLE630 MS1OA600> at scbus4 target 1 lun 0 (pass3,ada3)
<Hitachi HUA721050KLA330 GK6OA74A> at scbus5 target 0 lun 0 (ada4,pass4)
<Hitachi HUA721050KLA330 GK6OA74A> at scbus6 target 0 lun 0 (pass5,ada5)
root@datacore:/root #
root@datacore:/root # diskinfo /dev/ada*
/dev/ada0    512  500107862016  976773168     0      0  969021  16  63
/dev/ada1    512  500107862016  976773168     0      0  969021  16  63
/dev/ada1p1  512         48128         94     0  17408       0  16  63
/dev/ada1p2  512  500107779584  976773007     0  65536  969020  16  63
/dev/ada2    512  500107862016  976773168     0      0  969021  16  63
/dev/ada3    512  500107862016  976773168  4096      0  969021  16  63
/dev/ada4    512  500107862016  976773168     0      0  969021  16  63
/dev/ada5    512  500107862016  976773168     0      0  969021  16  63
root@datacore:/root #
And finally, the zpool size:
Code:
root@datacore:/root # zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
zroot  2.72T   280G  2.45T  10%  1.00x  ONLINE  -
root@datacore:/root #
root@datacore:/root # zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
zroot             238G  1.99T  2.57G  /
zroot/DataCore   9.91G  1.99T  9.91G  -
zroot/DataCore2   221G  1.99T   221G  -
zroot/swap       4.13G  1.99T  9.65M  -
root@datacore:/root #
Well, I am not very good with the math, but with one-disk fault tolerance I would expect roughly 3TB - 0.5TB = 2.5TB (and that does not even account for whatever extra metadata is needed to recover a failed disk).
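Just to show where my numbers come from (plain bc arithmetic, assuming raidz1 gives up one disk's worth of space to parity):
Code:
echo "6 * 500" | bc         # raw capacity of all six disks, in GB -> 3000
echo "(6 - 1) * 500" | bc   # what I expected to be usable, in GB  -> 2500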
And my question is: how can that pool be 2.72TB, and can it really sustain one disk failure without losing data or damaging the array?
Thank you.