tl;dr: Quite a few alternative ways of configuring a 10-disk file server using ZFS for data storage, some discussion of the advantages/disadvantages of each, and a request for help in deciding which alternative is better for the described situation.
Goal of this thread: Air my thoughts on this task and hopefully get other people's thoughts on the matter, with the ultimate goal of giving people who are going through this process one more source of well-reasoned arguments for/against the various alternatives in this situation.
I'm going to set up a new file server in the near future.
Drive bays: 10 (6 internal + 4 in a drive bay fitting in the 5.25" slots)
RAID controllers: On-board (ICH10, 6p s-ata) + HighPoint RocketRAID 2320 (PCI-Express 4x, 8p s-ata)
RAM: 8GB
I'm going for 10x 1.5TB drives, and am trying to find the 'best' balance between data redundancy and capacity.
Considered configuration alternatives
FreeBSD installed on data pool:
(A) RaidZ-2 of 5 drives + RaidZ-2 of 5 drives, 0 hotspares. 60% data capacity. Will be keeping a number of 'cold-spare' drives. The file server will be located in my home, so the time it takes to replace a drive with a 'cold spare' should be minimal.
(B) RaidZ-2 of 9 drives + 1 hot spare. 70% data capacity.
(C) RaidZ-2 of 6 drives + RaidZ-1 of 3 drives + 1 hot spare. 60% data capacity.
FreeBSD not installed on data pool:
(D) Install OS/applications/etc on a 2-disk mirror, adding 7 drives to RaidZ-2, with 1 hotspare for the RaidZ-2 vdev. (data pool gets 50% data capacity)
(E) Install OS on a mirror of 2 USB keys, and use RaidZ-2 of 9 drives + 1 hotspare (70% data capacity)
(F) Install OS on a mirror of 2 USB keys, and use 2x RaidZ-2 of 5 drives in a single pool. 0 hotspares. (60% data capacity)
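As a sanity check on the capacity percentages above, usable capacity is just (data drives) / (total bays), where each RaidZ-N vdev contributes its drive count minus N, and hot spares and OS disks contribute nothing. A quick sketch (the layout tuples are my own shorthand, not output from any ZFS tool):

```python
# Usable fraction of the 10 bays for each alternative.
# Each vdev is (drives, parity); hot spares and OS disks hold no pool data.
def usable_fraction(vdevs, total_bays=10):
    data_drives = sum(drives - parity for drives, parity in vdevs)
    return data_drives / total_bays

alts = {
    "A": [(5, 2), (5, 2)],  # 2x RaidZ-2 of 5
    "B": [(9, 2)],          # RaidZ-2 of 9 + 1 hot spare
    "C": [(6, 2), (3, 1)],  # RaidZ-2 of 6 + RaidZ-1 of 3 + 1 hot spare
    "D": [(7, 2)],          # RaidZ-2 of 7 + 1 hot spare; 2 bays for OS mirror
    "E": [(9, 2)],          # same pool layout as B, OS on USB keys
    "F": [(5, 2), (5, 2)],  # same pool layout as A, OS on USB keys
}

for name, vdevs in alts.items():
    print(name, f"{usable_fraction(vdevs):.0%}")
```

This reproduces the 60/70/60/50/70/60% figures listed above.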
Arguments for/against
Alternatives A-C would cause OS read/write operations to affect the performance of the data storage pool. However, since this is a home file server, redundancy and the amount of storage are more important than a minor performance loss.
Alternatives D-F might allow for simpler reinstallation of the OS, since the data pool can be considered 'no-touch' until the host OS is up and running, reducing the risk of human error during such an operation.
Alternatives E & F let me use all 10 drive bays for the data pool and keep the OS off of it.
Alternative A seems to be the most redundant option, but it has the worst capacity and no hotspare.
Alternatives B & E allow any two drives to fail at the same time. If resilvering finishes in time, they also tolerate a total of three drives failing in succession w/o receiving attention from the system admin.
Alternative C doesn't feel much better than B, probably because anything stored on the RAIDZ-1 is only two disk crashes away from the void.
Alternative D, however, reduces the data pool's total size by 20 percentage points. The effective storage loss is somewhat less than this, since the base OS is not installed on the data pool.
Alternative F allows any two drives to fail at any time. With a bit of luck, four drives may fail at the same time (two in each vdev) without data loss. There is no automatic resilvering due to the lack of hotspares.
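To put numbers on the 'any two vs. with-luck more' comparison between F (2x RaidZ-2 of 5) and B/E (one RaidZ-2 of 9 + spare), here's a small brute-force sketch counting which random failure combinations each layout survives before any resilver happens (my own illustration, not output from any ZFS tool; bay numbering is arbitrary):

```python
from itertools import combinations

def survives(failed, vdevs):
    """True if no vdev loses more drives than its parity covers."""
    return all(sum(1 for d in failed if d in vdev) <= parity
               for vdev, parity in vdevs)

def survival_rate(vdevs, n_failures, total=10):
    """Fraction of n-drive failure combinations the layout survives."""
    combos = list(combinations(range(total), n_failures))
    ok = sum(survives(set(c), vdevs) for c in combos)
    return ok / len(combos)

# F: two RaidZ-2 vdevs of 5 drives each (bays 0-4 and 5-9)
layout_f = [(set(range(0, 5)), 2), (set(range(5, 10)), 2)]
# B/E: one RaidZ-2 of 9 (bays 0-8); bay 9 is the spare, so its failure is harmless
layout_b = [(set(range(0, 9)), 2)]

for k in (2, 3, 4):
    print(f"{k} failures: F survives {survival_rate(layout_f, k):.0%}, "
          f"B/E survives {survival_rate(layout_b, k):.0%}")
```

Both layouts survive any 2 simultaneous failures; at 3 simultaneous failures (before resilvering) F survives more of the possible combinations than B/E, since B/E's spare only helps after a resilver completes.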
Questions:
I'm leaning towards A for redundancy or B for capacity, but I feel very unsure about which to pick, and even about my 'measurement' of redundancy. I basically need more pros/cons before I can make a decision; any input is welcome.
If a single disk fails in a RaidZ-2, will the pool become unavailable until resilvering is done?
Any thoughts or suggestions?