Solved: RAID-Z1 or Z2?

I decided to improve the data storage at home. I purchased four WD Red drives, 2 TB each.
After reading up on all the opinions on the net, I'm unsure whether I should go for Z1 with three drives and keep one as a spare, or do Z2 instead.
Pro Z1: less power consumption, and the spare drive will not wear.
Pro Z2: more redundancy.
I have backed up all my really important data on archive-grade DVDs anyway, and I keep doing so.
Thoughts?
 
A general rule of thumb regarding the number of disks per vdev for each RAIDZ level (a quick capacity sketch follows the list):

RAIDZ1 vdevs should have 3, 5, or 9 devices in each vdev
RAIDZ2 vdevs should have 4, 6, or 10 devices in each vdev
RAIDZ3 vdevs should have 5, 7, or 11 devices in each vdev
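
To make the rule concrete, here is a minimal sketch in Python (my own illustration, not from any official guideline); note that each of the suggested widths leaves a power-of-two number of data disks, and the 2 TB drive size is taken from your post:

def raw_usable_tb(width, parity, disk_tb=2):
    # Raw usable capacity: parity disks subtracted, ignoring ZFS metadata,
    # padding and slop space.
    return (width - parity) * disk_tb

for parity, widths in [(1, (3, 5, 9)), (2, (4, 6, 10)), (3, (5, 7, 11))]:
    for width in widths:
        print(f"RAIDZ{parity}, {width} disks: {width - parity} data disks, "
              f"~{raw_usable_tb(width, parity)} TB with 2 TB drives")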

In your case, I would certainly go with RAIDZ2.
 
Matt Ahrens has mostly dismissed the idea of using a specific number of disks depending on the number of parity disks:
A misunderstanding of this overhead has caused some people to recommend using “(2^n)+p” disks, where p is the number of parity “disks” (i.e. 2 for RAIDZ-2), and n is an integer. These people would claim that, for example, a 9-wide (2^3+1) RAIDZ1 is better than 8-wide or 10-wide. This is not generally true.
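
The overhead he refers to is the extra parity and padding allocated per block. A rough sketch of how one might estimate it, assuming my understanding of the commonly described RAIDZ allocation rule (one set of parity sectors per stripe row, with the allocation padded up to a multiple of parity + 1 sectors); the 4 KiB sector size and 128 KiB block size are assumptions of mine:

import math

def raidz_alloc_sectors(block_bytes, width, parity, sector_bytes=4096):
    # Data sectors needed for the block itself.
    data = math.ceil(block_bytes / sector_bytes)
    # One set of parity sectors per stripe row of (width - parity) data sectors.
    rows = math.ceil(data / (width - parity))
    total = data + rows * parity
    # Pad to a multiple of (parity + 1) sectors (my understanding of the rule).
    return total + (-total) % (parity + 1)

for width in (8, 9, 10):
    print(f"RAIDZ1, {width} wide: {raidz_alloc_sectors(128 * 1024, width, 1)} "
          f"sectors allocated for a 128 KiB block")

With these assumed numbers a 9-wide and a 10-wide RAIDZ1 allocate the same number of sectors for a 128 KiB block, which is the kind of result that makes the (2^n)+p rule less useful than it sounds.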

The reason the industry has mostly moved to RAID6/Z2 now is that with larger arrays and bigger disks, the chance of an error occurring after a single disk failure, and before the rebuild has finished, is reasonably high. Especially in cases where replacement may not happen automatically the instant a disk fails (such as with ZFS on FreeBSD currently).
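
A back-of-the-envelope illustration of that risk, with assumed numbers (2 TB drives as in this thread, and the 1-in-10^14-bits unrecoverable read error rate typically quoted for consumer drives):

disk_bytes = 2e12                # 2 TB drives
ure_per_bit = 1e-14              # typical consumer-drive spec sheet figure
bits_read = 2 * disk_bytes * 8   # two surviving disks of a 3-wide RAID-Z1
p_clean = (1 - ure_per_bit) ** bits_read
print(f"Chance of at least one read error during the rebuild: {1 - p_clean:.0%}")

With those assumptions it comes out around 25-30%. In practice a resilver only has to read the blocks actually in use, so the real number is lower on a partly full pool, but the trend with bigger disks is clear.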

Personally I wouldn't look at it from a power point of view; the extra disk won't make a massive difference. The question is whether you want to be in a position where your array is 'critical' after a disk failure. With RAID-Z1, in the time between a disk failing and you starting and completing the resilver, any error is permanent, requiring you to remove the affected files (and any snapshots referencing them). Obviously you have a backup (as you should), so it's not the end of the world, just an annoyance if you have errors, or even another failure, during the resilver.

If you were using all the disks in both arrays, it would be a question of whether you want the extra redundancy or the extra space more. As you are only going to use 3 disks for the RAID-Z1, giving you 2x2 TB of usable space in both cases, I think I would just go for RAID-Z2.
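
The arithmetic behind that, spelled out (2 TB drives, parity disks subtracted, ZFS overhead ignored):

disk_tb = 2
z1_usable = (3 - 1) * disk_tb   # 3-wide RAID-Z1, fourth drive kept as a spare
z2_usable = (4 - 2) * disk_tb   # 4-wide RAID-Z2, all four drives in the pool
print(z1_usable, z2_usable)     # 4 TB of raw usable space either way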

The only other concern that could affect you is that with all disks being exactly the same and purchased together, you could argue there's a slightly higher chance that more than one may fail at around the same time.
 
Think of RAID-Z2 as RAID-Z1 with a spare, except that the spare is continuously updated with an extra "parity".
So the only two advantages of RAID-Z1 + spare are: (a) lower power consumption, as the spare is idle, and (b) less wear on the spare disk.

About (a): The power consumption of a disk is higher when it is active, but the difference is not massive, a few watts. Compared to the total power consumption of your system, this is minor. About (b): Home systems don't wear out disks, and it is not even clear whether regular use is good or bad for disks.
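
To put a number on (a), here is a rough estimate with assumed figures (the 6 W active-vs-idle difference and EUR 0.30 per kWh are just my guesses; check your drive's datasheet and your tariff):

extra_watts = 6                       # assumed active-vs-idle difference
hours_per_year = 24 * 365
kwh_per_year = extra_watts * hours_per_year / 1000
print(f"{kwh_per_year:.0f} kWh/year, about {kwh_per_year * 0.30:.0f} EUR at 0.30 EUR/kWh")

So on the order of 50 kWh a year for one drive, which is pocket change next to the rest of the system.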

In exchange for a little more power consumption, and perhaps a little more wear (or not), you get one extra level of redundancy. And that is very important. Given the size of modern disks and the hard error rate, it is quite likely that if a disk fails, a bad sector will be found during the rebuild. On a 1-fault-tolerant RAID, that is THE END. As a matter of fact, a few years ago a big shot from NetApp (I think the CEO or CTO) claimed that selling 1-fault-tolerant RAID today amounts to professional malpractice.

Having said that ... my home server only uses 2 disks, which are mirrored. But: it also has backups that are never older than an hour, and a second set of backups that are offsite.
 
FreeNAS folks typically recommend RAID-Z2, or RAID-Z3 if a higher degree of safety is needed (obviously RAID-Z* is not a backup strategy, and you should have one). According to the same group, RAID-Z1 is considered obsolete, much like people who use traditional RAID typically use RAID 6 and consider RAID 5 obsolete. I read somewhere (I can't remember where) that the read/write performance of RAID-Z1 on FreeBSD is better than that of RAID-Z2. That might no longer be the case, as the FreeBSD version of ZFS is about as good as ZFS gets outside of Oracle Solaris.

I use only RAID-Z2 on production machines, and I do follow the rule:

RAIDZ1 vdevs should have 3, 5, or 9 devices in each vdev
RAIDZ2 vdevs should have 4, 6, or 10 devices in each vdev
RAIDZ3 vdevs should have 5, 7, or 11 devices in each vdev

Another thing to note is that, IIRC, the FreeBSD implementation of ZFS doesn't support hot spares (please no flames, I am not sure about this fact; also, a hot spare is not the same as hot swap, which is supported by FreeBSD depending on the hardware, though some people, myself included, feel safer powering down the machine). There are holy wars between people who argue for and against hot spares. I tend to buy the arguments of the people who are against them.
 
... About (b): Home systems don't wear out disks, and it is not even clear whether regular use is good or bad for disks.

I read up on this a little bit, and that statement may actually be false. On older disk drives, reading or writing does not cause wear on the disk drive (obviously that assumes a benign environment; if writes fail due to vibration or power supply problems, they will cause damage, for example due to torn writes).

With the most modern disk drives, this is no longer true. These drives now reduce the fly height of the head while writing, to the point where it gets dangerously close to the surface. This means that every write now carries an increased risk of the head hitting the "moguls" in the lubricant that covers the surface. That is bad because the head can become coated with lubricant, or in extreme cases it can even bounce and hit the platter after the mogul.

For that reason, the most modern disk drives are now specified for a limited amount of I/O. I've seen second-hand references to specifications like "550 TB per year" (which would correspond to a write duty cycle of roughly 10% or less), and rumor has it that exceeding this specification voids the warranty on the drive.
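
A quick sanity check of that duty-cycle figure, assuming a sustained write rate of about 200 MB/s (my assumption; the spec reference only gives the TB-per-year number):

workload_tb = 550
write_rate_mb_s = 200
seconds_writing = workload_tb * 1e12 / (write_rate_mb_s * 1e6)
duty_cycle = seconds_writing / (365 * 24 * 3600)
print(f"{duty_cycle:.1%} of the year spent writing")   # roughly 9%

which indeed lands just under 10%.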

No, I don't know which exact drive vendors / models / capacities this applies to. A web search might help. And as usual: caveat emptor; research this BEFORE buying a disk drive.
 