SSDs in ZFS mirror, and a RAID-Z2 using Seagate ST4000DM000 drives?

I will be building a FreeBSD server for home, and would like some advice on drive configurations. Reliability is my main concern.

My thinking is to have 2 pools, one for frequently accessed storage (OS, email, logs, etc) and one for infrequently accessed storage (videos, photos, manuals, bank records, etc). I propose this because my frequently accessed storage needs are small, while my infrequently accessed storage needs are large.

For the OS and frequently accessed files I was planning on a ZFS mirror of two 120GB SSDs (model not chosen yet, so any advice here is appreciated). I chose mirrored SSDs because they should be low power, reliable, and fast to resilver if one dies (a short resilver window means less chance of losing the remaining SSD before the rebuild completes). I suppose I could use HDDs here instead, but I am worried about resilver times and the chance of losing the remaining HDD during resilvering (my old HDDs have been very reliable running 24/7 for 8 years, but my newer HDDs have had failures within a year or two, so I am a bit leery of current HDD reliability).

For the infrequently accessed storage I was planning on using an 8 disk ZFS RAID-Z2 pool using 4TB Seagate ST4000DM000 hard drives (I will start with 5 disks and add disks as required later).

I am hoping this setup will maximize the longevity of the HDDs and keep power consumption quite low, because keeping the frequently accessed files on SSDs should mean I do not have to keep the 8 HDDs spinning all day when the majority of the data on them will likely only be accessed a few times a day (so they should be able to idle or spin down much of the time).

The 4TB Seagate ST4000DM000 hard drives do not have the aggressive head-parking firmware that some "green" drives do, and they also have a short error-recovery timeout when trying to recover from a read or write error, so they should be fine for a ZFS RAID-Z2 as far as I can tell.
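
If you want to double-check the error-recovery timeout on your actual drives, smartmontools (sysutils/smartmontools in ports) can query it and, firmware permitting, set it. A quick sketch (the device name is just an example):

    # Show the current SCT Error Recovery Control setting
    smartctl -l scterc /dev/ada2

    # Ask for a 7 second read/write recovery timeout
    # (values are in tenths of a second)
    smartctl -l scterc,70,70 /dev/ada2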

The only reason I did not simply use the single 8 disk ZFS RAID-Z2 pool for all the storage was power consumption and heat (having 8 HDDs spinning and seeking every time anyone accesses anything on the server, no matter how small, sounds like a waste). If anyone sees any errors in this thinking, please let me know.
 
I don't really see any problem with using an SSD mirror for the main OS. Sun boxes were always configured (and Oracle servers probably are) with a simple root mirror for various reasons. It may also be an idea to partition 60GB (or possibly even less) on the SSDs for your root and add the leftover space as L2ARC on the storage pool.
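
A rough sketch of that layout, assuming the SSDs show up as ada0/ada1 and the storage pool is called tank (all names here are placeholders, and boot partitions/bootcode are omitted for brevity):

    # ~60GB partition for the root pool, the rest for L2ARC, on each SSD
    gpart create -s gpt ada0
    gpart add -t freebsd-zfs -s 60G -l ssd0root ada0
    gpart add -t freebsd-zfs -l ssd0cache ada0
    # repeat for ada1 with ssd1root/ssd1cache labels

    # mirrored root pool on the 60GB partitions
    zpool create zroot mirror gpt/ssd0root gpt/ssd1root

    # leftover SSD space as L2ARC for the storage pool
    zpool add tank cache gpt/ssd0cache gpt/ssd1cache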

My only note would be on this:

(I will start with 5 disks and add disks as required later).

If you create a 5 disk RAIDZ2, you will not be able to add more disks to it without destroying the pool and recreating it. The only way to expand a zpool is by adding more vdevs (i.e. another RAIDZ2 group).
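
To make that concrete, here is roughly what expansion looks like (placeholder device names):

    # This works: add a whole second RAIDZ2 vdev to the pool
    zpool add tank raidz2 da5 da6 da7 da8 da9 da10

    # This does NOT grow the existing RAIDZ2; it would add da5 as a
    # separate, non-redundant vdev (zpool will warn and require -f):
    # zpool add tank da5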

Some people may point out that an 8 disk RAIDZ2 is not quite optimal but I doubt performance is your primary concern for the storage pool.
 
Thank you for pointing out my error. I thought I had read that you can expand a pool easily by adding HDDs, but now that I dig deeper I see it can only be done by adding vdevs, not individual HDDs. I guess I will be buying all 8 drives at the outset, as adding another RAIDZ2 vdev would waste the cost of another 2 HDDs for its redundancy. Thank you for the help.

I had read that RAIDZ2 should use 2^n + 2 drives (4, 6, 10 HDDs, etc.) for good performance, but the FreeBSD Handbook also said to avoid a pool of 10 disks or more, and since I did not need great performance, I chose 8 HDDs. I could not find any information on the performance differences between 6 vs 8 vs 10 HDDs, so I chose 8 expecting it to be the best compromise between performance and cost. If you know that 8 HDDs will bring the server to a crawl, or if you know of any other reason 8 HDDs will be a problem, please let me know.
 
I would not worry about the longevity of HDDs spinning all the time. In fact, in a server, I would rather let my drives spin all the time than have their moving parts and temperature fluctuate, even if that only happens infrequently. Power consumption can be a good argument though; personally I do not have much experience with how FreeBSD/ZFS treats drives that are not being used actively by the user... if you are unlucky, they could be spun up/down often by background processes anyway... Nice thing to test :)
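
If you do want to experiment with spin-down, camcontrol can do it per drive. A sketch (device name and timeout are just examples, and I am not certain every drive's firmware honours the timer):

    # send ada2 to standby immediately
    camcontrol standby ada2

    # or set a standby timer of 30 minutes (in seconds)
    camcontrol standby ada2 -t 1800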

A RAIDZ2 with 8 drives is still fine; however, with 4 TB drives I would consider that the maximum safe size, because larger drives take longer to rebuild, which widens the risk window for a possible 3-drive failure.
Performance-wise it should also be OK. However, RAIDZ2 in general is not known for its great random access performance, so I would definitely use at least part of the SSDs as L2ARC cache devices. How much L2ARC you need depends on your working set, i.e. the amount of randomly accessed data you actually use frequently. That probably takes some experimentation with cache hit/miss metrics.
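
For the hit/miss experimentation, the raw counters are exposed via sysctl, so something like this gives a quick read (sysutils/zfs-stats from ports presents the same numbers more readably):

    # ARC hits vs. misses
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

    # L2ARC hits vs. misses (only meaningful once cache devices are added)
    sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses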

Also, make sure you create the pool with ashift=12 on those 4K-sector drives. Because of the 512-byte sector emulation those drives do for Windows XP compatibility, zpool tends to create ashift=9 pools, and that will kill performance. Use the gnop trick if needed.
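
For reference, the gnop trick goes roughly like this (placeholder device names):

    # create a fake 4K-sector provider on top of the first disk
    gnop create -S 4096 da0

    # create the pool against the .nop device so ZFS picks ashift=12
    zpool create tank raidz2 da0.nop da1 da2 da3 da4 da5 da6 da7

    # ashift is fixed at creation, so the gnop layer can go away
    zpool export tank
    gnop destroy da0.nop
    zpool import tank

    # verify
    zdb -C tank | grep ashift    # should report ashift: 12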


Personally, I tend to have everything on one storage pool for easier administration, and use SSDs only for hosting the ZFS bootcode and any L2ARC and/or ZIL functionality (if you intend to use SSDs for ZIL, you want SLC drives so you don't wear them out too fast).
 
For SSDs, I would go with the Plextor M5P or Samsung 840 Pro. Both use low-power toggle-mode memory and non-SandForce controllers. Not sure what ZFS performance on an SSD mirror will be like. Post benchmarks if you can.
 
QuietCanuck said:
I could not find any information on the performance differences between 6 vs 8 vs 10 HDDs, so I chose 8 expecting it to be the best compromise between performance and cost. If you know that eight HDDs will bring the server to a crawl, or if you know of any other reason eight HDDs will be a problem please let me know.

It's mostly to do with random I/O: ZFS turns every write to a VDEV into a full-stripe write (and, I believe, full-stripe reads as well), which means every write to a VDEV is striped across all disks in that VDEV.

ZFS will then stripe (RAID0 style, the fault tolerance is handled at the VDEV level) across multiple VDEVs in a ZPOOL.

There are two side effects of this:
  • because ZFS is also copy-on-write, a stripe never needs to be read/modified/written
  • because every drive must be written to for every write, the maximum number of random IOPs a VDEV can do is limited to the speed of a single disk.

So... with eight drives in a RAIDZ2 you are looking at a maximum of say 100 random I/Os per second (or however many a single disk of yours can do).

To get more I/Os (number of transactions, NOT the same thing as raw streaming throughput) you need more VDEVs.

So in your case you'd perhaps be better off, performance-wise, going for a ZPOOL with 2x 4-drive RAIDZ2 VDEVs (for 200 IOPs, based on the same drives as the above example). It will of course mean higher overhead in terms of disk capacity sacrificed. Another alternative would be 3x 3-drive RAIDZ1 VDEVs, with less fault tolerance.
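
To illustrate the difference (same placeholder disks in each case):

    # one 8-disk RAIDZ2 vdev: ~1 disk of random IOPs, 6 disks of capacity
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

    # two 4-disk RAIDZ2 vdevs: ~2 disks of random IOPs, 4 disks of capacity
    zpool create tank raidz2 da0 da1 da2 da3 raidz2 da4 da5 da6 da7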

If you're using the pool for bulk, non-performance-critical or streaming storage, a single RAIDZ2 VDEV is probably fine. But if you're doing something that needs a large number of small transactions (e.g., a database), then a single RAIDZ2 will suck performance-wise.

If you are concerned with being able to easily expand the pool by adding disks (rather than replacing every drive in a VDEV), using mirror VDEVs may be better.

Yes, you throw away 50% of the capacity, but you can add disks in pairs, as you only need two drives to make a mirror VDEV. You'll also get better performance, in terms of IOPs.
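
In practice that looks like (placeholder names again):

    # start with one mirror pair...
    zpool create tank mirror da0 da1

    # ...then grow the pool two disks at a time whenever you need space
    zpool add tank mirror da2 da3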
 