Mirrored vdevs or RAIDZX

I’ve got four 1 TB 5400 RPM drives and three 256 GB SSDs, all SATA. I can install any 4, maybe 5. I was reading up on ZFS RAID levels and it’s a little confusing. Do I install FreeBSD on one of the SSDs and then create a set of mirrored vdevs out of the four HDDs, or a single mirror out of the other two SSDs, do RAIDZx, use one of the SSDs as cache, or what?


My goal is to get fast remote file access (more read than write): fossil clones, commits, rsyncs, and scps, mostly. I am only worried about the zpool, and I don’t think it’s likely to have two failures at the same time.
 
I can install any 4, maybe 5.
That's awkward. Is the limitation due to power, SATA ports, or disk slots?

Have you considered using small velcro tabs to fasten the SSDs to the case, so they don't take up a disk slot?

My ideal solution would be to make a ZFS root mirror with two of the 256 GB SSDs. You will want one partition set up as a GEOM mirror for swap on these SSDs (the standard install does this). You might create partitions for a special vdev (ZFS mirror) and/or L2ARC (ZFS stripe) on the root SSDs for use by the tank. If your SSDs have power-loss protection, you might also consider creating a SLOG partition (ZFS mirror) for the tank, iff you have synchronous I/O on the tank.
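
A minimal sketch of what that might look like, assuming the root SSDs are ada0/ada1, the tank is named "tank", and the partition sizes are placeholders:

    # Carve spare space out of each root SSD (sizes are guesses).
    gpart add -t freebsd-zfs -s 32G -l special0 ada0
    gpart add -t freebsd-zfs -s 32G -l special1 ada1
    gpart add -t freebsd-zfs -s 64G -l cache0 ada0
    gpart add -t freebsd-zfs -s 64G -l cache1 ada1

    # The special vdev holds pool metadata, so it must be mirrored.
    zpool add tank special mirror gpt/special0 gpt/special1

    # L2ARC contents are disposable; a plain stripe is fine.
    zpool add tank cache gpt/cache0 gpt/cache1

    # Only with power-loss protection and synchronous I/O on the tank:
    # zpool add tank log mirror gpt/slog0 gpt/slog1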

For the tank, you could stripe two mirrors or build a single RAIDZ1 vdev. The mirrors will write a lot faster than RAIDZ, but if most of your load is reads, RAIDZ1 should be fine. The number of disks in the RAIDZ array is not critical, as compression breaks the traditional rule of requiring an odd number of spindles for RAIDZ1 (and an even number for RAIDZ2). You could go with a 3-spindle RAIDZ1 array if the 4th disk can't be accommodated.
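
For illustration, the two layouts might look like this, assuming the HDDs attach as ada2 through ada5:

    # Two striped mirrors ("RAID 10"): ~2 TB usable, fastest writes.
    zpool create tank mirror ada2 ada3 mirror ada4 ada5

    # One RAIDZ1 vdev: ~3 TB usable (before padding), slower writes.
    zpool create tank raidz1 ada2 ada3 ada4 ada5

    # The 3-spindle fallback if the 4th disk can't be accommodated:
    # zpool create tank raidz1 ada2 ada3 ada4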
 
I only have four onboard SATA connections plus one eSATA port. I'm considering adding a PCIe NVMe drive or two...
 
I bought one of these recently. My experience is quite positive so far. It's running one side of all my ZFS mirrors. However, I have enough SATA ports that I can withstand its failure.
 
Nifty - what is Unraid? I also saw non-RAID...
 
ZFS, FreeNAS, and Unraid are all keywords indicating that the controller is flashed in "IT" mode, which presents unfettered SATA ports (as opposed to the various proprietary on-board RAID controller options).
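
If you want to verify this from FreeBSD once the card arrives, a sketch (assuming the card attaches via the mps(4) driver, as the common LSI SAS2008 boards do):

    # Show adapter and firmware details for the mps(4) controller.
    mpsutil show adapter

    # Disks behind an IT-mode card appear as ordinary da(4) devices,
    # not RAID volumes.
    camcontrol devlist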
 
OK, I ordered one up. Is it best to just let ZFS do the RAID thing and use this card to supply me with endless (8) SATA ports, or do I need to do some configuration on the card?
 
Just use mirrors - they are by far the most flexible vdev type and offer very decent performance, predictable space allocation, and redundancy.
RAIDZ loses a lot of space to padding and also has much less throughput and fewer IOPS in a single-vdev configuration. Resilvering can also easily take up to several days with larger spinning disks.
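
Part of that flexibility, as a sketch (device names are placeholders):

    # Temporarily add a third side to a mirror, e.g. before swapping a disk:
    zpool attach tank ada2 ada6

    # ...and drop it again once resilvered:
    zpool detach tank ada6

    # Grow the pool one mirrored pair at a time:
    zpool add tank mirror ada6 ada7

    # OpenZFS can even remove a whole mirror vdev again:
    # zpool remove tank mirror-1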
 
Nifty - what is Unraid?

Unraid is an interesting idea IMO. It supports 1 or 2 parity drives that add redundancy to drives containing ordinary file systems. Provided that the parity drives are at least as big as the largest data drive, you can use any mixture of sizes. I presume it's possible to reduce the entry cost by just adding a parity drive to an existing set of drives.

The downside is that it's not free and runs on a proprietary Linux distribution.
 
Got my SAS controller, went through a ton of troubleshooting, figured out it (Lenovo M92p) wouldn't work with 32 GB RAM and the SAS card installed, took 8 GB out, now it's running with 24 GB - no big whoop (may figure out what's the deal, may not). Went with RAID 10 (a stripe across two mirrors of two drives each) to start.
 
If you are using ZFS, *DON'T* use any RAID underneath it! It will prevent ZFS from properly detecting erroneous data returned from a failing drive. RAID controllers don't give a rat's ass about data integrity - they just give you whatever the first drive returns, and ZFS can't detect which drive holds the false block(s).
It will also prevent ZFS from properly aligning blocks to the drives, which may lead to write amplification and increased padding. You will also lose the ability to move the disks around in or between systems, because of the proprietary metadata and/or on-disk format used by the RAID controller.

In short: hardware RAID only gives you all the disadvantages of that 90s technology and prevents modern file systems like ZFS from working properly.
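
A quick way to sanity-check that the controller is out of the picture (a sketch; device and pool names are assumptions):

    # Each physical disk should show up individually, not as one RAID volume:
    camcontrol devlist

    # Check the sector size ZFS will see on a given disk:
    diskinfo -v da0

    # On OpenZFS 2.x, confirm the pool's block alignment:
    zpool get ashift tank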
 
Got it. Did the RAID using ZFS, not the controller.
 