ZFS in a RAID70 configuration

Building a speedy ZFS box.
I have been looking around at various ways to configure a group of 24 drives, and I found the Calomel site has a nice chart.
ZFS Raid Speed Capacity
https://calomel.org/zfs_raid_speed_capacity.html
The author provides some great benchmarking numbers that give me a rough idea of what to expect.

So, questions: why does the raid0 x24 disk array lose so much? 24 disks at 204 MB/s each = 4896 MB/s.
What gives? He is showing 1377 MB/s.
Is overhead really that high? I see him mention one disk controller and a SAS expander.
So maybe he is maxing out the interfaces?

I have a 24 bay chassis.
Are his suggestions for raidz3 correct? I am looking at RAID60 or RAID70:
2x striped 12-disk raidz3
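If I have the ZFS side right, the "RAID70" I mean is just one pool with two 12-wide raidz3 vdevs; ZFS stripes across the vdevs on its own. A rough sketch of the create command, where "tank" and d01..d24 are placeholders for the real pool name and /dev/disk/by-id entries:

  zpool create tank \
      raidz3 d01 d02 d03 d04 d05 d06 d07 d08 d09 d10 d11 d12 \
      raidz3 d13 d14 d15 d16 d17 d18 d19 d20 d21 d22 d23 d24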
 
4896 MB/s would need a 40 Gbps card working at theoretical maximum (assuming a single card). Going by the Calomel posts, it seems he uses 10 Gbps cards.
 
I think you're right. The interface is maxing out.
He is using a PCIe 3.0 x16 slot, but the HBA is an Avago Technologies (LSI) SAS2308 9207-8i.
So he is trying to force 24 drives down an 8i card.
I wonder about that SuperMicro expander too.
This is exactly why I want to use 3 separate SAS2 controllers.
This whole chart is useless with a single 8i interface card.
I bet RAID70 is much quicker, as are others.
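Back-of-envelope, if I have the SAS2 math right: each SAS2 lane is 6 Gbps, which is roughly 600 MB/s usable after 8b/10b encoding. So:

  24 drives x 204 MB/s = ~4900 MB/s of raw disk
  8 lanes  x 600 MB/s = ~4800 MB/s best case through an 8i HBA
  4 lanes  x 600 MB/s = ~2400 MB/s if the expander hangs off a single SFF-8087 cable

Either way, 24 spindles can't all stream at full speed through one 8i card plus an expander.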
 
I think these numbers tell it all:
24x 4TB, raidz (raid5), 86.4 TB, w=567MB/s , rw=198MB/s , r=1304MB/s
24x 4TB, striped raid0, 90.4 TB, w=692MB/s , rw=260MB/s , r=1377MB/s

In what world would a software RAID5 array stay so close to a RAID0 array?
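Or rather: the only world where that makes sense is one where the controller path is the limit. Raidz writes to 23 data disks versus 24 for the stripe, so you'd expect roughly 23/24 (~96%) of the raid0 write number if the disks were the bottleneck. 567/692 is only about 82%, and both are miles below the ~4.9 GB/s the platters should manage, which again points at the HBA/expander path (plus parity overhead) rather than the disks.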

So, a theoretical question: can I create a 24-disk ZFS RAID0 array across 3 different LSI disk controllers?
Not that I would... That is why I want a super quick front end device using NVMe.

And by extension, can I create two 12-disk RAIDZ3 vdevs across 3 controller cards?
All flashed to IT Mode with identical firmware.
 
Yes, you can spread vdevs across controllers. Why would you think you couldn't?

In fact, for proper redundancy, you need to spread your vdevs across multiple controllers if you want to be able to use the pool when a controller dies. So long as the OS sees the drive, you can use it for ZFS, regardless of how it's connected to the system.
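If it helps, on Linux you can check which HBA each drive sits behind before laying out the vdevs; something along these lines (device names will differ on your hardware):

  ls -l /dev/disk/by-path/            # each disk shows up under the PCI address of its controller
  lspci | grep -i -e lsi -e sas       # sanity check that all three HBAs are visible

Then build each vdev from disks spread across the HBAs, so a dead controller never takes more disks out of a single vdev than its parity level can absorb.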

Our 24-bay chassis use direct-attach or multi-lane (no expanders) backplanes and 3x 8-port controllers (6 Gbps SATA3). They're bottlenecked by the gigabit NIC. The pool is configured with 6-drive raidz2 vdevs.
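Roughly this shape, with placeholder pool and disk names (cXdY meaning disk Y on controller X), two disks per controller in each vdev so losing a controller only costs each raidz2 vdev two disks:

  zpool create backup \
      raidz2 c0d0 c0d1 c1d0 c1d1 c2d0 c2d1 \
      raidz2 c0d2 c0d3 c1d2 c1d3 c2d2 c2d3 \
      raidz2 c0d4 c0d5 c1d4 c1d5 c2d4 c2d5 \
      raidz2 c0d6 c0d7 c1d6 c1d7 c2d6 c2d7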

Our 90-bay systems use a single external 4-lane SAS connector per backplane (2 connectors per controller, 2 backplanes per chassis, 2 chassis per system). So 2 controllers connecting to 4 backplanes, using expanders. These are 6 Gbps SAS controllers. The pool is configured with 6-drive raidz2 vdevs. The bottleneck is the gigabit NIC, but resilvering shows 3-5 Gbps across the pool.

These are all storage systems for our backups setup. We're more interested in storage space than throughput. So long as our rsyncs complete before 6am each morning, it's fast enough. :)

I screwed up the hardware list for our first EPYC-based storage server. Instead of being able to use the 4 multi-lane connectors on the motherboard (they were slimSATA, not SFF-8087) to connect to a direct-attach multi-lane backplane, I ended up with a SAS3 expander using U.2 (or something along those lines) connectors. Had to get a 12 Gbps SAS controller to connect to it. :( Still plenty fast, but not nearly what it could have been.
 
I went with a used Supermicro X10DRi for a $250 offer.
https://www.ebay.com/itm/273411355477

How should I populate the RAM for ZFS? 16 slots total.
Thinking 8x 8GB. Quad channel, so a minimum of 4 modules. I will only use half the slots for now.
Planning 24x 500GB or 600GB SAS2 drives, RAIDZ3 x2.
 
New Samsung 16GB ECC DDR4 modules were $130 each x 4 = $520.
Ouch, more than the motherboard and CPUs...
 
I am looking at SAS2 drives in 2.5" and I am zeroing in on Savvio 10K.6 or 10K.7 new old stock (2015-ish).
The other side of me is looking at Toshiba SAS drives.

Any horror stories on either?

How about spares? What is the rule of thumb for a 24-bay array? 4-6 drives?
 
RAM prices are insanely high now. Sad fact of life.
Yet the bottom has fallen out on used DDR3 ECC. I bought a boatload and use them in regular machines.
Surprising how many machines will work with ECC modules installed without using the ECC.
The used 16GB ECC modules are cheaper than regular 16GB. My industrial boards don't seem to mind them.
 
Any horror stories on either?
I've never met a Seagate drive model I didn't end up hating, from a quality-control point of view. On the plus side: their performance at the same generation seems to be better than their competitors'. And for consumer-grade drives, their prices are significantly lower. But my time (wasted on having to rebuild RAID volumes, buy new disks, fix problems) is much more valuable than the premium one pays for Hitachi drives.

Haven't used very many Toshiba drives (dozens and dozens, not tens of thousands, and none of them ever at home). So I choose to not have opinions on them.
 
I ended up getting 2 more 512GB XG3 NVMe drives, and I bought 2 Supermicro dual-M.2 NVMe adapters:
http://www.supermicro.com/products/accessories/addon/AOC-SLG3-2M2.cfm
Unfortunately my X10DRi board is not on the validated list.

I am real curious to see how make -j 24 feels.
-j 8 is the most I have used.

Planning on some stupid antics before getting serious: RAID0 the NVMe's for benchmarking before setting them up as a mirror.
Maybe RAID1+0 with 4 modules.
Stage 2 is add 2 more NVMe and maybe another quad bank of ram.
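A sketch of the plan, assuming the XG3s show up as nvme0n1 through nvme3n1 (placeholder names):

  # throwaway stripe, just for benchmarking
  zpool create scratch /dev/nvme0n1 /dev/nvme1n1
  # ...run the benchmarks, then...
  zpool destroy scratch
  # the real layout: a mirror now, striped mirrors (RAID1+0) once all 4 modules are in
  zpool create fast mirror /dev/nvme0n1 /dev/nvme1n1
  zpool add fast mirror /dev/nvme2n1 /dev/nvme3n1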
 
Next up, though: a power supply. I have something (an old Ablecom) to get me going, but I need a real-deal 2U unit.
Debating one big honker (1kW) or maybe a Zippy 600W pair.

The tweaker in me wants to add a separate power supply for the backplane.
I have a 5V/12V power board and may put it where the expander goes.
Might have to add an AT-style switch somewhere for the backplane power supply.
My Chenbro uses 6 power connectors (4-pin Molex) for just the backplane. That seems worthy of a separate power source.
Is this a bad idea?

Then with two 52-watt CPUs I could probably use a 300-400W Zippy-Emacs redundant pair.
I guess I really need 2 power supplies for the 2 separate 12-bay backplanes to keep it redundant...
 
Surprising how many machines will work with ECC modules installed without using the ECC.
I have yet to see one that doesn't. And when one doesn't, it's probably something else, not the ECC itself, that's the reason.
I've used my handful of modules from an s1366 Xeon system in a half dozen non-ECC machines (AM1, AM3, s1150). Sold some second-hand and had no complaints either.
 
I am looking at SAS2 drives in 2.5" and I am zeroing in on Savvio 10K.6 or 10K.7 new old stock (2015-ish).
The other side of me is looking at Toshiba SAS drives.

Any horror stories on either?

How about spares? What is the rule of thumb for a 24-bay array? 4-6 drives?

I haven't used SAS drives yet, but we've had very good experiences using Toshiba SATA drives (2 TB and 4 TB, in 512n, 512e, and 4Kn sector variants). They've lasted longer than similar drives from Western Digital and Seagate, and the return process for the odd one that died was vastly superior.

As for spares, it all depends on how quickly you can be notified and get to the server to manually swap disks. If your turnaround time for that is an hour or three, then you don't really need any online spares. If it takes you a day to drive to the location, then you'll want to have spares running in the system. :) All of our ZFS systems are in the same town, within a 10-minute commute, so we don't have spares in the systems. 24-bay chassis running with 4x 6-drive raidz2. 45-bay chassis running with 7x 6-drive raidz2 plus 3 extra drives (when we add a second 45-bay chassis to it, we get an extra 6-drive raidz2 vdev).

With raidz3, your "allowable time running degraded" increases a bit, compared to raidz2. It all depends on how long you're comfortable with running a degraded pool.
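If you do end up wanting a warm spare or two, it's a one-liner to attach them to the pool (pool and device names here are placeholders):

  zpool add tank spare /dev/disk/by-id/<spare-disk-1> /dev/disk/by-id/<spare-disk-2>

Whether a spare kicks in automatically depends on the ZFS event daemon (zed on Linux) being set up to do the swap; otherwise a manual zpool replace when a drive faults does the same job.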
 