ZFS Suggestions for 6 drive NVMe array

OK, I bought six brand new Samsung PM983a 1.88TB M.2 NVMe drives with identical firmware.
I am running them in three Dell dual-M.2 cards with fans.

My Hardware:
X10SRH with 64GB ECC, E5-2683 v4

My Question:
I have a SATA DOM set up for testing this new hardware. How would I best use a SATA DOM with a ZFS array?
I am thinking of just putting the EFI partition on the SATA DOM and having the NVMe array host the root filesystem.
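Roughly what I have in mind on the SATA DOM side (just a sketch; I'm assuming the DOM shows up as /dev/sda, so adjust the device name):

    sgdisk --zap-all /dev/sda                           # wipe the DOM, fresh GPT
    sgdisk -n 1:0:+512M -t 1:EF00 -c 1:"EFI" /dev/sda   # single 512M EFI System Partition
    mkfs.vfat -F32 /dev/sda1                            # FAT32 for the ESP

The bootloader would then live on the DOM and root comes off the NVMe pool.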

Second Question:
What would be ideal for speed? Three mirrored pairs, or two three-drive RAIDZ vdevs?

Thanks
 
I found that the SuperMicro X10SRH only offers bifurcation on two slots, and I need three.
So I moved laterally to the X10SRi, which offers three bifurcation-capable slots. That will let me run all six NVMe drives in the Dell cards.

I think I am going to start with three two-drive mirrors in my pool.
That should give me ~1.88TB x 3 of usable storage, and I can survive up to three drive failures (at most one per mirror). It should also be the fastest layout.
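A rough sketch of that layout (the pool name and device names are just placeholders; on the real pool I'd use the /dev/disk/by-id paths):

    zpool create tank \
        mirror /dev/nvme0n1 /dev/nvme1n1 \
        mirror /dev/nvme2n1 /dev/nvme3n1 \
        mirror /dev/nvme4n1 /dev/nvme5n1

Reads can spread across all six drives and writes stripe across the three mirrors, which is why this should be the fastest option.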

Drives are in 4K mode already.

ashift=13????
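My understanding is that ashift is log2 of the sector size, so a 4K format lines up with ashift=12 and ashift=13 would be 8K. This is how I've been double-checking what the namespaces are actually formatted at (needs nvme-cli; the device name is just an example):

    nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"   # the entry marked "(in use)" is the active format

If 4K really is in use, I figure I pass -o ashift=12 to zpool create and call it a day.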

Weird side question:
Any value in using two SATA DOMs for EFI? Total overkill???
They will only hold the EFI partition, and you can mount that read-only.
SuperMicro has two orange SATA sockets for some reason (DOM power on pin 7).
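If I do go that route, the read-only part is just a mount option (a sketch with a placeholder UUID; /boot/efi is wherever the ESP gets mounted):

    # /etc/fstab -- ESP on the SATA DOM, mounted read-only
    UUID=XXXX-XXXX  /boot/efi  vfat  ro,umask=0077  0  2

    # remount read-write only when the bootloader needs updating
    mount -o remount,rw /boot/efi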
 
I am marking this thread solved. After watching Allan Jude's talk, I realize an all-SSD array is not a great idea for me; ZFS was built around spinning disks.
I am a newbie to ZFS and I need to KISS.
Feel free to comment.

I am going to pare it back to a single ZFS mirror pair, with the SATA DOM for EFI.
 
Thanks for the ZFS+NVMe talk link. The thing I am realizing is that maybe we need an NVMe-native filesystem built from scratch, as ZFS has become sort of a kitchen sink. Maybe even the whole storage subsystem needs to be revamped. For example, why even prefetch data if the latency is so low?
 
as ZFS has become sort of a kitchen sink.
Like you say, you would need to start from scratch, almost. Talk to the FTL directly instead of going through so many translation layers.

It was interesting to look at the IOPS numbers.

Allan is a great presenter.
 
I wanted to mention that the Dell dual-M.2 NVMe card with the fan quoted above is really nice.

I have several of the SuperMicro AOC-SLG3-2M2 dual-NVMe cards. No fan.

The difference I was seeing was about 30 degrees C cooler at idle with the fan.

I need to put heatsinks on the NVMe controllers on the fanless card and compare. I was seeing 80C+ under load.
With the fan, the max was 60C, as reported by smartmontools on the PM983a drives.
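For reference, I am just pulling the temps with smartmontools (the device name is an example):

    smartctl -a /dev/nvme0 | grep -i temperature   # composite temp plus the individual NVMe sensors

That is where the 60C and 80C numbers above come from.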

I plan on switching out the SuperMicro cards for the Dell ones; it was that much of a difference.
I imagine thermal throttling was reducing throughput as well.
 
I imagine thermal throttling was reducing throughput as well.
I have been on my own journey there, and concluded that NVMe thermal control is still in the era of the "wild west" -- no sheriff plus lots of snake oil and outlaws.

Edit: The be quiet! MC1 Pro M.2 SSD Cooler was the best low profile passive M.2 2280 heat sink I could find. If you have the headroom, then the SABRENT M.2 2280 SSD Rocket Heatsink is hard to beat. I didn't investigate the active (integrated fan) heatsinks -- mainly because I don't trust the fans in a $20 gadget...
 
Funny I just got off a buying spree.

I wanted an x16 card for four NVMe drives. I already have a cheap one (fanless) coming, but it only takes four 2280 modules.
I need 22110 for the enterprise drives.

Then I found out about the heat issue and found a really cheap one that I will build an air duct for.

Back when I started on M.2 NVMe I had bought heatsink slabs. I backed off when I moved to enterprise drives.

On this attempt I tried to add a small heatsink from my kit to the NVMe controller chip, but the wind tunnel is too shallow and it hits.

I almost bought an ASUS Hyper M.2 quad card.
I figured I would back off; I have few true x16 slots but lots of x8.
 
mainly because I don't trust the fans in a $20 gadget.
I understand that, but I still bought the $30 quad one. I am interested in the small side fans it uses.

My Chelsio cards need some local cooling in a small build; a side exhaust fan might be better.
The 50mm x 15mm 4-pin fan I am using now eats a PCIe slot's worth of space, like a video card does.
 
I don't understand how an NVMe M.2 heatsink is supposed to work anyway.

Every chip is a different height, so I tried a thermal pad kit with 0.5, 1, 1.5, and 2mm thicknesses.
Feels kinda dumb. Do the PCAP chips get covered too?

I like the wind tunnel approach.

The HP Z Turbo Drive Quad is interesting. It uses forced air plus a heatsink.

The ASUS Hyper M.2 is a lot cheaper.
 
I did throw my line out on a "parts" auction.

I am willing to bet this seller has no idea how bifurcation works. The listing says:
"Only slots 3 and 4 will mount drives."

I am betting $30 max with shipping that this is user error. If nobody else bids, it's only $20.
 
I have a stubby VROC riser incoming. Unfortunately, my X11SCL-IF may not support bifurcation.
I have a Mini-ITX NAS chassis I am rehabbing, so there's a single PCIe slot.
This would be ideal for maximizing my x16 lanes, with a low-profile Chelsio card stacked on top.
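Once the riser is here, the quick sanity check is just whether all of the NVMe controllers enumerate behind it, which tells me whether the slot is really bifurcating (nothing board-specific; the second command needs nvme-cli):

    lspci -nn | grep -i "non-volatile memory controller"   # one line per NVMe controller the kernel sees
    nvme list                                               # the drives/namespaces that actually came up

If only one controller shows up behind the riser, the slot is not bifurcating.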


VROC seems to be marketing speak for BIOS-level NVMe RAID features. I always hated gRAID/softRAID setups like that...
The adapter above was labeled VROC where I got it from. I dunno.

Maybe my SuperMicro C246 BIOS will have VROC and I can get 10GB/s... though I doubt SM puts that in a Cxxx BIOS.
 
Thanks Vlad. Interesting read.

I was expecting a pure BIOS solution, but it appears that VROC is a CPU-supported feature.

I don't own any of those CPUs yet... "Limited on Xeon E" is my only supported hope.
The cheapest E-21xx CPU is too expensive now, roughly $300 and up.
The whole Xeon E-21xx and E-22xx class has terrible TDP; 71W is the lowest on the chart.

Well maybe next upgrade cycle I will try VROC....
 