Dual or Single CPU

I am usually the answer guy but I have a pile of parts growing for a ZFS server and I need some guidance.
NUMA or a single Xeon CPU? I have a Chenbro RM23524, which is a 28" deep behemoth.
I am outfitting it with some NVMe and three LSI SAS2 controllers handling 8 disks each, for 24 disks total: 600GB Seagate Enterprise SAS2 drives.

So LGA2011-v3 motherboards.
I am looking at the SuperMicro X10SRi (single-socket ATX board) or a dual-socket ASRock Rack C612 EATX board for $80 more.
$250 for a single
$330 for a dualie
Needed: many PCIe slots with maximum flexibility.
I have been a long-time Supermicro client.
Recently I didn't hesitate to buy a Gigabyte server board, and I really want to give ASRock Rack a try.

Advantages of a Dual CPU Board:
80 lanes of PCIe compared to 40 lanes with a single CPU.
Usually also includes 10G or SAS onboard
Insane amount of RAM possible

Disadvantages of a Dual CPU Board:
NUMA is not that efficient.
(CPU benchmarks back this up: rigs that score 10,000 with a single CPU only reach about 14,000 in dual-CPU configurations.)
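To put a number on that, here is the same comparison as quick Python arithmetic (the 10,000 and 14,000 are the scores quoted above; everything else is just division):

    # Scaling implied by the benchmark scores quoted above.
    single = 10_000   # score with one CPU populated
    dual = 14_000     # score with both sockets populated
    scaling = dual / single      # 1.4x speedup for 2x the silicon
    per_socket = scaling / 2     # ~0.70, i.e. each socket effectively runs at ~70%
    print(f"scaling: {scaling:.2f}x, per-socket efficiency: {per_socket:.0%}")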

Advantages of a Single CPU Board:
Small form factor possible.
With a 20+ core CPU, who needs NUMA?
Smaller power supply, less noise.

Disadvantages of a Single CPU Board:
Slots are not full width; many have only half the physical lanes wired. Only 40 lanes total.
Fewer memory slots.
Many bad comments on Newegg for single-socket 2011-v3 boards.
A lot of bad DIMM slot complaints. Too many to be user error.
I am seeing this on many of the X10Sxx single-socket boards.

Any Comments or Opinions?
I just bought two E5-2608L v3 CPUs for cheap. If I only use one it would not break my heart.
I realize dual CPU for ZFS is stupid. I will probably do virtualization on it for other tasks too.
 
Not answering your question, but if I were buying a powerful server now, I would certainly go with Cavium ThunderX(2).

Yeah, some stuff may not be working properly yet (but we can always file bug reports[1]), though from what I heard around IRC, server software should mostly be working already. Also, packet.net[2] has dual-socket ThunderX hardware they rent for US$0.50 per hour, good for tests. And ThunderX (version 1) seems to be free of the Spectre disease.

THIS is a store that sells ThunderX stuff; it is a UK one, but I think you can get some idea. THIS is a US store.

EDIT: Gigabyte already has some ThunderX SERVERS.

[1] Netflix is adopting ThunderX, so the bugs should be fixed ASAP; Juniper also uses Cavium OCTEON (ARMv8 too).
[2] FreeBSD support does not appear listed there, but people on IRC told me some time ago that the support is superb.

 
I am pot-committed to LGA2011-v3 for now. So Supermicro, Gigabyte or ASRock Rack are the prevailing boards.
Plans for an E5 v4 upgrade in 2 years or sooner.

What about used? There are some great deals.
Never heard of Datto, but these must have come out of storage servers.
SAS3 x8 included.
https://www.ebay.com/itm/162942597065
 
I would go with a Supermicro X10DAC (or similar) - you may eventually find a used one. We never know what kind of quality the BIOS/UEFI has on those Gigabyte and ASRock mobos. Supermicro is usually good.
 
I have been using SuperMicro since before the P5-DBE dual, including with slockets. So I appreciate the sentiment.

I can honestly say Gigabyte has never let me down. They switched to solid caps very early and they know how to spec parts.
My complaint with SM goes beyond Newegg. I can show you examples on eBay, too, of dead memory sockets.
To me this is a sign of bad contract manufacturing. So no matter how well SM designs a board, they have to make sure the manufacturing lives up to it.

ASRock has left me with junk before, but those were $40 boards. I do have a sour taste there.
This actually looks like an ASRock Rack board, or the same OEM:
https://www.ebay.com/itm/273230822357
Only 2 PCIe slots, but it does have 2x SAS2 and 10G onboard.
 
Maybe it is because SuperMicro is the big guy that we see bad memory sockets.
More people using them doing stupid stuff like live memory upgrades!
Gee, I wonder why that slot doesn't work anymore...
 
These are some NVMe drives I considered: Samsung SM953. They are OEM modules, not retail.
https://www.ebay.com/itm/272899246009

These have really bad scores: 1000 MB/s read / 1000 MB/s write.
Long MTBF though.

I like my Toshiba XG3 NVMe so much I will probably buy 3 more of the same. I still see some new for $120
 
The SuperMicro dualie is only about $50 more than the ASRock Rack board, and I see no complaints on the duals.
https://www.ebay.com/itm/191961005077
So maybe it does come down to the class of users that single-CPU boards draw?
Maybe a different contract manufacturer or different tolerances. Maybe the QA guy was actually on the job.
 
I think SM has more than one quality tier, like there are "cheap" ones and "expensive" ones. It is just not really clear how to tell them apart.
 
Yes, I think this seems more prevalent on single-socket boards.
A mix of cheaper boards and user error.

So my choices are a new SuperMicro dual @ $382 or a used Gigabyte dual @ $250.
 
Maybe you are wondering: why buy the Toshiba XG3 NVMe instead of the newer XG4 or XG5?
It's all in the numbers. The XG3 was faster; the newer 3D NAND was slower.
So 3 drives will cost me $400. I paid half that for one.
https://www.ebay.com/itm/302767059718
I wonder how bad it is to mix one NVMe drive with perhaps 1000 hours on SMART with three new zero-hour drives.

This is what I plan on doing:
2x bifurcated LP cards with 2x NVMe each, the 3x LSI controllers on hand for the front sleds, and the 10GbE cards on hand.
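For what it's worth, here is a rough PCIe lane budget for that card list, sketched in Python. The per-card lane widths are my assumptions (x8 HBAs, x8 bifurcation cards carrying two x4 NVMe drives, x8 10GbE NIC); most of these will negotiate down to x4 in a half-wired slot, which is exactly the compromise the single-socket boards force.

    # Assumed lane widths per card; the counts come from the plan above.
    cards = {
        "LSI SAS2 HBA":          (3, 8),  # 3 controllers, x8 each
        "NVMe bifurcation card": (2, 8),  # carries 2x M.2 at x4 each
        "10GbE NIC":             (1, 8),
    }
    needed = sum(count * lanes for count, lanes in cards.values())
    print(f"lanes needed at full width: {needed}")  # 48
    for total, label in [(40, "single socket"), (80, "dual socket")]:
        verdict = "fits" if needed <= total else "some cards drop to x4"
        print(f"{label} ({total} lanes): {verdict}")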

I have some SAS2 expander cards, but I want to go native instead. I just don't see how a SAS expander could tie 24 drives to an 8-port card. The funnel only has so big a hole, right?
Six connectors from the backplane down to the expander, with 2 outgoing SFF-8087 ports to the controller.
I have Areca, Chenbro and Intel. I was on a spree. Thought it was cool.
Now I am thinking that if I upgrade to SSDs I will overwhelm the expander. With my 500GB Seagate SAS drives it would be OK.
Ideally I would have bought 2 SAS expanders of the same brand, and that setup I might have considered OK with 2 controllers.
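For the funnel question, the math is just lanes times per-lane bandwidth. A quick sketch, assuming ~600 MB/s usable per 6Gb/s SAS2 lane, ~200 MB/s per spinning SAS disk and ~500 MB/s per SSD (all assumed round numbers, not measurements):

    # Expander uplink to the HBA: 2x SFF-8087 = 8 SAS2 lanes.
    uplink_lanes = 2 * 4
    per_lane = 600                      # assumed usable MB/s per SAS2 lane
    uplink = uplink_lanes * per_lane    # 4800 MB/s total to the controller
    drives = 24
    hdd, ssd = 200, 500                 # assumed per-drive sequential MB/s
    print(f"uplink capacity:   {uplink} MB/s")
    print(f"24 spinning disks: {drives * hdd} MB/s")  # 4800 - right at the ceiling
    print(f"24 SSDs:           {drives * ssd} MB/s")  # 12000 - far past the uplink

So the funnel is about a wash for the spinning drives but would clearly be the choke point with SSDs, which is why going native with 3 controllers (one lane per drive) looks better here.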
I am going for redundancy. The backplane is divided in half: 2x 12-bay modules, so I have 3 SFF-8087 connectors on each backplane.
Somewhat of a bastard number, as all the SAS2 4-port boards are full height (like the 9280-16i) and I need low profile.
So I will use 3 controllers with 2 SFF-8087 connectors each. I don't know about the ZFS pool layout. 3 separate pools for redundancy?
Three connectors on each backplane is a messed-up number; all controllers use 2 or 4.
I do see an LSI/Broadcom 16i card that is SAS3 and low profile.
700 bucks. Ouch. Plus I need special cables.
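On the pool question: rather than 3 separate pools, one possible layout (just a suggestion, not the only way) is a single pool of four 6-disk raidz2 vdevs, arranged so no vdev has more than 2 disks behind any one controller; a dead HBA then degrades every vdev but takes nothing offline. A hypothetical sketch in Python - the c<hba>d<slot> names are made up, not real device paths:

    # 3 HBAs (c0..c2) with 8 drives each; group them into 4 raidz2 vdevs of
    # 6 drives, taking 2 drives from each HBA per vdev.
    vdevs = []
    for v in range(4):
        vdevs.append([f"c{hba}d{v * 2 + i}" for hba in range(3) for i in range(2)])

    for n, vdev in enumerate(vdevs):
        worst = max(sum(d.startswith(f"c{h}") for d in vdev) for h in range(3))
        print(f"raidz2-{n}: {' '.join(vdev)}  (max drives behind one HBA: {worst})")
    # raidz2 tolerates 2 failures per vdev, so losing a whole HBA (2 drives
    # per vdev) leaves the pool degraded but online.

With 600GB drives that works out to 4 vdevs x 4 data disks x 600GB, roughly 9.6TB before ZFS overhead, and a single pool gives you one big block of free space instead of three small ones.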
 
Another good argument for filling both CPU sockets is memory bandwidth.

However, with just 24 disks, I don't know how much memory bandwidth you actually need, or what the bottleneck will be. The disk hardware itself will be able to run (absolute peak) at about 4.8 GByte/s, and you can easily get enough SAS and PCIe bandwidth to put that onto the bus. If you are running ZFS, you then need enough CPU power to calculate or verify checksums and (when writing) calculate the encoding (parity etc.) for your RAID codes; at just 4.8 GByte/s, that should be trivial for the CPU.

If the machine will be a file or block server, you also need network bandwidth to get this data off the box. Forget about 1gig or 10gig Ethernet; at these speeds you will need 100gig Ethernet or InfiniBand. Personally, I would always use InfiniBand, but I understand that getting IB cards and switches into the whole cluster can be expensive.

At this scale, memory bandwidth is not an issue at all; the fastest I've ever seen a single dual-socket Intel box serve data (from disk, through CPU with checksum/RAID calculation, to the network) was 18 GByte/second, so your system should run easily on a single socket.
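Putting rough numbers on that chain (per-disk throughput and link rates are assumed round figures; the ~4.8 GByte/s matches the estimate above):

    # Disks -> CPU (checksum/parity) -> network, back-of-the-envelope.
    disks = 24
    per_disk = 200                           # assumed sequential MB/s per SAS HDD
    disk_peak = disks * per_disk / 1000      # ~4.8 GB/s aggregate
    links = {"1GbE": 0.125, "10GbE": 1.25, "100GbE / EDR IB": 12.5}  # raw GB/s
    print(f"aggregate disk peak: ~{disk_peak:.1f} GB/s")
    for name, rate in links.items():
        verdict = "enough headroom" if rate >= disk_peak else "bottleneck"
        print(f"{name}: {rate} GB/s -> {verdict}")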

On the other hand, you very likely don't have enough client machines to actually generate or consume all that data. You might even run applications on the server, and at that point, you might get memory bandwidth constrained again.
 