Solved: Choose a CPU for ZFS as a SAN

Hi All,
I want to choose a CPU for ZFS as a SAN. Can you help me with an Intel CPU?
1. Should I choose v3 or v4?

2. Between a v3 with a higher clock and a v4 with a lower clock, which would you choose?

Many thanks for your answers.

Best regards,
 
Well, I am going to make a wild guess here and assume you are talking about LGA2011 Xeon V3 or V4 CPUs.
These both fit into the same socket, but most motherboards only shipped with a BIOS for V3 CPUs.
So you may need to install a V3 CPU first to update the BIOS before you can install your V4 CPUs.

To answer your question: the V4 is newer and better, right? Haswell is V3 and Broadwell is V4; Broadwell is the 'tick' of Intel's tick-tock strategy, essentially a die-shrink refresh of Haswell (the 'tock').
Version 4 usually offers more cores than comparable Version 3 models; the 2650L V3 has 12 cores, for example, while the V4 has 14.

Same situation as with LGA2011 V1 and V2: Sandy Bridge (V1) was the 'tock' and Ivy Bridge (V2) was the 'tick'.
 
Thanks.
I want to know which is better for use with ZFS as a SAN: a v3 with a higher clock, or a v4 with a lower clock?
 
Have you checked the benchmarks at PassMark? You might find there is not much difference. That is a purely hardware view.
But also realize thermal envelopes are involved. You can get a 2650 or a 2650L, and the 2650 is faster but has a much higher TDP.
So does power efficiency play into your question at all? If you simply want a hotrod, then the fastest CPU would be best.
But I would rather leave it to the ZFS experts to chime in; I was simply offering some LGA2011 experience.
My personal philosophy is to use the best low-power processor, especially for 24/7 operation.
PassMark is your friend. Here is a Version 3 versus Version 4 comparison; you can see the difference is very small.
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5-2650L+v4+@+1.70GHz&id=3054
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5-2650L+v3+@+1.80GHz&id=2588
I like to divide the score by the TDP to get an idea of efficiency; there is a quick sketch of that calculation below.
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5-2650+v4+@+2.20GHz&id=2797
Now look at the regular 2650 V4: slightly faster, but look at the TDP.
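A minimal sketch of that score-per-watt calculation, in Python. The PassMark scores here are placeholders for illustration only, not the live figures behind the links above; the TDP values are Intel's listed specs, but verify them on the spec sheets before drawing conclusions.

```python
# Rough "efficiency" comparison: PassMark score divided by TDP.
# The scores are placeholders -- substitute the current numbers
# from cpubenchmark.net; the TDPs are Intel's listed values.

cpus = {
    # name                  (passmark_score, tdp_watts)
    "E5-2650L v3 @ 1.8GHz": (13000, 65),    # placeholder score
    "E5-2650L v4 @ 1.7GHz": (14000, 65),    # placeholder score
    "E5-2650  v4 @ 2.2GHz": (15500, 105),   # placeholder score
}

for name, (score, tdp) in cpus.items():
    print(f"{name}: {score / tdp:.0f} PassMark points per watt")
```

Even with made-up scores, the pattern is clear: a modest bump in raw score does not pay for a big jump in TDP if the box runs 24/7.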
 
What are your "speeds and feeds"? How fast do you need?

If you have a single disk drive and 100BASE-T Ethernet, the answer will be one thing. If you have 360 disk drives and 24 SSDs attached, plus two 100-gigabit networks and a few InfiniBand cards, the answer will be different. I've seen storage servers for both situations.

In my experience of building large storage servers (although never with ZFS), the real problem is not CPU power (integer and vector operations), but typically IO bandwidth, either on the storage stack (CPU -> PCI bus -> disk interfaces such as SAS expanders -> disks) or on the network stack (which is why I recommend InfiniBand rather than Ethernet). When going for high performance, max that out first, until you're limited by the PCIe lanes. Consider processors from AMD or IBM that have more PCIe bandwidth. On the CPU end, the bottleneck is probably going to be the memory interfaces (from CPU to RAM) rather than CPU operations. I would look for server-class processors with the most and fastest memory buses.
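To make that concrete, here is a back-of-envelope sketch in Python for the "360 disks plus 24 SSDs" case above. Every per-device throughput and the 40-lane PCIe budget are assumptions chosen for illustration, not measurements from any real server.

```python
# Back-of-envelope bandwidth check for the "360 disks + 24 SSDs" example.
# All per-device figures below are assumptions, not measurements.

hdd_count, hdd_mb_s = 360, 200    # assumed sequential MB/s per spinning disk
ssd_count, ssd_mb_s = 24, 500     # assumed MB/s per SATA SSD
nic_count, nic_gbit = 2, 100      # two 100 Gbit network links

storage_gb_s = (hdd_count * hdd_mb_s + ssd_count * ssd_mb_s) / 1000
network_gb_s = nic_count * nic_gbit / 8   # Gbit/s -> GB/s

# A PCIe 3.0 lane carries roughly 1 GB/s of usable traffic, so a
# 40-lane Xeon has on the order of 40 GB/s of total I/O budget.
pcie_gb_s = 40 * 1.0

print(f"aggregate disk bandwidth:    ~{storage_gb_s:.0f} GB/s")
print(f"aggregate network bandwidth: ~{network_gb_s:.0f} GB/s")
print(f"PCIe 3.0 budget (40 lanes):  ~{pcie_gb_s:.0f} GB/s")
```

With those assumed numbers, the disks alone could saturate the PCIe budget long before the CPU cores run out of cycles, which is the point: spend your attention on lanes and memory channels, not clock speed.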
 
Plus, knowing what your SAN is serving up would be useful. An all-NFS appliance versus a mix of iSCSI, Samba, and NFS could have different hardware needs. Then consider the number of users on each of those services.
Memory sizing is probably more important than CPU speed. A V3 or V4 Xeon is well within scope for most SAN needs.
A good question is single 2011 versus dual 2011. Dual also brings you more PCIe lanes, whose importance ralphbsz mentions.

If you still have a CPU question, post the actual CPU choices you have and I can judge better.
As you can tell, I am a fan of the 26xxL chips. Low power means less heat, and heat is a killer, especially in a large disk array.
The disks themselves put off a lot of heat.

I am not an expert on storage, but I can see the increased speeds of NVMe as a contender here if you're building a serious appliance.
Many Supermicro, Dell, and HP chassis offer 2.5" bays for four NVMe drives, with the rest being conventional slots.
 
Dual CPU is problematic for IO- and network-heavy operations. That's because PCIe lanes are attached to one CPU or the other, and today memory is also attached to one CPU or the other. And in a storage server, you don't have the freedom to pick which disk to read from or write to, or which network card to transmit the data over. In theory you have some freedom to pick which memory to place the data in (memory attached to the CPU that is nearer the data), but in practice this is very hard, and in many cases it doesn't even work. For example, if you do a multi-disk RAID write, most likely half the disks will be attached to the "wrong" CPU. CPUs today have relatively fast inter-CPU buses, but not having to cross that bridge at all is better than going over a fast bridge.

I'm not saying to give up on a dual-processor motherboard when you need it; just don't expect the scale-out from it to be anywhere near linear.

For ultra-high-end servers, NVMe makes the problem considerably more difficult because it is so fast. It has the potential to move the bottleneck elsewhere, so the disk itself is no longer the limit.

And cooling disks is important. Even more important is keeping their temperature relatively constant; they don't like temperature fluctuations. Vibration really hurts disks, so buying good (vibration-isolated) disk enclosures is important too.
 
For a SAN, you'd be better off going with an AMD Epyc CPU. Even the lowliest 8-core Epyc includes support for 128 PCIe lanes, which is almost 3 times what you'd get from a Xeon system. More PCIe lanes means more storage can be attached, or more network/InfiniBand/etc. controllers. And there are several Epyc motherboards that include multiple 10 Gbps Ethernet ports.
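A rough sketch of what that lane count means in practice, assuming roughly 1 GB/s of usable throughput per PCIe 3.0 lane (an approximation) and a typical 40-lane E5 Xeon socket for comparison.

```python
# Rough comparison of total PCIe 3.0 bandwidth, assuming ~1 GB/s of
# usable throughput per lane (an approximation).

per_lane_gb_s = 1.0
epyc_lanes = 128   # single-socket Epyc, per the post above
xeon_lanes = 40    # typical E5 v3/v4 Xeon, per socket

print(f"Epyc: ~{epyc_lanes * per_lane_gb_s:.0f} GB/s of PCIe bandwidth")
print(f"Xeon: ~{xeon_lanes * per_lane_gb_s:.0f} GB/s of PCIe bandwidth")

# An x8 SAS HBA or x8 100 GbE NIC each consumes 8 lanes, so the lane
# count directly limits how many controllers you can attach:
print(f"Epyc: room for roughly {epyc_lanes // 8} x8 controllers")
print(f"Xeon: room for roughly {xeon_lanes // 8} x8 controllers")
```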
 