Dual-CPU setups are problematic for IO- and network-heavy operations. That's because PCIe lanes are attached to one CPU or the other, and these days memory is also attached to one CPU or the other. In a storage server you don't get to pick which disk to read from or write to, or which network card to transmit the data over. In theory you have some freedom to pick which memory to place the data in (ideally, memory attached to the same CPU as the device doing the I/O). In practice this is very hard, and in many cases it doesn't even work. For example, if you do a multi-disk RAID write, most likely half the disks will be attached to the "wrong" CPU. CPUs today have relatively fast inter-CPU links, but never crossing that bridge at all is still better than crossing a fast one.
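For what it's worth, here's a rough sketch of the "pick the right memory" idea on Linux: read the NUMA node a device reports in sysfs and launch the I/O worker bound to that node with numactl. The sysfs path, the interface name eth0, and the worker command are assumptions for illustration, not a drop-in recipe.

```python
#!/usr/bin/env python3
"""Rough sketch: run an I/O worker on the NUMA node its device is attached to.

Assumptions (not from the original post): Linux with numactl installed, a NIC
called eth0, and a hypothetical worker command "my_io_worker".
"""
import subprocess
from pathlib import Path


def device_numa_node(sysfs_numa_file: str) -> int:
    """Return the NUMA node a PCIe device reports in sysfs, or -1 if unknown."""
    try:
        return int(Path(sysfs_numa_file).read_text().strip())
    except (OSError, ValueError):
        return -1


if __name__ == "__main__":
    # Which CPU socket is this NIC hanging off?  (assumed sysfs path)
    node = device_numa_node("/sys/class/net/eth0/device/numa_node")
    print(f"eth0 reports NUMA node {node}")

    cmd = ["my_io_worker", "--iface", "eth0"]  # hypothetical worker
    if node >= 0:
        # Bind both execution and memory allocation to the device-local node,
        # so the buffers the worker touches come from memory next to the NIC.
        cmd = ["numactl", f"--cpunodebind={node}", f"--membind={node}"] + cmd
    subprocess.run(cmd, check=False)
```

Even when this works for one device, the RAID example above shows why it breaks down: half the disks in the array will still report the other node.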
I'm not saying to give up on a dual-processor motherboard when you need one, but don't expect the scale-out from it to be anywhere near linear.
For ultra-high-end servers, NVMe makes the problem considerably harder because it is so fast: it has the potential to move the bottleneck elsewhere in the system (the inter-CPU link, memory, or the CPUs themselves), so the disk itself is no longer the limiting factor.
And cooling disks is important. Even more important is keeping their temperature relatively constant; disks don't like temperature fluctuations. Vibration also really hurts disks, so buying good (vibration-isolated) disk enclosures is important too.
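If you want to see how much a drive's temperature actually swings over the day, a minimal poller like the sketch below will log it (assuming smartmontools is installed and run with root privileges; the device name and the SMART attribute name are assumptions, and some drives report temperature under a different attribute such as Airflow_Temperature_Cel).

```python
#!/usr/bin/env python3
"""Rough sketch: log a drive's temperature once a minute to watch fluctuations.

Assumptions (not from the original post): smartmontools is installed, the
drive is /dev/sda, and it exposes a "Temperature_Celsius" SMART attribute.
"""
import subprocess
import time

DEVICE = "/dev/sda"  # illustrative device name


def read_temp_celsius(device: str) -> int | None:
    """Parse `smartctl -A` output and return the raw temperature, if present."""
    out = subprocess.run(
        ["smartctl", "-A", device], capture_output=True, text=True, check=False
    ).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            fields = line.split()
            # Column 10 of the SMART attribute table holds the raw value.
            if len(fields) >= 10 and fields[9].isdigit():
                return int(fields[9])
    return None


if __name__ == "__main__":
    while True:
        print(time.strftime("%H:%M:%S"), DEVICE, read_temp_celsius(DEVICE))
        time.sleep(60)
```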