Reliable file server

Hello,

I need to replace an at least 10 year old server which has been very reliable: Supermicro 745BTQ-R920, Opteron 4334, 64 GB ECC RAM, RAIDZ2.

Foremost requirement is reliability. The intended use is file server for about 30 users; possibly later up to 60. File sizes vary widely, up to several GB. Total used space now is about 2 TB.

I'm looking to get something similar built, with some more up-to-date parts, but keeping what has been proven reliable, along these lines:

- Supermicro 745BTQ-R920 case
- Supermicro H12SSL-C (MBD-H12SSL-C-O)
- AMD EPYC Rome 7302P
- Noctua NH-U9 TR4-SP3 CPU cooler
- Kingston KSM32RS4/16HDR x4 = 64 GB RAM
- WD WD8002FZWX or Seagate ST8000VN004, 3 or 4 each for RAIDZ2

Does this look reasonable in 2023? Your advice is welcome.
 
I see nothing wrong with your parts list.
What concerns me is re-using 10 year old power supplies.
Sure, they are reliable, but they have been sucking every bit of crap out of the air for a decade.
I suggest you blow out the inside of the chassis with compressed air. Straight through the power supplies too.
 
Cooler might not fit. Supermicro boards have their own base plates. It's attached to the socket, so you can't replace it. You probably need to get a CPU cooler from Supermicro. They have different options, both passive and active coolers. Their website should have a matrix that tells you which coolers fit on which boards.
 
Also, you are speccing the same amount of RAM as the machine you got 10 years ago.
Up that to 256 GB ECC for 30-60 people.
 
I think your storage system choice is weak too for 30 people.
Ditch the spinners.
Put a mirrored pair of AIC PM1733s in as SLOG and 8 mid-enterprise NVMe drives as backing storage, or maybe 9 in three vdevs.
 
Also in regards to the power supplies. I am all for re-use.
But 10 years is a good amount of time. Look into getting a spare, even if used.
I am assuming the rig has redundant power supplies.
 
Oh yeah, and what about cabling? Make sure the power cables will work for your board, E-ATX and all that. Some SM boards also have dual 8-pin CPU power connectors.
And what about drive bays and I2C cabling? How is cabling there....
<AHCI SGPIO Enclosure 2.00 0001>
 
One last concern before lunch: the front panel connector. How do the connectors look there?

I have had to do some pretty sorry hacksaw jobs to get stuff working....

Supermicro has great docs. Compare FP headers on the two boards. Old and new.
 
Thank you all, gentlemen.

- Power supplies: I was probably not clear; I'm getting a new case, which comes with its own power supplies; the old box needs to keep running. I also have a couple of spares on hand.

- RAM: I hardly ever see swapinfo other than 0%, FWIW...

- HD vs. SSD: for 8 drives, each 8 TB it's about $1,500 vs. at least $4,500, so, I'm afraid it will have to be the spinning kind.

- Thank you for the reminder about the CPU cooler, I'll check.
 
If you don't have problems with RAM or disk speed now, you won't have any with that new machine.

Performance will be similar, dominated by the magnetic disks. Disks newer than the 10-year-old ones will bring some extra speed.

Whether more RAM helps performance depends on usage patterns. Lack of swap usage won't be much of an indicator, since the ZFS ARC only uses what RAM is available. But DDR4 registered ECC is dirt cheap on eBay right now; I just loaded up quite a bit.
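If you want to see what the ARC is actually doing before deciding on more RAM, FreeBSD exposes it via sysctl (OID names as on recent FreeBSD/OpenZFS releases; verify on your system):

```shell
# Current ARC size in bytes:
sysctl kstat.zfs.misc.arcstats.size
# Configured ARC ceiling (0 means auto-sized from installed RAM):
sysctl vfs.zfs.arc_max
```

If the ARC is pinned at its ceiling and the hit rate is poor, more RAM will help; otherwise probably not.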
 
Supermicro X10SLL-F
If I look at a picture of that board, it doesn't have a base plate. The four holes where the cooler should be are free on this particular board.

The H12SSL-C doesn't have free mounting holes. The base plate for the cooler is integrated and part of the CPU socket.
 
Fine machine.

No idea about those hard drives. I still use Toshibas.
That would also be my advice: use Toshiba instead of WD or Seagate.

In my experience over the last 5 years, Toshiba hard drives have had fewer problems and less data loss than WD and Seagate drives.
 
The Supermicro stuff looks great.

We don't have much information about the behaviour of the workload, but here are a few ideas to consider.

You have sufficient drive bays to use striped mirrors, which will have markedly superior write performance to RAIDZ2. The redundancy equation is a little different; striped mirrors are only slightly less resilient.
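As a back-of-the-envelope comparison for 8 x 8 TB drives (a sketch of raw numbers; real ZFS usable space will be somewhat lower due to metadata and padding):

```shell
# Raw usable capacity for 8 x 8 TB drives in the two layouts.
drives=8
size_tb=8

# RAIDZ2: one vdev with two parity drives; survives ANY two drive failures.
raidz2_tb=$(( (drives - 2) * size_tb ))

# Striped mirrors: four 2-way mirrors; survives one failure per mirror,
# but losing both halves of the same mirror loses the pool.
mirror_tb=$(( drives / 2 * size_tb ))

echo "RAIDZ2:          ${raidz2_tb} TB usable"
echo "Striped mirrors: ${mirror_tb} TB usable"
```

So the trade is roughly 48 TB vs. 32 TB usable, in exchange for better write IOPS and faster resilvers on the mirrors.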

The motherboard supports two PCI-E 4.0 x4 M.2 expansion slots. So you have the option to use M.2 SSDs, optionally with power loss protection (PLP), for a zroot mirror.

PCI-E 4.0 x4 M.2 SSDs are orders of magnitude faster than spinning disks and open up the options for:
  • separation of the boot media and operating system from the application data pools;
  • a separate ZFS Intent Log (ZIL);
  • a special VDEV to hold metadata (and optionally small files) for the main pool; and
  • an L2ARC.
You might not want, or even benefit from, all of these things, and some require PLP. But they are worth considering.
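For illustration, the add-on vdevs above could be attached along these lines. This is a hedged sketch: the pool name "tank" and the nvdX device names are placeholders, not from this thread, and the SLOG and special vdev should sit on mirrored PLP devices.

```shell
# Placeholder pool and device names; run as root on a system with ZFS.

# Mirrored SLOG (separate ZIL) on two PLP NVMe devices:
zpool add tank log mirror nvd0 nvd1

# Mirrored special vdev for metadata (and optionally small blocks):
zpool add tank special mirror nvd2 nvd3
zfs set special_small_blocks=32K tank

# L2ARC needs no redundancy; a single cache device is fine:
zpool add tank cache nvd4
```

Note the mirror on the special vdev is not optional in practice: unlike log and cache devices, losing a special vdev loses the pool.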
 
M.2 drives with PLP seem to be rare, though. I am only aware of common availability of the Micron 7450, and then the 2 TB drive is 22110 length.

I'd rather keep hotswap intact with the SATA drives.
 
All the enterprise M.2 drives are 110 mm from what I have seen.
I have Samsung PM963 paired on SM paddle board and PM983 paired on paddleboard both offering PLP.

Personally I would not buy another M.2 drive.
Only U.2 and now U.3 drives in PCIe 4.0 format. 2.5" drives are more convenient for me.
The switch to U.3 sucks for me as my NVMe drive bays were not cheap.

I would love to try dual-port connections on some of the newer PM1733 U.3 drives...
 
The U.2 stuff is attractive if you consider used drives. The 3.84 TB drives go cheap-ish on eBay because most people can't use them. "Controllers" are almost free new. I just wish the hot-swap frames weren't so expensive.
 