Considering buying an old server

I have an i7-2600K "server", and recently I hit the SATA hardware bug (some SATA ports can eventually die on motherboards of old revisions). I put in an LSI controller to work around it, but now I'm thinking of replacing the entire machine.
I'm aiming at a Supermicro X8xxxx motherboard with dual E56xx; they are very cheap but have everything in terms of virtualization. Should I go for 1U or 2U? Right now I have 4x4TB 3.5" disks in RAIDZ1 (please, don't insist on 1+0, I have my reasons) and 2 mirrored SSDs for root and caching. 1U with 3.5" bays means 4 disks, leaving my cache devices out of the picture (well, I could glue them into the empty space meant for a PCI device, but that's ugly).

I can use a PCIe NVMe card, but X8 boards can't boot from it. How good is USB 2.0 for root? I think it will be very slow to boot, but what then? The sticks will probably die fast, but I can put in 2 of them. Most X8 motherboards I've seen have 2 vertically aligned internal USB 2.0 ports, I suppose for exactly that reason.
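To quantify "slow to boot", here's a quick back-of-the-envelope (the throughput and read-volume figures are my assumptions, not measurements):

# Rough boot-time penalty for a USB 2.0 root device vs. a SATA SSD.
# USB 2.0 signals at 480 Mbit/s, but ~35 MB/s is a typical real-world ceiling.
usb2_mb_per_s = 35        # assumed effective USB 2.0 throughput
sata_ssd_mb_per_s = 400   # assumed SATA SSD sequential read
boot_read_mb = 500        # assumed data read from root during a typical boot

print(f"USB 2.0:  ~{boot_read_mb / usb2_mb_per_s:.0f} s of pure root I/O")     # ~14 s
print(f"SATA SSD: ~{boot_read_mb / sata_ssd_mb_per_s:.0f} s of pure root I/O")  # ~1 s

So maybe ten-odd extra seconds per boot, which a server that rarely reboots could probably tolerate.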

A 2U server means 8 to 12 3.5" HDDs, more room and, I believe, lower noise, but the cost is higher for the same performance.
BTW, I even saw something exotic like a blade chassis with 4x2 Xeons in 2U form!
What do you suggest?
Thanks.
 
Let me relay my recent experience. I bought a Lanner FW-7582 off eBay. It had a small amount of RAM and a pathetic G850 for a CPU, so I bought a Xeon E3-1260L, and I already had 16GB of RAM. So for 200 bucks I have a decent-scoring box, and it is exactly what I wanted: a virtualization box with lots of Ethernet ports.

Here are some of my tips:
Develop a system for removing the CPU cleanly.
The problem is those tiny contact fingers in the socket; I have seen them 'weld' themselves to the CPU pads.
Yanking the CPU can rip the fingers right out of the socket. I have a round illuminated magnifier, and even so, repairing the flimsy contacts these sockets use is almost impossible.
In the past two years I've forked up 2 boards myself.

So my recommendation is the following: find a board with the CPU you want already installed. Do not remove it from the socket; just clean the top and replace the thermal grease or pad.
If you can't find that, then consider this: the higher-wattage CPUs are the ones the socket fingers weld to,
especially overclocked or overworked CPUs. Removing the CPU can be the kiss of death for an old board.

That may be overly cautious advice; nowadays I take steps to minimize CPU removal damage.
When removing a CPU, I put my thumb on top and then release the clasp. Then, instead of man-handling it out of the socket, I have used 2 different methods: a duct tape loop at first, and now a small suction-cup rig I made.
So even with a jig I don't yank it out, but gently pull up a little at a time, coming out evenly so as not to drag the CPU across the socket's finger pins. They are really flimsy.

Installing a CPU into the socket is not as big a deal; it's on the way out that you can do damage.
Don't drop it in too hard, be gentle, don't wiggle it, and line up the notches.

How good is USB 2.0 for root?
Well, it's not ideal, but on my NAS I had only 4 SATA3 ports, so I used a USB DOM that fits the 10-pin internal USB header.
If you go industrial, I feel you don't need to double up.
For example, a USB3 module rated at 3 million hours MTBF for boards with an internal USB3 connector.
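For a rough sense of what an MTBF rating like that implies (a sketch; the 3 million hour figure is the vendor's claim, and the exponential-failure assumption is mine):

# Annualized failure rate (AFR) implied by a vendor MTBF claim.
# AFR ~ hours_per_year / MTBF while failures are roughly exponential.
mtbf_hours = 3_000_000   # vendor-claimed MTBF for the industrial USB module
hours_per_year = 8766    # average year, including leap days

afr = hours_per_year / mtbf_hours
print(f"AFR = {afr:.2%} per year")                    # about 0.29%
print(f"P(survives 5 years) = {(1 - afr) ** 5:.1%}")  # about 98.5%

That is why a single industrial module is arguably fine where you would mirror two consumer sticks.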

I use my NVMe in my compile rig; that's where it gets the most appreciation. My NAS is a lowly Supermicro X7SPA Atom.
Only you know the disk density you need.
I used a 2U for my compile server so I could run some low-profile peripherals.
So there is the divide: do you want to run more than one PCIe card? 1U means a riser is needed.
 
The problem is those tiny contact fingers in the socket; I have seen them 'weld' themselves to the CPU pads.
Interesting! I never thought a CPU could get stuck to the pads.

For example, a USB3 module rated at 3 million hours MTBF for boards with an internal USB3 connector.
Yep, industrial-grade USB exists, but does the slow connection affect overall system speed? Or is there no active interaction with the root filesystem's shared libraries after the hypervisor starts?

Web links to online shops selling (not too expensive) old Supermicro servers are appreciated.
Well, I saw a Supermicro X8DTE-F 2U with a Xeon E5620 (1 processor) for 130 bucks on eBay recently. The idea is that E56xx Xeons are _very_ cheap, but they have all the modern virtualization capabilities and are still supported by Intel (status "launched").
 
I wanted to mention that I am not sure the SM X8xxx series boards will take an NVMe drive. From my reading, they are all PCIe 2.x.
https://ark.intel.com/products/4792...r-E5620-12M-Cache-2_40-GHz-5_86-GTs-Intel-QPI

Interesting! I never thought a CPU could get stuck to the pads.
You have to remember some of these CPUs use 130W. All of that power is transferred to the CPU through roughly one third of the socket's 1366 contacts.
This is just something I have had to deal with on used goods.
'Weld' might be too strong a word; perhaps 'semi-fused' is better wording. Some of this effect might just be bad merchandise,
or maybe bad board design with too much power going over some pins versus others.
2 out of perhaps 40 boards isn't 'they all do it'; you just have to be cognizant.
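As a rough illustration of the current involved (a sketch; the core voltage and the power-pin fraction are my assumptions for an LGA1366 part):

# Back-of-the-envelope current per power pin on a 130W LGA1366 CPU.
tdp_watts = 130
vcore = 1.1                  # assumed core voltage under load
power_pins = 1366 // 3       # assume ~one third of the contacts carry power

current_total = tdp_watts / vcore             # ~118 A into the package
current_per_pin = current_total / power_pins  # ~0.26 A per contact finger
print(f"total = {current_total:.0f} A, per pin = {current_per_pin:.2f} A")

Sustained current like that across a marginal contact can heat it enough to stick, which fits the 'semi-fused' description.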
 
I understand your argument, but I have tried my NVMe in two different boards with PCIe 2.0 slots, and it was not recognized at all.
YMMV.
 
I understand your argument, but I have tried my NVMe in two different boards with PCIe 2.0 slots, and it was not recognized at all.
YMMV.
OK, I can consider an adapter with a B-key fallback: https://www.ebay.com/itm/M-Key-B-Key-M-2-NGFF-SSD-PCI-E-X4-Adapter-Slot-Converter-Card-to-SATA-Adapter/183186951653

However, I believe this is worthless (4 RAIDZ drives are faster than a USB 2.0-attached SSD), and I know my project doesn't work without a dedicated ZIL (SLOG) device, since database writes are limited by ZIL speed.
I hope I can get PCIe working.
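A quick sketch of why a USB 2.0-attached SLOG would be the bottleneck (all throughput figures are assumptions for illustration):

# Assumed sync-write paths: USB 2.0 SLOG vs. the RAIDZ1 vdev itself.
usb2_mb_per_s = 35       # assumed effective USB 2.0 throughput
hdd_mb_per_s = 150       # assumed sequential rate of one 4TB HDD
data_drives = 3          # a 4-drive RAIDZ1 has 3 data drives

raidz1_stream = data_drives * hdd_mb_per_s   # ~450 MB/s best case
print(f"RAIDZ1 = {raidz1_stream} MB/s vs USB 2.0 SLOG = {usb2_mb_per_s} MB/s")
# Every sync write would funnel through the ~35 MB/s device,
# so it would be slower than letting the pool keep the ZIL itself.

A SLOG only pays off if its latency and bandwidth beat the pool's; over USB 2.0 it can't.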
 
From a quick re-read, some people have it working in PCIe 2.0 slots (like 2 or 3 people, maybe).
I tried with a Q77 Kontron board and a QM77 Jetway board, both with Sandy Bridge CPUs.

The adapter cards you show are fine, but do realize the top (B-key) slot does not pass data through the PCIe slot; it uses the SATA connector on the back edge of the card. The top B-key slot does get its power from the PCIe slot.
Only the bottom (M-key) NVMe slot gets both power and signal through the PCIe connector.
 
The adapter cards you show are fine, but do realize the top (B-key) slot does not pass data through the PCIe slot; it uses the SATA connector on the back edge of the card.
Yep. I thought I could use it in case of problems with an M-key NVMe drive, or for the root partition, but now I'm thinking of looking for an adapter with 2 M-key slots on PCIe x8 (if such a card exists).
 