Repurposing a Supermicro server board as software development workstation

Do I need a different backplane? Or do I "just" need to remove the backplane?
Yes, that backplane only has SATA and SAS drive support.

Can you shoehorn in another backplane? Maybe, but there are other options here.

The X13 motherboard associated with this chassis has two M.2 slots.

So you should consider using them. The other option is a PCIe add-in card (AIC); the Samsung PM1735 is an example of one.
So for four NVMe drives in this box, use two M.2 and two AICs: one with a full-height bracket and the other with a low-profile bracket.
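If it helps later, a quick way to sanity-check that the M.2 drives and AICs actually came up is to list them from the OS. A minimal FreeBSD sketch; nvme0 is just an assumed device name, so substitute whatever your system reports:

  # List every NVMe controller and namespace the kernel detected
  nvmecontrol devlist

  # Show one controller's PCIe capabilities, including negotiated link width
  # (nvme0 is a placeholder)
  pciconf -lc nvme0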
 
Maybe, but there are other options here.
Thank you; I’ve definitely been using M.2, but I’m interested in learning about U.2 and how one is actually meant to install it in a multiple-drive scenario.

Maybe I’m just looking at the wrong class of machine? Is it meant to be more of a 2U server setup?

It must be simple, so what am I missing?
 
It must be so simple, but I'm not seeing it.
No, you are looking at cutting-edge hardware. Remember, Intel dragged its feet on the PCIe 4.0 bus, so these are some of the first offerings.

If you already own this hardware, I would make an internal drive cage to hold some drives where the full-height card rides.
I would measure that empty space to see if a 2.5" drive's width fits. You might need a power splitter for drive power.
It depends on what the power supply offers, really. 1U supplies are limited.
Do you have anything in the full height slot?

I would be interested to see what "1 Dedicated Internal HBA slot" means.
Maybe a mezzanine card, I don't know. With 1U you need to consider filling everything offered. Plenty of PCIe lanes, so few slots.
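One low-effort way to find out what is actually sitting in the slots (including that dedicated internal HBA slot) is to enumerate the PCIe devices from the OS. A rough sketch, assuming FreeBSD; mpr0 is only a guess at what a Broadcom/LSI SAS3 HBA would attach as:

  # Every PCIe device with its vendor/device strings
  pciconf -lv

  # Details for a specific controller, e.g. an LSI SAS3 HBA on the mpr driver
  # (mpr0 is a placeholder -- use whatever name pciconf -lv shows)
  pciconf -lvc mpr0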
 
OK, after further review: without slapping your own cage in the box, why not consider this:

Use the dedicated internal HBA slot for a SAS controller and put high-end drives in your 8 bays.

I got some Seagate Nytro SAS3 drives that do 1200 MB/sec writes. That is respectable and faster than some older NVMe drives.

If the internal HBA card is too much money, buy a Broadcom card for one of the slots. Check that your cabling to the backplane is good first.

One of the AOC cards they recommend is just a PCIe 4.0 LSI card, but once again it's $425 from Wiredzone. That seems high.
 
Do I need a different backplane? Or do I "just" need to remove the backplane?

The parts list says "Backplane BPN-SAS4-LB13A-N2 1U 8-Slot 2.5".

I wonder why the part number appears to be a SAS4 one but the chassis listing says SAS3.

It was a lot easier to replace a backplane under the old part-number system, because the backplane part number was based on the chassis.

For example, the Supermicro CSE-835 chassis has backplane BPN-SAS-835-TQ.

The important part is the suffix; it denotes what the backplane does. TQ means AMI backplane management.

When you see an "N" suffix it means NVMe.

Supermicro has several different 1U chassis, and thus different backplanes, but there are NVMe options.

If you're serious about transplanting, I would start studying other 1U backplanes to see what might be possible.
The bolt-hole pattern comes first. Most SM backplanes hinge off two or three notches on the bottom of the backplane that match up with the chassis.
You mentioned removing the backplane and going direct. That is not unreasonable if they don't sell a backplane that offers NVMe.
The problem with that is you lose every drive attached to it, because the backplane is one piece, so cabling becomes messy fast.

How about this for spitballing: you have a PCIe 5.0 x16 slot, so get a superfast AIC for it.

Otherwise, two x16 PCIe 4.0 cards to feed 8 NVMe drives in the bays without a backplane...
That's a wiring nightmare with power. You need a backplane for this.
 
Looking at the backplane part number, you have 2 NVMe bays out of eight. Do you have any different-colored drive trays?
 
The parts list says "Backplane BPN-SAS4-LB13A-N2 1U 8-Slot 2.5".
Yes, and all the literature says SATA/SAS slots, but I know the N2 suffix means two NVMe bays are supported. So maybe it's not cabled for it.

The chassis here is the CSE-113.
I can't find a manual for the SAS4 backplane, but the SAS3 version has slot 6 and slot 7 as SATA/SAS/NVMe.

So you need to look at the backplane to see what connectors are there.

On the SAS3 version, the two drive bays under the optical drive are the special SATA/SAS/NVMe slots.
 
Looking at the backplane part number, you have 2 NVMe bays out of eight. Do you have any different-colored drive trays?
Thank you for your ideas; I’m away from this machine for a while, so I can’t check anything on it. But no, there are no special drive trays and nothing obvious for NVMe; I need to have a proper look.

I got an “NVMe enablement kit” that has two drive carriers with orange tabs and NVMe labels, but I’m not sure what goes where.

I thought there would be some Supermicro documentation on how to set up your XXX machine with U.2, but I’m not seeing any such thing.
 
To me it is really strange that the page for the chassis does not mention NVMe bays at all.
The only hint is the backplane part number.
Hopefully it is wired and ready to use. With an NVMe enablement kit included, it has to be.
I would bet the same bays are used as on the previous version: slot 6 and slot 7 of the backplane.
 
In that case you need to check the backplane and study the connectors.

From the rudimentary pictures, I see an SFF-8654 connector on the motherboard with a cable coming from the front.

That could be used for either NVMe or SAS. I doubt it's a combo cable.

So look around the back side of slot 6 and slot 7 of the backplane. See what connectors are present and what is empty.

Physically verify the part number against the manual/webpage. Look for unsoldered pads where connectors could go.

It's possible this is a value machine and has items left off the backplane. I have no actual insight, but it is possible.
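If inspecting the board is inconclusive, a crude but reliable check once you're back at the machine is to drop an NVMe drive into a candidate bay and see where, or whether, it enumerates. A minimal FreeBSD sketch, assuming slot 6 and slot 7 are the suspects:

  # Drives reachable through the SAS/SATA side of the backplane
  camcontrol devlist

  # NVMe controllers seen over PCIe -- a drive will only show up here
  # if the bay it sits in is actually wired for NVMe
  nvmecontrol devlist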
 
For video cards: if you don't need anything fancy (no 120 Hz/144 Hz, no 4K), I would get something like an Nvidia Quadro P600/P620, which runs directly off PCIe slot power. If you're thinking of running something more intensive and trying local LLMs, get a 3090 with a bit of undervolting to make it less power hungry and run cooler; it turns out to be a sweet spot for LLMs thanks to its 24 GB of VRAM, and you can get one now for around $500-700 in the States. You'll still be running some VMs with GPU pass-through. You could also get cheap old Quadros with up to 24 GB of VRAM, but they will be slow, most will need cooling, and most are designed for server racks; you can find plenty of 3D files to print fan holders, though.
Or maybe run Proxmox? Then you can have a FreeBSD VM, a Linux VM, and a Windows VM if you need one, as your system is kinda beefy :)
 
For video cards: if you don't need anything fancy (no 120 Hz/144 Hz, no 4K), I would get something like an Nvidia Quadro P600/P620, which runs directly off PCIe slot power. If you're thinking of running something more intensive and trying local LLMs, get a 3090 with a bit of undervolting to make it less power hungry and run cooler; it turns out to be a sweet spot for LLMs thanks to its 24 GB of VRAM, and you can get one now for around $500-700 in the States. You'll still be running some VMs with GPU pass-through. You could also get cheap old Quadros with up to 24 GB of VRAM, but they will be slow, most will need cooling, and most are designed for server racks; you can find plenty of 3D files to print fan holders, though.
Or maybe run Proxmox? Then you can have a FreeBSD VM, a Linux VM, and a Windows VM if you need one, as your system is kinda beefy :)
Thank you for your comments! I actually ordered an RTX 4000 Ada.
 
This should be fun. P.S. The SFF one?
The regular-sized one.
Now you've got me thinking re: Proxmox. Our intention was to set up FreeBSD on these machines and also set up bhyve with Linux in a VM. However, it is our understanding (I'm just a C++ programmer) that it is kind of complex to pass the GPU through to a Linux VM on bhyve, whereas in Proxmox it is a relatively simple process to pass the GPU through to Linux and FreeBSD VMs? Given the config of these machines, they would only need additional RAM.
 
Correct, it's not as complicated as with bhyve. Since you have a Quadro-class card, you can have one card serving your FreeBSD, Linux, and Windows VMs.
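For what it's worth, the usual Proxmox passthrough recipe is roughly the following. Treat it as a sketch: the PCI address 0000:41:00 and VM ID 101 are made-up placeholders, and the IOMMU flag shown is the Intel one:

  # 1. Enable the IOMMU on the kernel command line, then regenerate the grub config
  #    /etc/default/grub:  GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
  update-grub

  # 2. Load the VFIO modules at boot by adding them to /etc/modules:
  #    vfio
  #    vfio_iommu_type1
  #    vfio_pci
  update-initramfs -u -k all
  reboot

  # 3. Confirm the IOMMU came up
  dmesg | grep -e DMAR -e IOMMU

  # 4. Hand the GPU to a VM (placeholder PCI address and VM ID; q35 machine type)
  qm set 101 -hostpci0 0000:41:00,pcie=1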
 
OK, so here's the update. But before that, I just wanted to thank everyone who responded to this thread. This very ignorant programmer is very thankful to you all.

As suggested by Phishfry and @cracuer@, we're opting for U.2 drives. However, my boss is still raising an eyebrow at their higher cost. I do have some questions that maybe somebody can help me with:
1. Are Intel Optane U.2 drives any good? They seem to be discontinued, but they're at a lower price than my first choice (the suggested Samsung 9A3).
2. The Supermicro board has 4 PCIe 4.0 NVMe x4 internal ports, so 4 U.2 drives will go there, in a zpool (see the sketch after this list).
3. I still have the Areca 1886 RAID card. I guess some time in the future I could use that for attaching some decommissioned regular hard drives, if needed, for locally accessing a replica of the PostgreSQL database we use for development.
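Since point 2 mentions a zpool across the four U.2 drives, here is the kind of layout I'd start from. It's only a sketch: the pool name and device names (nda0-nda3, or nvd0-nvd3 on older FreeBSD) are placeholders, and striped mirrors are just one reasonable choice:

  # Two mirrored pairs striped together (RAID10-style), 4 KiB sectors
  zpool create -o ashift=12 fastpool \
      mirror nda0 nda1 \
      mirror nda2 nda3

  zpool status fastpool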

However, there's something I'm confused about: that Areca card requires a PCIe x8 slot. I just spent some time reading its docs, and Areca states that it's a tri-mode card with 2 ports. The docs state that I can attach 4 U.2 drives to the card instead of 16 hard drives. At 4 lanes per U.2 drive, that's 16 lanes. How can those U.2 drives perform at NVMe speed if they're connected via an x8 slot? When I asked Areca tech support, this is what I got:
"No onboard PCIe switch, the onboard processor have 16 lanes to devices, 8 lanes to PCIe slot. So each NVMe drive on 1686-4NOD can work with x4 speed."

Thanks for reading!
 
Optane drives seek multiple times faster than conventional SSDs.

However, there's something I'm confused about: that Areca card requires a PCIe x8 slot. I just spent some time reading its docs, and Areca states that it's a tri-mode card with 2 ports. The docs state that I can attach 4 U.2 drives to the card instead of 16 hard drives. At 4 lanes per U.2 drive, that's 16 lanes. How can those U.2 drives perform at NVMe speed if they're connected via an x8 slot? When I asked Areca tech support, this is what I got:
"No onboard PCIe switch, the onboard processor have 16 lanes to devices, 8 lanes to PCIe slot. So each NVMe drive on 1686-4NOD can work with x4 speed."

If there is no PCIe switch on the card, you indeed need x16 for 4 × x4 U.2 drives: 4 drives × 4 lanes = 16 lanes, which is twice what an x8 slot provides. Support is confused as usual.
 
"No onboard PCIe switch, the onboard processor have 16 lanes to devices, 8 lanes to PCIe slot. So each NVMe drive on 1686-4NOD can work with x4 speed."
Indeed, very confused tech support. There has to be some chip in between if they underprovision the host link by half.
 