Repurposing a Supermicro server board as a software development workstation

Hello,
I will be getting access to a dual-processor Supermicro H12DSi-NT6 board that was part of a NAS server at work, and I need to repurpose it as a developer's workstation.

The board currently has 2 AMD EPYC 7352 CPUs, 128 GB of RAM, and an Areca 1886 RAID card.

I was planning on dumping the Areca card and getting one Highpoint Rocket 1508 NVMe AIC HBA, or better yet, two Highpoint Rocket 1504 NVMe AIC HBAs, for a total of 8 NVMe modules. I could then have 4 NVMe modules on one zpool and put / there, and the other 4 modules on a different zpool for the programmer's data, including at least 1 Linux VM.
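
Roughly what I have in mind -- a minimal sketch only, assuming the eight modules show up as nda0-nda7 (the device names, pool names and mirrored layout are placeholders, and the root pool would normally be laid out by the installer):

```
# Hypothetical layout; nda0-nda7 and the pool names are placeholders.
# System pool for / (normally created by the FreeBSD installer):
zpool create zsys  mirror nda0 nda1 mirror nda2 nda3
# Data pool for source trees, the database and VM images:
zpool create zdata mirror nda4 nda5 mirror nda6 nda7
```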

Does this make sense, or maybe there's a better way to do this?

The workstation will be mainly used for C and C++ programming of digital signal processing and imaging applications. Said applications will need to access a local PostgreSQL database. The programming will also involve a lot of Java code, and since Eclipse is no longer supported on FreeBSD, the programmer will need to run a Linux VM, most likely on bhyve.
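
For the Linux VM, something along these lines is what I had in mind -- just a sketch using sysutils/vm-bhyve, where the pool name, NIC, ISO URL and VM name are all placeholders:

```
pkg install vm-bhyve bhyve-firmware grub2-bhyve
zfs create zdata/vms                              # "zdata" is a placeholder pool name
sysrc vm_enable="YES" vm_dir="zfs:zdata/vms"
vm init
cp /usr/local/share/examples/vm-bhyve/* /zdata/vms/.templates/
vm switch create public
vm switch add public ixl0                         # NIC name is a placeholder
vm iso https://example.org/ubuntu-server.iso      # placeholder URL
vm create -t ubuntu -s 60G eclipse-vm
vm install eclipse-vm ubuntu-server.iso
vm console eclipse-vm
```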

I also need to think of a decent but relatively inexpensive GPU card.

Thanks in advance for reading, and for your comments.
 
Highpoint Rocket 1508 NVMe AIC HBA
I don't think I would go this route.

Please don't get offended, but I think M.2 is best left to consumer goods, even the good enterprise ones at 110 mm.

You should consider two paths:

1) Real AIC with memory onboard. Samsung PM1735 or better. Make it a pair for zmirror.

2) U.2 or U.3 NVMe drives in some sort of drive cage arrangement.
 
Even within the Samsung AIC family you must really pay attention to the specs.

For instance, the <=1.6 TB drives don't perform nearly as well as the 3.2 TB and larger drives.

This is true for both the Samsung AICs and the U.2/U.3 drives.

Look at the IOPS difference here:
 
You should consider two paths: 1) Real AIC with memory onboard ... 2) U.2 or U.3 NVMe drives in some sort of drive cage arrangement.
Thank you *very much* Phishfry for your comments and suggestion. You're completely right, plus you apparently saved my team some money. After reading your comments, I went over the board manual a second time. It turns out that it has 2 mini-SAS ports, and to each I can connect an SFF-8654 (x8) to 2 x U.2 SFF-8639 cable, for a total of 4 U.2 drives.

If we need 4 additional U.2 NVMe drives, which HBA would be a good candidate?

Thanks!!
 
If we need 4 additional U.2 NVMe drives, which HBA would be a good candidate?
With a modern motherboard like yours, no HBA is really required.
These setups use dumb paddle cards, simply wired straight through to the PCIe signalling.
So it comes down to the connectors. For that you want to consider the end point: the drive bays.

So you want to make a decision there: are hot-swap bays needed or not?

One of IcyDock's 4-bay NVMe solutions uses SFF-8643 cabling, and there are other brands' bays with OCuLink connectors.

The cheapest route is no hot-swap: mount 4 drives (with ample spacing) in a fixed drive cage (a fan in front is nice).
Then use SFF-8639 drive cabling to an NVMe paddle card. That requires messy wiring compared to hot-swap cages...
You can see why AICs are a better choice from a wiring standpoint. If you have empty PCIe slots, consider an AIC first.

Another AIC that people rave about is Intel Optane. They have been discontinued but are still hot on the used market.
The P4800X and 905P may be good for your database requirements. The small drives have insanely high endurance, something like 30 DWPD.
The Intel P4600 can be found at a decent cost.
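
If the database ends up on ZFS, a rough sketch of a dataset for it (the pool name and PostgreSQL version are placeholders; the 8k recordsize simply matches PostgreSQL's 8 KB page size):

```
# "zdata" is a placeholder pool; recordsize=8k matches PostgreSQL's 8 KB pages.
pkg install postgresql16-server
zfs create -o recordsize=8k -o compression=lz4 zdata/pgdata
chown postgres:postgres /zdata/pgdata
sysrc postgresql_enable="YES" postgresql_data="/zdata/pgdata"
service postgresql initdb
service postgresql start
```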
 
Phishfry, diizzy -- thank you very much for your comments. Because of them, I started to dig a little more into the hardware side of things (I'm just a programmer :) ). I just learned about PCIe bifurcation (splitting) and realized that the Supermicro H12DSi-NT6 supports it, so no PCIe switch is needed. I do have 2 PCIe 4.0 x16 slots free.
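
Once a slot is set to x4x4x4x4 in the BIOS and a passive carrier card is fitted, I understand each drive should show up as its own NVMe controller -- something I plan to check roughly like this (just a sketch):

```
# Each NVMe module should appear as a separate PCIe device / controller.
pciconf -lv | grep -B 3 -i nvme
nvmecontrol devlist
geom disk list
```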

Any suggestion for a decent and relatively inexpensive video card?
 
I second the notion of skipping consumer SSDs and getting 1 or 2 used U.2/U.3 drives.

Nice machine. Let us know what `make world` is like.
 
You mean the video card?

For testing the developed imaging applications. Lots of image processing and OpenGL.

Thanks again for reading (and replying).
It depends on what you consider cheap. NVIDIA has official drivers for FreeBSD, but AMD works quite well depending on the model (the AMD drivers are open source, so no surprises).
 
I needed something better, so I bought an NVIDIA Gigabyte GTX 1060 6 GB locally for $50.

I am sure you can find faster, but it does well for the money.

You do have full height PCIe slots???
 
Is there an idiot's guide to using U.2 on Supermicro boards/servers?

I'd like to try but I'm just not sure what bits are needed - I've got various SM machines like the 510 and 111 - does it need a backplane or special cables/drive bays or some card?

I feel like the answers are in this thread (it is about Supermicro and U.2) but I'm still not able to join up the dots.
 
rrpalma
There is a basic "video card" built in via the BMC chipset, but are you talking about something to use for the Linux VM via bhyve? GPU passthrough support is very limited, and as far as FreeBSD goes your best bet is an AMD video card from a few years ago. In general I'd say grab a cheap Intel Arc, but I'd guess there are still a few months to go before support gets imported (it needs a newer Linux kernel backend).
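
For what it's worth, if you do try passthrough, the plumbing looks roughly like this -- just a sketch, with placeholder bus/slot/function numbers (taken from pciconf -lv) and a placeholder VM name:

```
# /boot/loader.conf -- reserve the GPU and its audio function for passthrough;
# 65/0/0 and 65/0/1 are placeholder bus/slot/function numbers from pciconf -lv.
pptdevs="65/0/0 65/0/1"
vmm_load="YES"

# In the vm-bhyve guest config (e.g. eclipse-vm.conf), hand the device to the guest:
passthru0="65/0/0"
passthru1="65/0/1"
```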
 
This is a great idea cracauer@ -- thanks. A showstopper, though, might be that U.2 drives are much more expensive than M.2, from what I just checked. So the question for you and Phishfry is: how much are the U.2 drives worth, performance- and reliability-wise? All our developers' machines are constantly synced against a NAS.
 
Decent NVMe drives will be fine; just use ZFS mirroring or whatever you prefer if you're concerned about data loss. I fail to see how the extra IOPS will help in this case.
 
So the question for you and Phishfry is: how much are the U.2 drives worth, performance- and reliability-wise?

Well, for starters, the U.2/U.3 drives usually have power-loss protection.
 
(Sorry for my thread hijacking, rrpalma, but I think we are sort of looking at the same thing.)

I've got a single-port StarTech version of that 4-port adaptor, I got a cable from the internet, and I have a Samsung U.2 NVMe drive and an nda device showing on an old Dell desktop that I'm playing with, so I'm making some progress.

The cable looks like this (can't swear it is this exact one): https://www.amazon.com/IO-Crest-SFF-8639-MiniSAS-SFF-8643/dp/B06WP2FXSS

I didn't get it working until I realised I have to use both connectors on the U.2 end - like I said I've been missing the very obvious!

So one end goes into the PCIe adaptor, and on the other end, one connector plugs into the drive, the other into a port to get SATA power (I thought this connector was some optional extra, not a vital part!)

But when I look at e.g. the Supermicro 111 machine and I imagine I've got the 4-port PCIe adaptor and the four cables - where/how do I plug those into Supermicro's drive bays? Or is this why you suggest the four-drive caddy, cracauer?

If I just wanted to give Supermicro all my money, what do they suggest? I'm trying to understand how it's "meant" to work - what parts I'm missing (both in terms of my mental understanding and the physical parts needed in the server!)
 
But when I look at e.g. the Supermicro 111 machine and I imagine I've got the 4-port PCIe adaptor and the four cables - where/how do I plug those into Supermicro's drive bays?

Not sure I understand. What Supermicro drive bays?
 
I've done this kind of thing in the past, albeit with big Xeon server boxes that were being retired rather than Epycs. You get nice fast compilation times on the big server chips with lots of cores using parallel make, and you have lots of RAM and storage for all your code and data. And if you have a small development team (or even a large team) they can all use the same build server. Your Epycs have 24 cores / 48 threads each, so you can "make -j 48" (or probably more) to saturate the CPUs, which is going to compile your C code at lightning speed :cool: . I would put as much RAM in that box as you can, and as it probably uses ECC RAM you can get that cheap as server pulls from eBay, etc. So max out the RAM; 128 GB isn't all that much, 512 GB would be better. Of course it all depends how many people will be using the box.
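
A sketch of the idea -- just let the job count follow the hardware thread count rather than hard-coding it (the project path is a placeholder):

```
# Use the number of hardware threads the OS reports as the starting -j value.
make -j$(sysctl -n hw.ncpu) buildworld buildkernel   # from /usr/src, as a stress test
make -j$(sysctl -n hw.ncpu) -C /path/to/project      # placeholder path for your own code
```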

Don't bother putting a graphics card in the server. Get a small desktop PC and use that as the front end, and ssh into the development/build server box over the LAN. One of the current crop of mini PCs would be ideal to use as a desktop; you can drive three 4K monitors even from a cheap N100 mini PC, which is the low end, and the next step up would be NUCs or similar equivalents. Then you can use the desktop box effectively as an X terminal and run an entire desktop, KDE say, over the LAN on the server box, either using ssh or just straight X (X -query...). There are lots of howtos on the web to tell you how to run a remote session over X11. A bonus is that when you get home you can ssh into the same development server box and do the same trick over your broadband. Check out FreeNX too, if there is a FreeBSD port. Or if you don't want to run the entire desktop over X11, you can of course just ssh in from terminals.
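
The two variants in a nutshell -- a sketch only, with host names and the application purely placeholders:

```
# Forward individual X11 programs over ssh (add -C to compress on slow links):
ssh -X dev@buildserver
some-x11-ide &          # placeholder; any X11 program now displays on the desktop

# Or run the whole remote desktop via XDMCP (needs XDMCP enabled on the server):
X :1 -query buildserver
```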

Hmm... however... image processing/DSP. Maybe you do need a GPU card after all. It all depends what you're doing; for example, does your image processing code run on the GPU itself? I just wanted to make you aware of what you can do with remote X. It gets harder putting GPU cards into server boxes if you want to work interactively and drive monitors with the graphics card. I've tried doing that kind of thing in the past, and I generally found it was more trouble than it was worth. If you need a high-power GPU, I would get a good workstation-grade desktop box as a platform for the GPU card and monitor, and use the server as a build server to get fast build times and for large data storage, and split it that way; that's what I have found the most effective way to use an old server for development in the past. Trying to make a server into a graphics workstation tends to be more trouble than it's worth. And if you're connecting the monitor to a graphics card installed in the server, and sitting next to it, the fan noise is going to be a real pain in the elbow when you are trying to get work done. Personally I would put the server in a different room, preferably on another floor in a closed cupboard, and talk to it over the LAN. You need a nice quiet environment to develop your code in, so that you can hear yourself think :).

As far as storage goes, don't bother getting Optane; it's obsolete, so it's a false economy even if it appears cheap. Get a decent current NVMe card and some decent current SSDs. Compatibility is everything.

P.S. Why write the front end in Java, if that's what you're doing? Use GTK or Qt. Why do you need Java at all??? :) Oh... is this thing going to run cross-platform?
 
I have six SuperMicro Xeon machines that came to me as manna from heaven.
They were scheduled to be destroyed (for a fee), so I was asked if I wanted to come pick them up for free.
One was an X9SRA; the others were X8DAL-I.

Over the years, I have had 100% success with SuperMicro boards for reliable use.
I do not overclock, and run everything at designed speeds.

These are ECC-capable boards, and that is what I use.
 
Not sure I understand. What Supermicro drive bays?
They are called drive carriers in the Supermicro documentation.

So e.g. looking at this system:


The front of the system has "8 hot-swap 2.5" SAS/SATA bays". I can lift the top cover off the machine and I see the backplane (if that's the right terminology).

So I say "right, let's get four U.2 drives in here", I can fit the PCIe adapter into a PCIe slot, then fit the four drives into four of those bays with the Supermicro carriers/caddies ... but then how do I connect the cables from the adapter card to (a) the drive and (b) the power?

Do I need a different backplane? Or do I "just" need to remove the backplane?

The parts list says "Backplane BPN-SAS4-LB13A-N2 1U 8-Slot 2.5".

It must be so simple, but I'm not seeing it, and I've not managed to find anything helpful on the internet (probably because I'm not searching for the right keywords).
 