Recommendations for a motherboard

I was hoping to build my own file server a few years ago and got as far as buying a case and a power supply, but then other things got in the way. The case is a Fractal Design Node 304 Mini-ITX box, and I am currently thinking about which motherboard to buy for it. The box will basically just run as a file server for my LAN, although I was wondering if twin NICs would speed up file retrieval.

Does anyone have any recommendations? It could be a used board.
 
I've been using Gigabyte mainboards for quite some time now. They advertise durability, and that seems to be true in practice. I recently retired several machines with Gigabyte mainboards that worked for over ten years without issue. I've never had a single failure with one. The other brand I see mentioned a lot here is SuperMicro.
 
Twin Ethernet doesn't help with speed if the ports are on the same physical network. Any one port can max out the network on normal machines. They do help if you use your machine as a router, or if you have multiple physical networks.

I would buy something that has enough SATA ports on the motherboard for all the disks you can imagine using, +1 or +2. That makes it easier to temporarily connect an extra disk for migration.

I decided, several years ago, that I didn't need much CPU horsepower for my server at home, so I went with a 32-bit Atom. The nice thing is that the power consumption of my server tops out at ~35W, and I think most of the time it is around 10W or so. I need to measure what it really uses these days.
 
My really old NAS uses a SuperMicro X7SPA-H-D525 in a four-removable-bay Mini-ITX chassis.
I wanted to use SAS drives, so I added an LSI 9212 for four SATA3/SAS2 drives. The mainboard was bought used back in 2014.
The SAS9212 is primarily an HP part. I like that it uses commodity cables. Flash it to IT mode and it makes a great controller.
I use it as an iSCSI file server but commonly power it down as needed. I originally used it with NAS4Free.
I have two identical Buffalo TeraStations behind that as my real cold storage.
My always-on storage is a little N270 MSI Wind Box with two enterprise 512GB Intel SATA SSDs in a gmirror, serving NFSv3.
It's a really old platform with only SATA2, but the thing is fanless and draws under 10 watts. I love it. The drives are more recent.

I have a 24-bay ZFS experiment stalled out while building out my rack.
I have two 24-bay Chenbro cases with 24 drives for the project. I might do 12 drives x 2 chassis.
I would like to do 24x SSD, but I am not rich. The fact that my 2U chassis use identical sleds was a real bonus.
 
I was wondering if twin NICs would speed up file retrieval.
That would depend on your network design. You could use an extra NIC to:
  1. create a private (back-to-back) connection to a single demanding client; or
  2. create a separate subnet to isolate the traffic to and from a set of clients (you would need a separate switch for those clients); or
  3. use lagg(4) to aggregate multiple NIC ports into a switch.
Depending on network design, 1 and 2 above would probably need routing enabled on your server.

You would want to understand your goals and have a design sorted before wading into link aggregation.
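If you do go down the lagg(4) road, a minimal LACP setup in /etc/rc.conf looks roughly like the sketch below (em0/em1 and the address are just placeholders for your own interfaces and subnet, and the switch ports have to be configured for LACP as well):
Code:
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.1.10/24"
Keep in mind that LACP balances per flow, so a single client connection still tops out at one port's speed.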

At home, I have private network connections between my ZFS server and my KVM server, and also between my ZFS server and MythTV server. The rest of the network is flat because there's just not enough other traffic to justify the effort of changing it.

I don't buy single NICs any more. I get used quad-port Gigabit Intel NICs off the Internet (typically pulled from IBM or Dell servers). They are cheap and work well, but only have 2 Gigabits total throughput (not 4 Gigabits, as you might hope). But don't buy new ones from the Chinese sellers for $US35 -- they are fakes, and prone to failure.
 
But don't buy new ones from the Chinese sellers for $US35 -- they are fakes, and prone to failure.

Oh ... that's a shame. I was going to order a stack of them because they do look shiny and new; I thought they might be old stock / unissued IBM kit, which would be a bargain at $45 AUD.

Ah well.

In my case, I'm not too worried about actual throughput or CPU overhead - I just want a number of physical NICs to make it a bit safer/easier to have multiple virtual switches for development and lab purposes.
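If that lab ends up on FreeBSD with vm-bhyve (just an assumption about your setup), giving each physical NIC its own virtual switch is straightforward; igb0/igb1 are placeholders here:
Code:
# create two separate virtual switches, each bound to its own physical port
vm switch create lab0
vm switch add lab0 igb0
vm switch create lab1
vm switch add lab1 igb1
Guests attached to lab0 and lab1 then never share an uplink.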
 
but only have 2 Gigabits total throughput (not 4 Gigabits, as you might hope)
I think you have your numbers wrong.
For a Gigabit Ethernet adapter, each port needs 125 megabytes per second, i.e. 1 gigabit per second.
So 500 megabytes per second are needed for a quad-port card.
If you look at the chart here you will see that PCIe 2.0 x1 supports 500 megabytes per second.
All (real) quad Intel NICs use an x4 interface, giving you 4x the bandwidth you need at the bus.
Intel Gigabit Ethernet cards will do around 950 megabits per second of actual throughput without any filtering.

For quad 10G you need 1250 megabytes/sec x 4 ports, or 5 GB/sec.
That means PCIe 3.0 x8 is required for full speed on these cards.
 
I think you have your numbers wrong.
I wish I did.

The cards I have identify as "Intel(R) PRO/1000 Network Connection". I'm fairly sure that this is the adapter. Mine are out of IBM servers.

One-, two-, and four-port models exist. They identify on the PCI bus with lspci(8) as:
Code:
Ethernet controller: Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller (Copper) (rev 06)
Each port is capable of gigabit throughput.

The ones with one port do 1 Gbit total aggregate and the ones with 4 ports do 2 Gbits total aggregate. Same results on both Linux and FreeBSD. I don't have any two-port models to test (but I'll bet that they only do 1 Gbit).

Disappointed, I looked into this and believe it's down to the 82571EB Ethernet controller chips on the card. Single-port cards have one, dual-port cards have one, and quad-port cards have two.

Different model Intel cards may use different Ethernet controller chips, and may behave differently.
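If you want to double-check what PCIe link one of these cards actually negotiated (to rule out the slot itself as the limit), pciconf on FreeBSD will show it; em0 below is just a placeholder for whatever device name the card gets:
Code:
# list the device's PCI capabilities, including the negotiated PCIe link width
pciconf -lc em0
# look for a line along the lines of:
#   cap 10[e0] = PCI-Express 1 endpoint ... link x4(x4)
On Linux, lspci -vv reports the same information in the LnkCap/LnkSta lines.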
 
OK, I see. Yes, some of the older Intel quad cards required a PCI multiplexer onboard.
I just tested a Silicom quad Intel SFP card that I was trying to use to connect my SG300 on my new switch build.
It used a PLX controller and it seemed rather slow. I was wondering if that was some kind of bypass feature.
Now I've switched to a Silicom i350-based dual SFP, and that works nicely for interconnecting old gear.
I also have a pair of IBM 4-port RJ45 Intel cards in my virtualization server. They work fine. FRU 39Y6138.
Code:
ppt2@pci0:132:0:1:    class=0x020000 card=0x10bc8086 chip=0x10bc8086 rev=0x06 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82571EB/82571GB Gigabit Ethernet Controller (Copper)'
    class      = network
    subclass   = ethernet
 
Here is the quad Intel Silicom PE2G4SFPI6L-R SFP card.
Not recommended at all. Notice the PLX chip preceding it in the output. Poor performance.
Maybe it's interference from the Supermicro onboard PLX on the X9SRL mobo.
Code:
pcib7@pci0:5:3:0:    class=0x060400 card=0x861710b5 chip=0x861710b5 rev=0xba hdr=0x01
    vendor     = 'PLX Technology, Inc.'
    device     = 'PEX 8617 16-lane, 4-Port PCI Express Gen 2 (5.0 GT/s) Switch with P2P'
    class      = bridge
    subclass   = PCI-PCI
igb0@pci0:6:0:0:    class=0x020000 card=0x10e78086 chip=0x10e78086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82576 Gigabit Network Connection'
    class      = network
    subclass   = ethernet
 
Here is a dual-port Gigabit SFP card, the Silicom PE2G2SFPI35, that performs up to snuff.
Code:
igb4@pci0:12:0:0:    class=0x020000 card=0x15228086 chip=0x15228086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'I350 Gigabit Fiber Network Connection'
    class      = network
    subclass   = ethernet
 
I just realized I was off by a factor of two: Gigabit Ethernet requires 2 gigabits of throughput for full duplex.
So that's 250 megabytes/sec per port, for a total of 1 gigabyte/sec of bandwidth required, i.e. two PCIe 2.0 lanes.
That's 8 gigabits total aggregate for a quad Gigabit NIC.
Actual testing shows they deliver 930-950 megabits in one direction once network overhead is considered.
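For anyone wanting to reproduce those numbers, iperf3 between two hosts is the usual quick check (10.0.0.1 is just a placeholder for the server's address):
Code:
# on the receiving machine
iperf3 -s
# on the sending machine, run a 30-second test
iperf3 -c 10.0.0.1 -t 30
# add -R to measure the reverse direction over the same path
iperf3 -c 10.0.0.1 -t 30 -R
A single TCP stream over Gigabit typically lands right in that 930-950 megabit range once framing and TCP/IP overhead are subtracted.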
 