[Solved] Hardware for router

Hi!
I was not sure if I should put this in networking or off-topic. I am looking for a relatively cheap ITX motherboard and computer case that can handle two x8 PCIe cards via bifurcation, without cutting metal or any other kind of tinkering. I'd like to have something small for my home network, but I need 2 SFP+ ports (10 Gbps) and at least 2 RJ45 ports (1 Gbps) in addition to the 1 RJ45 (1 Gbps) that comes with the motherboard, and splitting a PCIe x16 slot into 2x8 appears to be the best solution. I heard that a PCIe riser is not enough and that I need a PCIe splitter along with a motherboard and CPU that support PCIe bifurcation if I want to solve this on a low budget. I tried to research this topic, but I did not find any good article or video that discusses it. I am curious whether anybody has solved this problem and has this kind of router + firewall at home?
 
This kind of sounds like a request for a unicorn: Must be esoteric and cheap.

Why does the router need a link that huge? Do you have a 20 Gbps connection to the internet? The monthly bill is going to cost more than the router.
 
What makes you assume he needs the 10Gb NICs for his internet uplink? Maybe he wants it for his NAS, or some other local connection.

I have connected my workstation to my home server (which also acts as a file server) with 10Gb Ethernet. Copying ISO images around and things like that is a lot more fun than with the old Gbit Ethernet. ;) Once you have fast media like NVMe SSDs, the network really becomes the bottleneck. And as a nice side effect, when you use an optical fiber cable for the 10Gb link, it is immune to interference from other cables.
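For a rough sense of those numbers, here is a tiny back-of-envelope sketch in Python; the framing-overhead factor and the NVMe figure are assumptions for illustration, not measurements:

# Rough numbers (assumptions, not measurements) showing why gigabit
# Ethernet becomes the bottleneck once the storage is NVMe.

def tcp_payload_rate(link_gbps, efficiency=0.94):
    """Approximate usable TCP payload rate in MB/s for a given link speed.
    efficiency ~0.94 is an assumed allowance for Ethernet/IP/TCP framing
    overhead with a 1500-byte MTU; real numbers vary."""
    return link_gbps * 1e9 * efficiency / 8 / 1e6

nvme_read_mbps = 3000  # assumed sequential read speed of a mid-range NVMe SSD

for gbps in (1, 10):
    rate = tcp_payload_rate(gbps)
    print(f"{gbps:>2} GbE: ~{rate:5.0f} MB/s payload, "
          f"{rate / nvme_read_mbps:.0%} of the assumed {nvme_read_mbps} MB/s NVMe read speed")

In other words, a single NVMe drive can outrun a gigabit link many times over, while 10 GbE at least gets into the same ballpark.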
 
The problem is that you want three things at once:
  • Four (or five?) network ports, two of which are SFP+.
  • Small form factor (ITX).
  • Cheap.
You can have any two of those things without problems. But all three at once is not possible, I’m afraid.

So, let’s ignore the “cheap” part for now. ;)
There are quite a few ITX boards that can do what you want, but it won’t be cheap.

For example, the Supermicro X11SSV-M4 has quad GbE onboard (i.e. four RJ45 sockets on the back) and a PCIe 3.0 x16 slot, so you can put a dual SFP+ card there. The PCIe slot even supports bifurcation if you need that.

Alternatively, the X10SDV-16C-TLN4F+ already has Dual SFP+ and Dual GB Ethernet (RJ45) onboard. If you need more than that, it also has a PCIe 3.0 x16 slot (although I’m not sure if it supports bifurcation).

(By the way: The standard ITX form factor actually allowed two expansion slots. But the problem is that nearly all ITX mainboards today are actually Mini-ITX boards that allow only one slot.)
 
Actually, I figured it out in the meantime. It is possible to split PCIe with an ASRock Z- or X-chipset board that supports it and a riser card, e.g. the one in the Fractal Design Node 202 case, but those two parts alone will cost more than $200 and I need a lot of other parts too. So I thought about it, and I think I won't need a splitter, just a single dual-port SFP+ card with a regular ITX motherboard and case, which are a lot cheaper. As for the 2 RJ45 ports, I'll use USB3 -> RJ45 adapters and the problem is solved. I don't have 10 Gbps WAN, I just need this for the local network. In the long run I'll buy a new motherboard with 10 GbE if I upgrade the modem to higher bandwidth, but for now this would suffice and it is cheap. :)
 
Be sure to buy a good SFP+ card that is well supported by FreeBSD. I think the least expensive ones that work well with FreeBSD are Intel-based NICs (the single-port ones that I have cost about 100 €).

USB3 Gbit Ethernet adapters don’t work well with FreeBSD, unfortunately. You probably won’t be able to get 1000 Mbit/s out of them, and the CPU load will be high compared to PCIe NICs. So you’d better get an ITX board that already has the RJ45 ports you need onboard.

(Also see this message that I posted about two months ago in another thread.)
 
I ordered 2 Mellanox MCX311A cards from AliExpress for now. I am still thinking about which 2-port card to choose for the router. I'll use my server as a router with a virtual machine for a while before I buy dedicated hardware for it, so that can wait till next year.

I figured out in the meantime that I can do PCIe splitting and bifurcation with many motherboards that have X and Z chipsets by using a riser card (e.g. the one in the Fractal Design Node 202). It is an available feature on most ASRock motherboards, but to reduce the costs I think the best solution is using a regular mITX motherboard with a 2-port SFP+ card and solving the 1 Gbps RJ45 ports with USB3 Ethernet adapters. I already have a cheap USB adapter and it can handle around 700 Mbps, so it is a good solution if I can give up some bandwidth on the slow WiFi network to reduce costs. I think this can cut the budget of the project in half, so I can do it by spending only $350-500 instead of $700-1000.

Later these solutions will be problematic if the bandwidth of my internet connection goes beyond 1 Gbps; then I'll have to replace the motherboard with one that has integrated 10GBASE-T to be able to connect it to the modem. But I think I still have 5 years before that, so it is not a real issue now, and motherboards with 10 Gbps connections will become cheaper in 5 years.
 
I'd expect fake Mellanox cards to die before "cheap" 10Gb copper shows up on motherboards.

Current modems beyond 1 Gbps use LACP, because 10Gb is too expensive to implement only to use 1/5 of the available bandwidth.

I don't understand why you didn't just use a quad 1Gb adapter (for mythical 1 Gbps+ bandwidth possibilities), plus the one onboard, and call it a day, particularly since your internet connection admittedly doesn't exceed that.
 
"Fake Mellanox" works by others, so I don't expect them to fail. I don't have a big enough wall plate for another 4 keystone modules not to mention how ugly another 4 cables would look like. I need 10 Gbps for the local network, not for the internet. At least for now.
 
Requirements for whatever you're doing just keep getting weirder, like 2x the cables being too "ugly" and a 10Gb connection to the router, but not to the internet (sub 1Gb). Weird flex thread, imo.
 
Well, for me it is weird that you find it weird. It is a very simple home network, and doing it with 1 optical cable instead of 4 UTP cables looks better no matter who you ask. I need to share data between my home server and my PC at NVMe SSD speed (at least 10 Gbps), not at HDD speed (1 Gbps), so I can move everything to the server and access it just as fast as if it were on the SSD of the PC. Adding a 10 Gbps switch to the network and a router with VPN support is just the next step, but I thought I'd rather build my own router instead of buying these, to practice networking.
 
Not doubting the server to PC connection. It's the router with a LAN link that is an order of magnitude faster than the internet link.
 
Yes, that could be solved with a separate router and a small SFP+ switch, but I decided to do it in a single box. For now this will be my server, and later I'll just copy the network settings to a dedicated small computer and maybe modify them a little. If the internet bandwidth later exceeds 1 Gbps, then I'll just upgrade the motherboard and keep the same settings. I wanted a system that is relatively cheap and easy to upgrade...
 
You'll need quite beefy hardware if you expect to hit 10Gbps using SMB/NFS/iSCSI or whatever you want to use.
If you want a cheap router that runs FreeBSD and doesn't use ancient hardware, perhaps a RockPro64 + Intel NIC would be suitable, at least as a router? I have two boards running fine (i350-T2 NICs), but you "need" to run 13-CURRENT.
 
Thanks! I am not sure about the hardware yet. I'll test it with my server and PC using the 1-port cards after they arrive. I expect a 10-15% load at most when sending data on the local network. There are many CPU benchmarks, so it won't be a big deal to choose a CPU after the tests.
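If it helps with that test, below is a minimal sketch of a TCP throughput probe using plain Python sockets; the port number, transfer size and command-line handling are placeholder assumptions, and iperf3 from ports would do the same job more rigorously. It also prints the CPU time the process consumed, which gives a rough idea of the load:

# Minimal TCP throughput probe for a point-to-point link test.
# Unix-only (uses the resource module); port and transfer size are arbitrary choices.

import socket
import sys
import time
import resource

PORT = 5201                 # arbitrary test port (assumption)
CHUNK = 1 << 20             # 1 MiB per send/recv
TOTAL = 10 * (1 << 30)      # push 10 GiB through the link

def report(nbytes, elapsed):
    usage = resource.getrusage(resource.RUSAGE_SELF)
    cpu = usage.ru_utime + usage.ru_stime
    print(f"{nbytes / 1e6:.0f} MB in {elapsed:.1f} s "
          f"= {nbytes * 8 / elapsed / 1e9:.2f} Gbit/s, "
          f"CPU time used by this process: {cpu:.1f} s")

def server():
    # Run this end first on the receiving host.
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        print("client connected from", addr)
        received = 0
        start = time.monotonic()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
        report(received, time.monotonic() - start)

def client(host):
    buf = b"\0" * CHUNK
    sent = 0
    start = time.monotonic()
    with socket.create_connection((host, PORT)) as conn:
        while sent < TOTAL:
            conn.sendall(buf)
            sent += len(buf)
    report(sent, time.monotonic() - start)

if __name__ == "__main__":
    if len(sys.argv) == 2:
        client(sys.argv[1])   # e.g. python3 tput.py 10.0.0.2 (example address)
    else:
        server()

Start the receiving side on one host, then point the sending side at its address; the Gbit/s and CPU-time figures together give a first idea of whether the 10-15% load estimate holds.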
 
You'll need quite beefy hardware if you expect to hit 10Gbps using SMB/NFS/iSCSI or whatever you want to use.
I got my first 10gig ethernet cards sometime in 2006 or 2007. They were prototypes, built by some of the first small startups in that market (long before the big players had 10gig). We got two cards and a small switch (16 or 20 ports), and set them up on two motherboards, which had 1.5 or 2 GHz AMD Athlon CPUs (64-bit single core). That was still PCI-X, since PCIe didn't exist yet.

Within an hour we were running TCP links at 700 to 800 MBytes/s (fundamentally maxing out wire speed, with a reasonable safety margin), and within another two hours of work we were doing interestingly complex workloads (storage stacks, similar to iSCSI or NFS) over them at the same speed.

You don't need an enormous amount of CPU power to pump about 1 GByte/s around. A modern multi-core CPU can copy a dozen or two dozen GByte/s between IO and network; for 10gig Ethernet you use only a small fraction of that.
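As a quick back-of-the-envelope check of that claim (the per-byte copy count and the memory-bandwidth figure below are assumptions for illustration, not measurements):

# 10 GbE line rate vs. an assumed memory bandwidth for a modest desktop CPU.
link_gbyte_s = 10e9 / 8 / 1e9      # 10 Gbit/s wire rate ~= 1.25 GB/s
copies_per_byte = 3                # assume NIC DMA + kernel/user copy + the app touching it once
mem_bandwidth_gbyte_s = 25         # assumed memory bandwidth

traffic = link_gbyte_s * copies_per_byte
print(f"~{traffic:.2f} GB/s of memory traffic at line rate, "
      f"about {traffic / mem_bandwidth_gbyte_s:.0%} of the assumed {mem_bandwidth_gbyte_s} GB/s")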
 
You don't need an enormous amount of CPU power to pump about 1 GByte/s around. A modern multi-core CPU can copy a dozen or two dozen GByte/s between IO and network; for 10gig Ethernet you use only a small fraction of that.
… unless you’re going to use USB3 10G adapters, as the OP mentioned.
 
Idk where you read that, I wrote about USB3-1G adapters...
I’m sorry, that was a typo, I meant USB3-1G, too. The whole USB stack with all of its protocol overhead isn’t really well suited for networking, not even just 1G.
 
Yes, I don't think I'll get full speed, only about 2/3 as far as I remember. I don't know about latency and packet loss; I need to test it. Maybe next weekend I'll have time to tinker with the server.
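For a first look at latency and loss, a rough sketch along these lines might be enough; the port, packet count and timeout are placeholder assumptions, and plain ping will of course give more trustworthy numbers:

# Quick-and-dirty UDP round-trip probe: run the reflector on one host,
# then run the probe against its address from the other host.

import socket
import sys
import time

PORT = 5202      # arbitrary test port (assumption)
COUNT = 1000     # number of probes to send
TIMEOUT = 0.5    # seconds; anything slower counts as lost

def reflector():
    # Echo every datagram straight back to the sender.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", PORT))
        while True:
            data, addr = sock.recvfrom(2048)
            sock.sendto(data, addr)

def probe(host):
    rtts, lost = [], 0
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(TIMEOUT)
        for seq in range(COUNT):
            payload = seq.to_bytes(8, "big")
            start = time.monotonic()
            sock.sendto(payload, (host, PORT))
            try:
                reply, _ = sock.recvfrom(2048)
            except socket.timeout:
                lost += 1
                continue
            if reply == payload:
                rtts.append(time.monotonic() - start)
            else:
                lost += 1
    if rtts:
        print(f"min/avg/max RTT: {min(rtts) * 1e3:.3f} / "
              f"{sum(rtts) / len(rtts) * 1e3:.3f} / {max(rtts) * 1e3:.3f} ms")
    print(f"lost or out-of-order: {lost}/{COUNT}")

if __name__ == "__main__":
    if len(sys.argv) == 2:
        probe(sys.argv[1])   # e.g. python3 rtt.py 10.0.0.2 (example address)
    else:
        reflector()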
 
… unless you’re going to use USB3 10G adapters, as the OP mentioned.
Conceded, using USB for high rate may not work. To get several dozen GByte/s, you need to be on PCIe, and you need to arrange your software stack to use techniques like zero-copy and RDMA.
 
Yes, I don't think I'll get full speed, only about 2/3 as far as I remember. I don't know about latency and packet loss; I need to test it. Maybe next weekend I'll have time to tinker with the server.
Bandwidth and latency are not the only issues. It will also cause a considerably higher CPU load, compared to PCIe NICs.
 
It seems clear, based on "cheap, small, needs to look nice, and overpowers the uplink," that it doesn't really matter what the issues are as long as it "works."
 