Solved: Can a FreeBSD bridge outperform off-the-shelf switches?

I know the question sounds weird and I can already hear you saying things along the lines of "Buy a proper switch!" But hear me out first.

I need to connect two 10Gb fibre networks and happen to have an old, forgotten PC lying around somewhere. Of course, a dual-port 10Gb fibre PCI-E card will cost much less than a new 10Gb backbone switch, and I know I can move packets between interfaces using the bridge interface.
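Something along these lines in /etc/rc.conf is what I had in mind; ix0 and ix1 are just placeholders for whatever the card's ports show up as:

# bridge the two 10Gb ports together (interface names are placeholders)
cloned_interfaces="bridge0"
ifconfig_ix0="up"
ifconfig_ix1="up"
ifconfig_bridge0="addm ix0 addm ix1 up"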

Question #1:
Does this make sense? To put it another way: is the bridge interface intended to be used in cases like this?

Question #2:
Would doing such a thing incur a performance penalty, or would the box outperform even the best backbone switches thanks to its faster CPU and larger memory?

Buy a proper switch!
Fine! I will! But I'd still like to know.
 
The Xeon E5-2650L does around ~24 Mpps according to this wiki.
What is interesting to me is that a lot of tuning guides suggest disabling hyperthreading. This one does not.
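For perspective on that ~24 Mpps figure, here is a rough back-of-the-envelope for what 10GbE wire rate demands in the worst case of 64-byte frames:

# 64-byte frame + 20 bytes preamble/IFG = 84 bytes = 672 bits on the wire
# 10 Gbit/s divided by 672 bits is ~14.88 Mpps per port, so ~29.8 Mpps for two ports at wire rate
echo "scale=2; 10 * 10^9 / (84 * 8) / 10^6" | bc    # prints 14.88

So at minimum frame size a ~24 Mpps box is already short of bridging two 10Gb ports at line rate, while at ordinary frame sizes (around 1500 bytes, roughly 0.8 Mpps per port) it has headroom to spare.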

I have phase one of my megarouter setup built and currently in use (X9SRL / E5-2650L v2).
Cable modem to megarouter.
It uses the pf firewall, with a Chelsio T540 for the internal network and all four ports LAGG'ed to a Cisco SG500X at the top of the rack.
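For reference, the LAGG part of that looks roughly like this in /etc/rc.conf. I am assuming the cxl0-cxl3 names the cxgbe driver gives T5-generation ports, and the address is just a placeholder; LACP also has to be configured on the SG500X side.

# all four T540 ports aggregated with LACP towards the SG500X
ifconfig_cxl0="up"
ifconfig_cxl1="up"
ifconfig_cxl2="up"
ifconfig_cxl3="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport cxl0 laggport cxl1 laggport cxl2 laggport cxl3 192.168.1.2/24"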

What I plan on doing next is adding three more T540s for a high-speed network among four servers.

I'm not sure yet how to segregate what goes over copper Ethernet and what goes out over fibre.
For starters, since NFS is addressed by IP, I was thinking I could put it on a separate network in a different range.
That would give me a dedicated high-speed NFS network for my three servers.
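Something like this is the idea; the interface name, addresses, and export path below are placeholders. The fibre port gets its own range, the NFS mounts point at that range, and the copper network never carries the storage traffic.

# on the file server, /etc/rc.conf: fibre port on its own storage-only range
ifconfig_cxl1="inet 10.10.10.1/24"
nfs_server_enable="YES"

# on each client, /etc/fstab: mount across the 10.10.10.0/24 network only
10.10.10.1:/tank/share   /mnt/share   nfs   rw,nfsv4   0   0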

There is no doubt a hardware ASIC is faster.
It is purpose-built silicon versus a general-purpose OS on commodity hardware.

That alone is not reason enough to rule out rolling your own, though. The sheer flexibility of the Chelsio hardware and the special software they offer is unparalleled. Just look at their iSCSI acceleration stuff.

I don't mean to be such a Chelsio fanboi, but their FreeBSD support is phenomenal. Look at the tools in the source tree:
/usr/src/sys/dev/cxgbe
 
Notice my post was directed at the SAN side of networking.

The core of my megaswitch project is an internet-facing gateway.
So I disabled hyperthreading, because it is publicly exposed.
So I am compromising performance right there. I have also enabled CPU microcode updates. More loss.

So building your own switch could mean anything. There are so many types of switches.
Being able to control it all is what I want. I like my Cisco 48-port gigabit switch; I have no problem with that.
And by using it, I can reboot the megarouter and my internal networks still function.
 
Question #1: Bridges are undesirable.
You don't want bridges except perhaps for transparent proxy filters (snort/squid).
You want a routed network. Use a packet forwarder (sketch below).
Bridges will reduce speed.
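A minimal sketch of what I mean, with interface names and addresses as placeholders: give each 10Gb segment its own subnet and let the box route between them.

# /etc/rc.conf: route between the two 10Gb networks instead of bridging them
gateway_enable="YES"                  # sets net.inet.ip.forwarding=1 at boot
ifconfig_ix0="inet 192.168.10.1/24"
ifconfig_ix1="inet 192.168.20.1/24"
# hosts on each side then use 192.168.10.1 or 192.168.20.1 as their default gateway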
FreeBSD has several speciality networking features worth investigating.
netmap(4) offers a high-speed virtual switch. Look at the examples in the source tree.
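The example tools ship with the source tree; pkt-gen in particular shows what one port and one core can actually push. ix0 below is a placeholder, and the tools need to be built with make in that directory first.

ls /usr/src/tools/tools/netmap/       # pkt-gen and the other netmap examples live here
pkt-gen -i ix0 -f tx                  # transmit test: packets per second the port can source
pkt-gen -i ix0 -f rx                  # receive test: packets per second it can sink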

The term “switch” is not strictly defined by any standard; the IEEE 802.1 bridging standards, for example, only speak of bridges. It is more of a commercial term than a technical one. That is why “switch” gets applied anywhere from layer 1 all the way to layer 4 of the OSI model, and why it often becomes interchangeable with “hub,” “repeater,” “bridge,” and “router.” It really comes down to which term a switch vendor thinks its customers or consumers value more: if “switch” sells better than “hub,” “repeater,” “bridge,” or “router,” the device will be called a switch.
 
Would doing such a thing incur a performance penalty, or would the box outperform even the best backbone switches thanks to its faster CPU and larger memory?
I don't see pushing the data in over PCIe, running it through a CPU, and then sending it back out over PCIe ever beating a dedicated switching fabric, not without maybe 10 or 20 years of progress.
 