Bad performance using bridges

Hi

Asking this as I'm thinking of using bridges (for use with bhyve).

The question has its roots in this paper: https://people.freebsd.org/~olivier...FreeBSD_for_routing_and_firewalling-Paper.pdf

And I refer to the performance comparison between bridged and non-bridged setups:
--snip--
The massive performance degradation (-63%) is a big
surprise: if_bridge code is using lot’s of non-optimised
locking mechanism. Its usage needs to be avoided.
--snip--

Question:
I wonder if anything has been done to optimize the if_bridge codebase between 11.1 and 12.x. (No, I have not gone through the source code and checked it myself.)


Thanks in advance
/Peo
 
Not that I know of. I am using routed networking with bhyve. I added two 4-port Intel LAN cards and hooked each one up to a switch.
Each NIC is passed through to bhyve.
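Roughly, the passthrough part looks like this (the PCI addresses below are just examples; find yours with pciconf -lv):

    # /boot/loader.conf -- load vmm and reserve the NIC functions for ppt
    # (bus/slot/function values are examples, not my actual hardware)
    vmm_load="YES"
    pptdevs="2/0/0 2/0/1 2/0/2 2/0/3"

    # later, hand one function to a guest on the bhyve command line:
    # bhyve ... -s 5,passthru,2/0/0 ... vmname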

The way packets co-mingle on bridges never thrilled me. Routed networks are superior.

Most supported 10G cards provide VFs for bhyve as well. See iovctl(8).
I am trying it with Intel X540 and Chelsio T420 cards.
They both provide VF interfaces when activated.
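A minimal iovctl config looks something like this (device name, VF count and MAC are placeholders; iovctl.conf(5) lists what your driver actually accepts):

    # /etc/iov/ix0.conf -- create 4 VFs on physical function ix0
    PF {
            device : "ix0";
            num_vfs : 4;
    }

    DEFAULT {
            passthrough : true;   # hand the VFs to guests, not the host stack
    }

    VF-0 {
            mac-addr : "02:00:00:00:00:01";
    }

Then create the VFs with iovctl -C -f /etc/iov/ix0.conf and check the result with pciconf -lv.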

That has been my approach. Keep the onboard NICs for hypervisor management and add NICs for clients.
To me a bridge is nothing more than a party line. Add a tap and I am sure you are losing speed.
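That is, the usual tap-on-a-bridge pattern (interface names are examples):

    # classic bhyve bridged networking -- every guest shares the bridge
    ifconfig tap0 create
    ifconfig bridge0 create
    ifconfig bridge0 addm igb0 addm tap0 up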
 
I have to read up on iovctl... I don't know anything about SR-IOV and what it offers. Thanks for the hint.

So you use VT-d and have one physical NIC for each VM? That approach will generate a lot of heat if you use many 10GBase-T ports in the server (SFP+ generates less heat). Also, you cannot run that many VMs with this approach, and it will be much more expensive. But on the other hand... you will not have any performance degradation due to bridge usage :)

I may go for your approach as a start, but would ideally prefer one NIC trunk with VLANs on it and to tie VMs to each VLAN, something like the sketch below. But that requires bridges. So... I would very much appreciate it if someone who has info about this bridge performance issue could comment.
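What I have in mind (interface and VLAN numbers are just examples):

    # one trunk NIC (ix0), one bridge per VLAN, one tap per VM
    ifconfig vlan10 create vlan 10 vlandev ix0
    ifconfig tap10 create
    ifconfig bridge10 create
    ifconfig bridge10 addm vlan10 addm tap10 up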
 
So you use VT-d and have one physical NIC for each VM?
Yes, right now that is my setup. Two 4-port i350 cards provide gigabit Ethernet to my VMs using ppt passthrough.
I am just experimenting with VF/SR-IOV for now. An IOV-capable NIC provides lots of interfaces;
128 on the Chelsio, if I remember correctly.
 
Phishfry

Have tested SR-IOV now. You can either set the VLAN on the VF in the host or configure the VLAN in the guest; both work. So in my test I have removed the bridges in favour of SR-IOV VFs and use VLAN-tagged VFs only (guest-side sketch below).
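On the guest side it is just an ordinary vlan(4) interface on top of the VF (this assumes an Intel VF showing up as ixv0; names and the address are examples):

    # inside the guest: tag VLAN 10 directly on the VF
    ifconfig vlan10 create vlan 10 vlandev ixv0
    ifconfig vlan10 inet 192.0.2.10/24 up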

Really good solution. Thanks for the idea!
 
Very useful information, Phishfry, thanks for the ideas above. I think I'm also going to test SR-IOV capable NICs; however, one question:

So, in short: the overall approach is to use an IOV-capable NIC that provides lots of virtual interfaces, assign one "virtual" interface to each VM, and create a virtual switch for all the VMs, right?
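(To check my understanding of the "assign each virtual interface" part: with vm-bhyve I would guess the VM config simply passes the VF through by its PCI address, something like this, where the address is hypothetical and would come from pciconf -lv on the host:)

    # in the vm-bhyve guest config: pass the VF's PCI function through
    passthru0="2/16/0"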

So when you said "I added two 4-port Intel LAN cards and hooked each one up to a switch", did you mean a virtual switch (created by vm switch create) or a physical one?

Thank you.
 