SR-IOV and configuring transparent VLANs

Hi

If you use a Linux host, you can use the NIC's capability for transparent VLANs (if its SR-IOV feature set has it). It means (if the host has a VLAN trunk) that you can set a VLAN tag on the VF that you then hand out to the guest. The guest then sees this as a non-VLANed interface. I have used this a lot with a Linux host running KVM.
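On the Linux side this is just a couple of iproute2 commands. As a rough sketch (the PF name enp65s0f0 and VF index 0 are only examples, adjust to your hardware):

Code:
# create two VFs on the physical function
echo 2 > /sys/class/net/enp65s0f0/device/sriov_numvfs
# tag VF 0 with VLAN 100; the guest that gets this VF sees an untagged interface
ip link set dev enp65s0f0 vf 0 vlan 100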

I would definitely not let the VM guest's root user set VLAN tags on the VF. Then the guest VM admin could place the VM on any VLAN in the trunk.
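To keep that control on the host side, you can pin the MAC and leave the VF untrusted, roughly like this (interface name and MAC address are again just placeholders):

Code:
# fix the MAC so the guest cannot change it
ip link set dev enp65s0f0 vf 0 mac 52:54:00:12:34:56
# drop frames with spoofed source MACs and keep the VF untrusted
ip link set dev enp65s0f0 vf 0 spoofchk on
ip link set dev enp65s0f0 vf 0 trust off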

How do I configure transparent VLANs on the SR-IOV VFs in FreeBSD?
 
I will probably revert to Linux/KVM on the hypervisor host, as FreeBSD has no way to solve my issues. This is too bad, because the actual VMs are *very* responsive and snappy in bhyve. Better than KVM. It is just the networking issues I mention… There seems to be no workaround at this time when using FreeBSD. Or is there?

I need one or the other of the following:
* A bridge that has good network performance
When attaching a vlan interface to the bridge and using it for the VMs, the network performance is not good. This is a known problem due to the locks in the bridge code (an example of the kind of setup I mean is sketched after this list).

or

* Transparent VLANs on the VFs when using SR-IOV
Setting a VLAN tag on the VF that is handed out to the VM (an SR-IOV feature). The tag is then stripped automatically, so the VM does not need any knowledge of VLANs. A little like an access port in a switch. This is something you want to do if there is a trunk interface to the hypervisor host. This support seems to be non-existent in the FreeBSD drivers for Chelsio or Intel, but it works in both the Chelsio and Intel drivers on Linux.
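For reference, the bridge setup I mean in the first point looks roughly like this on the FreeBSD host (ix0 as the trunk interface and tap0 as the VM interface are just examples):

Code:
# create a vlan interface on the trunk and bridge it with the VM's tap
ifconfig vlan100 create vlan 100 vlandev ix0
ifconfig bridge0 create
ifconfig bridge0 addm vlan100 addm tap0 up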


If anyone can come up with a workaround, or point me in a direction to solve this that I have not seen, I will be very glad.

/Peo
 
Did you figure out a way to do this? I'm looking for the exact same thing. Ever since BSD 10, network performance has gotten worse with each major release. At first I thought this was due to the netmap introduction, but today I tried a kernel without netmap support. At least on the host system.
 

Hi @Ofloo. I had some communication with the author of the existing SR-IOV additions to the FreeBSD driver. He is also the one who wrote the SR-IOV presentation for FreeBSD that you can find. SR-IOV works, but only the basic features. The problem is that I personally think transparent VLANs are a basic feature. And you need them to avoid using bridges. Otherwise you have to let VM owners set the VLAN tag. No, no, no...

I saw in the driver source code that there is embryonic code for this, but it seems to have stopped in early development. I also spoke (by email) with the driver maintainer at Intel. This ended up with me starting to code the addition myself, but due to lack of spare time I got to an alpha state and stopped there. The project did not seem too hard to land if the parameter "available spare time" had a better value :)

I use FreeBSD a lot (firewalls, public DNS servers, mail relays etc.). But on the main virtualization servers I backed off to using a Linux kernel. I still use a FreeBSD virtualization server for test purposes, though.

What the status on SR-IOV is today I do not know. As you can see, my original post is from May 2019, so hopefully someone has done something in this area over the last year.
 
I understand how you feel; I have been trying to do this ever since FreeBSD 10. I first wanted to try it in FreeBSD 11 with SR-IOV, but back then the interface didn't even work.

Isn't there an if_bridge patch we can test or something?
 
I suppose you might be able to work with SR-IOV and set up the VLANs on a managed switch using MAC-based VLANs, but I believe that still means the VMs are able to sniff/tcpdump traffic.
EDIT:
I did some digging and found this https://people.freebsd.org/~kp/pf/ and, more importantly, this https://people.freebsd.org/~kp/if_bridge/stable_12/

Result

Don't hold your breath, I would say. I've applied the patches to 12-STABLE on a 10-core Scalable processor with 128 GB of RAM, and performance is even worse.

FreeBSD RELENG12.1p5

Code:
# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  4] local 10.13.35.235 port 5001 connected with 10.13.35.236 port 10000
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.81 GBytes  1.55 Gbits/sec
[  5] local 10.13.35.235 port 5001 connected with 10.13.35.236 port 10001
[  5]  0.0-10.0 sec  1.80 GBytes  1.55 Gbits/sec
[  4] local 10.13.35.235 port 5001 connected with 10.13.35.236 port 27898
[  4]  0.0-60.0 sec  10.4 GBytes  1.49 Gbits/sec

FreeBSD 12-STABLE r361623M with improved if_bridge

Code:
# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  4] local 10.13.35.235 port 5001 connected with 10.13.35.236 port 21053
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.16 GBytes   998 Mbits/sec
[  5] local 10.13.35.235 port 5001 connected with 10.13.35.236 port 21086
[  5]  0.0-10.0 sec  1.36 GBytes  1.17 Gbits/sec
[  4] local 10.13.35.235 port 5001 connected with 10.13.35.236 port 30839
[  4]  0.0-10.2 sec  1.26 GBytes  1.06 Gbits/sec
[  5] local 10.13.35.235 port 5001 connected with 10.13.35.236 port 60715
[  5]  0.0-60.0 sec  7.40 GBytes  1.06 Gbits/sec

Both are on the same subnet on a bridge with a vlan. I'm not saying they didn't improve anything, just that it probably doesn't apply to our case. And no, there is no load on the system; only those 2 bhyve VMs run on it, to do some tests.
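For what it's worth, the client side was just plain iperf runs against the server, along these lines (durations matching the output above):

Code:
iperf -c 10.13.35.235 -t 10
iperf -c 10.13.35.235 -t 60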
 