bhyve network performance tap/bridge only

Dear @ll,

I'm wondering what the theoretical maximum bandwidth is for a tap/bridge network between a bhyve hypervisor and a single guest VM. There is no physical NIC attached.

bhyve host: FreeBSD 14.2-RELEASE
bhyve guest: Rocky Linux 8.10

Code:
 ------------
| bhyve host |
 ------------
      |
 ----------
|vm-switch | -> no physical NIC attached
 ----------
      |
 ----------
| Guest VM |
 ----------

Code:
# bhyve host
$ iperf3 -s
# bhyve guest
$ iperf3 -c hypervisor0
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  2.80 GBytes  2.40 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  2.79 GBytes  2.40 Gbits/sec                  receiver

iperf Done.

Questions:

A) Is this already a good result for tap/bridge configuration?

B) Does netgraph provide a better result?

Thank you for your help/advice/experience in advance.

Regards,

tanis
 
A) Is this already a good result for tap/bridge configuration?
I didn't manage to increase the bitrate, so for now I consider this my best result.

Just for the record, the configuration was by the book; nothing special like an MTU of 9000 or any sysctl tuning to boost performance.
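
For anyone who wants to reproduce the baseline: a plain host-only tap/bridge setup with vm-bhyve looks roughly like the sketch below (switch name, VM name and address are just examples, not necessarily my exact configuration):

Code:
# create a standard (tap/bridge) virtual switch; vm-bhyve backs it with an
# if_bridge interface, typically named vm-public for a switch called "public"
$ vm switch create public

# host-only setup: no physical NIC is added as uplink, the host's address
# goes directly onto the bridge interface
$ ifconfig vm-public inet 172.16.254.1/24

# point the guest's first NIC at that switch (virtio-net)
$ vm configure vmTest
network0_type="virtio-net"
network0_switch="public"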

B) Does netgraph provide a better result?

Indeed it does; see below:

bhyve vm guest: Rocky Linux 8.10 / bhyve hypervisor: FreeBSD 14.2-RELEASE

Guest as iperf3 client:
Code:
$ iperf3 -c 172.16.254.1
Connecting to host 172.16.254.1, port 5201
[  5] local 172.16.254.151 port 38968 connected to 172.16.254.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.42 GBytes  12.2 Gbits/sec    7   1.38 MBytes      
[  5]   1.00-2.00   sec  1.44 GBytes  12.4 Gbits/sec    0   1.53 MBytes      
[  5]   2.00-3.00   sec   938 MBytes  7.87 Gbits/sec    0   1.53 MBytes      
[  5]   3.00-4.00   sec   782 MBytes  6.56 Gbits/sec    0   1.53 MBytes      
[  5]   4.00-5.00   sec  1.30 GBytes  11.2 Gbits/sec    3   1.53 MBytes      
[  5]   5.00-6.00   sec  1.63 GBytes  14.0 Gbits/sec    0   2.09 MBytes      
[  5]   6.00-7.00   sec  1.63 GBytes  14.0 Gbits/sec    0   2.12 MBytes      
[  5]   7.00-8.00   sec  1.63 GBytes  14.0 Gbits/sec    0   2.19 MBytes      
[  5]   8.00-9.00   sec  1.48 GBytes  12.7 Gbits/sec    0   2.96 MBytes      
[  5]   9.00-10.00  sec   949 MBytes  7.96 Gbits/sec    0   2.96 MBytes      
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  13.1 GBytes  11.3 Gbits/sec   10             sender
[  5]   0.00-10.00  sec  13.1 GBytes  11.3 Gbits/sec                  receiver

iperf Done.
$
Hypervisor as iperf3 server:
Code:
$ iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 172.16.254.151, port 38954
[  5] local 172.16.254.1 port 5201 connected to 172.16.254.151 port 38968
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.03   sec  1.47 GBytes  12.2 Gbits/sec                
[  5]   1.03-2.00   sec  1.39 GBytes  12.3 Gbits/sec                
[  5]   2.00-3.00   sec   941 MBytes  7.87 Gbits/sec                
[  5]   3.00-4.00   sec   780 MBytes  6.56 Gbits/sec                
[  5]   4.00-5.01   sec  1.32 GBytes  11.2 Gbits/sec                
[  5]   5.01-6.06   sec  1.72 GBytes  14.1 Gbits/sec                
[  5]   6.06-7.00   sec  1.53 GBytes  14.0 Gbits/sec                
[  5]   7.00-8.00   sec  1.63 GBytes  14.0 Gbits/sec                
[  5]   8.00-9.01   sec  1.48 GBytes  12.7 Gbits/sec                
[  5]   9.01-10.00  sec   945 MBytes  7.96 Gbits/sec                
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  13.1 GBytes  11.3 Gbits/sec                  receiver
Guest as iperf3 server (reverse direction):
Code:
$ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 172.16.254.1, port 30789
[  5] local 172.16.254.151 port 5201 connected to 172.16.254.1 port 22941
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   714 MBytes  5.98 Gbits/sec                
[  5]   1.00-2.00   sec   875 MBytes  7.34 Gbits/sec                
[  5]   2.00-3.00   sec   975 MBytes  8.18 Gbits/sec                
[  5]   3.00-4.00   sec  1011 MBytes  8.47 Gbits/sec                
[  5]   4.00-5.00   sec  1.02 GBytes  8.74 Gbits/sec                
[  5]   5.00-6.00   sec   860 MBytes  7.21 Gbits/sec                
[  5]   6.00-7.00   sec   859 MBytes  7.21 Gbits/sec                
[  5]   7.00-8.00   sec   899 MBytes  7.54 Gbits/sec                
[  5]   8.00-9.00   sec   862 MBytes  7.23 Gbits/sec                
[  5]   9.00-10.00  sec   833 MBytes  6.98 Gbits/sec                
[  5]  10.00-10.16  sec   131 MBytes  6.89 Gbits/sec                
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.16  sec  8.85 GBytes  7.48 Gbits/sec                  receiver
Hypervisor as iperf3 client:
Code:
$ iperf3 -c 172.16.254.151
Connecting to host 172.16.254.151, port 5201
[  5] local 172.16.254.1 port 22941 connected to 172.16.254.151 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   859 MBytes  7.19 Gbits/sec  194    761 KBytes      
[  5]   1.00-2.00   sec   879 MBytes  7.37 Gbits/sec  219   70.0 KBytes      
[  5]   2.00-3.01   sec   997 MBytes  8.29 Gbits/sec  274   43.8 KBytes      
[  5]   3.01-4.00   sec  1008 MBytes  8.55 Gbits/sec  232    306 KBytes      
[  5]   4.00-5.00   sec  1.02 GBytes  8.71 Gbits/sec  223    332 KBytes      
[  5]   5.00-6.00   sec   832 MBytes  6.97 Gbits/sec  185    122 KBytes      
[  5]   6.00-7.06   sec   919 MBytes  7.27 Gbits/sec  261    262 KBytes      
[  5]   7.06-8.04   sec   881 MBytes  7.53 Gbits/sec  250   35.0 KBytes      
[  5]   8.04-9.01   sec   833 MBytes  7.24 Gbits/sec  244    114 KBytes      
[  5]   9.01-10.00  sec   819 MBytes  6.91 Gbits/sec  177   87.5 KBytes      
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  8.85 GBytes  7.60 Gbits/sec  2259            sender
[  5]   0.00-10.16  sec  8.85 GBytes  7.48 Gbits/sec                  receiver

iperf Done.

That's quite impressive compared to the tap/bridge approach. Unfortunately, further testing revealed the following issue:

bhyve vm guest: Rocky Linux 8.10 / bhyve hypervisor: FreeBSD 14.2-RELEASE

Guest as iperf3 client (5 parallel streams):
Code:
$ iperf3 -c 172.16.254.1 -P 5
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  3.33 GBytes  2.86 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  3.32 GBytes  2.86 Gbits/sec                  receiver
[  7]   0.00-10.00  sec  3.34 GBytes  2.87 Gbits/sec    0             sender
[  7]   0.00-10.00  sec  3.33 GBytes  2.86 Gbits/sec                  receiver
[  9]   0.00-10.00  sec  3.32 GBytes  2.85 Gbits/sec    0             sender
[  9]   0.00-10.00  sec  3.32 GBytes  2.85 Gbits/sec                  receiver
[ 11]   0.00-10.00  sec  3.32 GBytes  2.85 Gbits/sec    0             sender
[ 11]   0.00-10.00  sec  3.32 GBytes  2.85 Gbits/sec                  receiver
[ 13]   0.00-10.00  sec  3.34 GBytes  2.87 Gbits/sec    0             sender
[ 13]   0.00-10.00  sec  3.33 GBytes  2.86 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec  16.6 GBytes  14.3 Gbits/sec    0             sender
[SUM]   0.00-10.00  sec  16.6 GBytes  14.3 Gbits/sec                  receiver

iperf Done.
$
Hypervisor as iperf3 server:
Code:
$ iperf3 -s
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  3.32 GBytes  2.86 Gbits/sec                  receiver
[  8]   0.00-10.00  sec  3.33 GBytes  2.86 Gbits/sec                  receiver
[ 10]   0.00-10.00  sec  3.32 GBytes  2.85 Gbits/sec                  receiver
[ 12]   0.00-10.00  sec  3.32 GBytes  2.85 Gbits/sec                  receiver
[ 14]   0.00-10.00  sec  3.33 GBytes  2.86 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec  16.6 GBytes  14.3 Gbits/sec                  receiver
Guest as iperf3 server (reverse direction):
Code:
$ iperf3 -s
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.11  sec   210 MBytes   174 Mbits/sec                  receiver
[  8]   0.00-10.11  sec   210 MBytes   174 Mbits/sec                  receiver
[ 10]   0.00-10.11  sec   210 MBytes   174 Mbits/sec                  receiver
[ 12]   0.00-10.11  sec   210 MBytes   174 Mbits/sec                  receiver
[ 14]   0.00-10.11  sec   210 MBytes   174 Mbits/sec                  receiver
[SUM]   0.00-10.11  sec  1.02 GBytes   869 Mbits/sec                  receiver
Hypervisor as iperf3 client (5 parallel streams):
Code:
$ iperf3 -c 172.16.254.151 -P 5
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.02  sec   220 MBytes   185 Mbits/sec  297            sender
[  5]   0.00-10.11  sec   210 MBytes   174 Mbits/sec                  receiver
[  7]   0.00-10.02  sec   217 MBytes   182 Mbits/sec  243            sender
[  7]   0.00-10.11  sec   210 MBytes   174 Mbits/sec                  receiver
[  9]   0.00-10.02  sec   220 MBytes   184 Mbits/sec  195            sender
[  9]   0.00-10.11  sec   210 MBytes   174 Mbits/sec                  receiver
[ 11]   0.00-10.02  sec   222 MBytes   185 Mbits/sec  294            sender
[ 11]   0.00-10.11  sec   210 MBytes   174 Mbits/sec                  receiver
[ 13]   0.00-10.02  sec   220 MBytes   184 Mbits/sec  288            sender
[ 13]   0.00-10.11  sec   210 MBytes   174 Mbits/sec                  receiver
[SUM]   0.00-10.02  sec  1.07 GBytes   920 Mbits/sec  1317             sender
[SUM]   0.00-10.11  sec  1.02 GBytes   869 Mbits/sec                  receiver

iperf Done.

So far I haven't been able to figure out why the performance drops when the hypervisor acts as the client. Perhaps someone here can point me in the right direction? 😅

Edit: Perhaps this belongs more in Networking than here!? :-/
 
Thanks for the link. It seems the low performance is caused by the overhead of emulating the actual hardware.

Netgraph seems to be the solution if you require services provided by the hypervisor. If the hypervisor has no business with the hosted VMs, then vale seems to be a viable alternative. I'm currently testing SR-IOV, which seems to deliver the best performance out of the box, so I guess I will settle on that; however, I'm still wondering if I missed something.
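
For anyone curious, the host side of an SR-IOV setup is roughly the following (a sketch only; the NIC name ixl0, the VF count and the PCI selector are placeholders, see iovctl(8) and iovctl.conf(5) for the details):

Code:
# /etc/iovctl.conf - create virtual functions on an SR-IOV capable NIC
PF {
        device : "ixl0";      # the physical function (your NIC driver/unit)
        num_vfs : 4;
}

DEFAULT {
        passthrough : true;   # reserve the VFs for PCI passthrough to bhyve
}

# create the VFs now (or set iovctl_files="/etc/iovctl.conf" in rc.conf)
$ iovctl -C -f /etc/iovctl.conf

# find the VF's bus/slot/function with pciconf -lv, then hand it to the guest
$ vm configure vmTest
passthru0="3/0/129"      # <--- placeholder selector, use your VF's numbers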

FreeBSD is quite the workhorse, the documentation is extraordinary, and the implemented features are very impressive. It's just sad that there seems to be no wider acceptance, and therefore a lack of development and real-world experience, judging by the response from the community. On the other hand, chances are high I'm just asking dumb questions here. 😅 😆
 
So far I haven't been able to figure out why the performance drops when the hypervisor acts as the client. Perhaps someone here can point me in the right direction? 😅
iperf3 is heavily CPU-bound and therefore not fit for testing 'higher-bandwidth' links. You will only measure how fast the host or VM can generate packets, not how many packets the (virtualized) interface can actually push.
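
An easy way to see this is iperf3's verbose mode, which prints a CPU utilization summary at the end of the run, combined with watching the host's threads (a quick sketch, assuming a reasonably recent iperf3):

Code:
# verbose mode adds a CPU utilization summary (sender/receiver) to the report;
# if one side sits near 100%, you are benchmarking the CPU, not the link
$ iperf3 -c 172.16.254.1 -V

# meanwhile, on the FreeBSD host, watch the iperf3 and bhyve vCPU threads
$ top -aSH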

If you must use iperf3, don't run it on any of the hosts/VMs you want to test for throughput; instead, route the traffic generated by iperf *through* the host/VM you want to test - i.e. the server and client need to be on one or two hosts distinct from the one you want to test (see the sketch below).
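
For example, with two guests on separate subnets and the hypervisor routing between them, something like this measures the host's forwarding path rather than its ability to source or sink the traffic (names and addresses are purely illustrative):

Code:
# on the FreeBSD hypervisor: enable IP forwarding between the two VM subnets
$ sysctl net.inet.ip.forwarding=1

# guest A (e.g. 172.16.254.151, default route via the hypervisor)
$ iperf3 -s

# guest B (e.g. 172.16.253.151, default route via the hypervisor)
# traffic to guest A is now routed *through* the hypervisor
$ iperf3 -c 172.16.254.151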

But as you already discovered: SR-IOV is the way to go for 10G+ links, as this will provide the best (i.e. near bare-metal) performance.
However, I still stick to the default tap/bridge setup for pretty much all my jails for the sake of simplicity and only deviate from that for jails/VMs that actually "need" every last bit of available bandwidth...
 
To me, netgraph felt a bit tricky the first time I tried it:

Code:
# create ngeth0 interface
$ ngctl mkpeer . eiface hook ether

# list available netgraph interfaces
$ ngctl list
There are 2 total nodes:
  Name: ngeth0          Type: eiface          ID: 00000003   Num hooks: 0 # <--- ethernet interface we just created
  Name: ngctl235        Type: socket          ID: 00000004   Num hooks: 0

# create unnamed bridge interface and attach ngeth0 interface to it
$ ngctl mkpeer ngeth0: bridge ether link0

# list available netgraph interfaces
$ ngctl list
There are 3 total nodes:
  Name: ngeth0          Type: eiface          ID: 00000003   Num hooks: 1
  Name: <unnamed>       Type: bridge          ID: 00000007   Num hooks: 1  # <--- unnamed bridge interface we just created
  Name: ngctl5710       Type: socket          ID: 00000008   Num hooks: 0

# rename unnamed bridge interface to ngbr0
$ ngctl name ngeth0:ether ngbr0

# list available netgraph interfaces
$ ngctl list
There are 3 total nodes:
  Name: ngeth0          Type: eiface          ID: 00000003   Num hooks: 1
  Name: ngbr0           Type: bridge          ID: 00000007   Num hooks: 1  # <--- renamed bridge interface ngbr0, will only be visible by ngctl, not by using ifconfig
  Name: ngctl8395       Type: socket          ID: 0000000a   Num hooks: 0

# assign IP addr to ngeth0 interface
$ ifconfig ngeth0 inet 172.16.254.1/24

Now we have to configure bhyve to use ngbr0 as a switch; I prefer the vm-bhyve framework.

Code:
# create netgraph switch using vm-bhyve
$ vm switch create -t netgraph ngbr0

# display switch configuration
$ vm switch list
NAME    TYPE      IFACE                                ADDRESS  PRIVATE  MTU  VLAN  PORTS
ngbr0   netgraph  netgraph,path=ngbr0:,peerhook=link2  n/a      n/a      n/a  n/a   n/a    # <--- our 1st vm will be attached at ngbr0:link2, the 2nd vm at ngbr0:link3, ...

# configure ngbr0 as switch for our vm named vmTest
$ vm configure vmTest
[...]
# default configuration
network0_type="virtio-net"
# configure ngbr0 to be used as switch
network0_switch="ngbr0"
[...]

# launch vmTest
$ vm start vmTest

Back to netgraph: outside the VM, the vmTest interface will appear as a netgraph socket, but inside the VM it will appear as an ordinary Ethernet interface.

Code:
# list available netgraph interfaces
$ ngctl list
There are 5 total nodes:
  Name: <unnamed>       Type: socket          ID: 00000011   Num hooks: 1  # <--- our 1st vm vmTest netgraph socket
  Name: ngeth0          Type: eiface          ID: 00000002   Num hooks: 1
  Name: ngbr0           Type: bridge          ID: 00000004   Num hooks: 3
  Name: ngctl91481      Type: socket          ID: 0000001b   Num hooks: 0

# list interfaces attached to ngbr0 interface
$ ngctl show ngbr0:
  Name: ngbr0           Type: bridge          ID: 00000004   Num hooks: 2
  Local hook      Peer name       Peer type    Peer ID         Peer hook 
  ----------      ---------       ---------    -------         --------- 
  link2           <unnamed>       socket       00000011        vmlink    # <--- here we can see our vm is attached via a socket to our ngbr0 bridge
  link0           ngeth0          eiface       00000002        ether     

# rename unnamed vmTest socket
$ ngctl name ngbr0:link2 vmTest

# list available netgraph interfaces
$ ngctl list
There are 5 total nodes:
  Name: vmTest          Type: socket          ID: 00000011   Num hooks: 1  # <--- our 1st vm vmTest netgraph socket
  Name: ngeth0          Type: eiface          ID: 00000002   Num hooks: 1
  Name: ngbr0           Type: bridge          ID: 00000004   Num hooks: 3
  Name: ngctl91481      Type: socket          ID: 0000001b   Num hooks: 0

# list interfaces attached to ngbr0 interface
$ ngctl show ngbr0:
  Name: ngbr0           Type: bridge          ID: 00000004   Num hooks: 2
  Local hook      Peer name       Peer type    Peer ID         Peer hook 
  ----------      ---------       ---------    -------         --------- 
  link2           vmTest          socket       00000011        vmlink    # <--- here we can see our vm is attached via a socket to our ngbr0 bridge
  link0           ngeth0          eiface       00000002        ether

All that's left is to assign an IP address inside the VM vmTest, e.g. 172.16.254.100/24, and you are good to go.
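
Inside the Rocky Linux guest that can be as simple as the following (the interface name depends on the guest; with virtio-net it typically shows up as something like eth0 or enp0s5):

Code:
# inside the Rocky Linux guest - interface name is an example
$ sudo ip addr add 172.16.254.100/24 dev eth0
$ sudo ip link set eth0 up

# optional: default route via the hypervisor's ngeth0 address
$ sudo ip route add default via 172.16.254.1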

Links which helped me to figure this all out:
https://people.freebsd.org/~julian/netgraph.html
man netgraph
man vm
Using Netgraph for FreeBSD's Bhyve Networking - Jun 15, 2022

If you are interested in vale as well, please see FreeBSD Forums: Bhyve HyperVisor Vale Networking Interface No Carrier.

I can recommend FreeBSD SR-IOV as well, which is what I'm using right now.

Have fun! :D

PS: I'm doing this all, using FreeBSD 14.2-RELEASE.
 
Dear tanis,
thanks for your help, Jedi of FreeBSD. I'm a new guy and am studying it now. Thanks. :)
 