Best network layout in Bhyve?

(New to Bhyve)
OK, what's the best way to set up 10 Gbps VM networks?

I have tried a few of them now but am a little confused.

I tried the "old"(?) style and got more or less 10 Gbps. (Something like this; I don't remember exactly right now):
/etc/rc.conf on the host:
Code:
cloned_interfaces="bridge0 tap0"
ifconfig_bridge0="addm ix0 addm tap0 up"

vm-file.conf
Code:
bridge=bridge0

tapdev=$(ifconfig tap create)
ifconfig $bridge addm $tapdev
…
ifconfig $bridge up
ifconfig $tapdev up
…
-s 10,virtio-net,$tapdev

I got 9+ Gbps here. But that gets complicated with 15 VLANs and many VMs.
I also had low disk speed, so I reinstalled the host and use vm-bhyve today with:
/etc/rc.conf
Code:
ifconfig_ix0="up mtu 9000"
ifconfig_vlan20="inet 10.20.1.200 netmask 255.255.255.0 vlan 20 vlandev ix0 mtu 9000"
…
14 more of them, as I have 15 VLANs

I "installed" the networks in Bhyve with (again, 14 more of them, one for every network):

vm switch create -t standard -i vlan20 -m 9000 -p net20


And it looks like this:

NAME   TYPE      IFACE     ADDRESS  PRIVATE  MTU   VLAN  PORTS
net10  standard  vm-net10  -        no       -     -     bge0
net20  standard  vm-net20  -        no       9000  -     vlan20
net30  standard  vm-net30  -        no       9000  -     vlan30
net40  standard  vm-net40  -        no       9000  -     vlan40
And so on…

net10 (bge0) is a gigabit port for mgmt.
I don't have any address on them (-a address/prefix-len) and I don't have any VLAN on them, as I handle that in /etc/rc.conf on the host (above).

In vm-file.conf
Code:
network0_type="virtio-net"
network0_switch="net20"

In /etc/rc.conf on the vm:
Code:
ifconfig_vtnet0="inet 10.20.1.50 netmask 255.255.255.0 mtu 9000"
defaultrouter="10.20.1.5"

All routing between the VLANs etc. is done by two FreeBSD firewalls in HA. The network itself works; my ESXi hosts run at full network speed.


So... the problem with this last style is that I only get around 4-5 Gbits/sec in iperf3 network speed on the VMs in Bhyve (going to another VLAN on another host). The Bhyve host itself delivers full bandwidth and maxes out the 10 Gbps line, so that's not the problem. Also, I got 9+ Gbps on the VMs with the "old" Bhyve install before.
The VMs (today) are using jumbo frames, as the traffic goes above 3.4 Gbps.

I'm thinking of just having ifconfig_ix0="up mtu 9000" in the host's /etc/rc.conf and then using vm switch to handle the VLANs etc. Will test this later today.
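That is, roughly this on the host (just a sketch of the plan, not tested yet):
Code:
# /etc/rc.conf on the host, only the trunk up with jumbo frames:
ifconfig_ix0="up mtu 9000"

# then one switch per VLAN, letting vm-bhyve do the tagging:
vm switch create -n 20 -m 9000 -i ix0 -p net20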


My question after all this: which is the best network layout in 2022 for Bhyve hosts?
 
Try to use vale instead of bridge(4)+tap(4). Or netgraph. I'm not sure if vm-bhyve supports these backends, but they show the best results (14.6 Gbits/sec between two FreeBSD 13.0 guests via vale/netmap + mtu 9000, on the same host):

On the other hand, it is quite possible that this problem is specific to vlan.

[Attached: perf.png (iperf3 results)]
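For reference, a guest can be attached to a vale switch straight on the bhyve command line; a rough sketch (backend names per bhyve(8)/netmap, everything else illustrative, and I don't think vm-bhyve will generate this for you):
Code:
# two guests attached to the same software switch "vale0"; the part after ":" is just a port name
bhyve -c 2 -m 2G -H -A \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  -s 0,hostbridge -s 31,lpc -l com1,stdio \
  -s 4,virtio-blk,/vm/guest1/disk0.img \
  -s 10,virtio-net,vale0:guest1 \
  guest1
# second guest: same, but -s 10,virtio-net,vale0:guest2
As far as I know the vale0 switch itself is created on demand by netmap when the first port attaches.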
 
I "installed" the networks in Bhyve with (again, 14 more of them, one for every network):
vm switch create -t standard -i vlan20 -m 9000 -p net20
You should use vm switch create -n 20 -m 9000 -i ix0 -p net20

Only have two VLANs in use at the moment:
Code:
root@hosaka:~ # vm switch list
NAME     TYPE      IFACE       ADDRESS  PRIVATE  MTU   VLAN  PORTS
servers  standard  vm-servers  -        no       9000  11    lagg0
public   standard  vm-public   -        no       9000  10    lagg0
This will automatically create the lagg0.10 and lagg0.11 interfaces with the correct VLANs. And the correct bridges:
Code:
root@hosaka:~ # ifconfig lagg0.11
lagg0.11: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
        description: vm-vlan-servers-lagg0.11
        options=4000000<NOMAP>
        ether 00:25:90:f1:58:39
        groups: vlan vm-vlan viid-8bf4d@
        vlan: 11 vlanproto: 802.1q vlanpcp: 0 parent interface: lagg0
        media: Ethernet autoselect
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
root@hosaka:~ # ifconfig vm-servers
vm-servers: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
        ether 0a:86:42:72:2e:e6
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: tap11 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 22 priority 128 path cost 2000000
        member: tap10 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 21 priority 128 path cost 2000000
        member: tap9 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 20 priority 128 path cost 2000000
        member: tap4 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 15 priority 128 path cost 2000000
        member: tap3 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 14 priority 128 path cost 2000000
        member: tap2 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 13 priority 128 path cost 2000000
        member: tap1 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 12 priority 128 path cost 2000000
        member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 11 priority 128 path cost 2000000
        member: lagg0.11 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 8 priority 128 path cost 2000000
        groups: bridge vm-switch viid-d5539@
        nd6 options=9<PERFORMNUD,IFDISABLED>

The only configuration I have in rc.conf is for the lagg0 interface itself:
Code:
cloned_interfaces="lagg0"
ifconfig_igb1="up mtu 9000 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso"
ifconfig_igb2="up mtu 9000 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso"
ifconfig_lagg0="laggproto lacp laggport igb1 laggport igb2"
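Scaling that out to the 15 VLANs mentioned earlier is easy to script; a rough sketch, mirroring the vm switch create form above (adjust the VLAN IDs and the uplink interface to your setup):
Code:
#!/bin/sh
# one vm-bhyve switch per VLAN, tagged on the uplink, jumbo frames
for id in 10 20 30 40 50 60 70 80 90 100 110 120 130 140 150; do
        vm switch create -n ${id} -m 9000 -i ix0 -p net${id}
done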
 
Try to use vale instead of bridge(4)+tap(4).
I will look into vale. I did a quick search on it, and there is some info on using it with Bhyve. I don't use bridge+tap in the new setup.
On the other hand, it is quite possible that this problem is specific to vlan.
It could be somewhere in the vm switch or something like that. The host itself has full bandwidth.


You should use vm switch create -n 20 -m 9000 -i ix0 -p net20
If I do this, how do my VLANs get into the switch?
Do I run vm switch vlan net20 20 afterwards to set the VLAN on the vm switch?

Then I'll remove everything in /etc/rc.conf and just do a simple ifconfig_ix0="up mtu 9000 …", and I will do a lagg instead, as you have it.

This will automatically create ..
I didn't get that part, but I assume it happens after vm switch vlan net20 20.


I will grab another server and install Bhyve on it so I can do a lot of tests. This one is a learning bhyve server.
 
If I do this, how do my vlans get in to the switch?
The bridge is linked to the vlan(4) interface. That adds the VLAN headers when it exits the 'physical' interface.

<bridge> -> <vlan int> -> <interface>

So on the bridge you have untagged traffic, on the ix0 interface you get tagged traffic.
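Roughly, the manual equivalent of what vm-bhyve builds for one VLAN would be (a sketch with illustrative names, using ix0 as the uplink):
Code:
ifconfig vlan20 create vlan 20 vlandev ix0 mtu 9000 up   # tags/untags on the way in/out of ix0
ifconfig bridge20 create mtu 9000 addm vlan20 up         # untagged side; the VMs sit here
ifconfig tap20 create mtu 9000 up                        # handed to the guest as its backend
ifconfig bridge20 addm tap20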

On my system I have a vm-server bridge that's connected to various VMs. Those VMs have untagged traffic. The vm-server bridge is connected to the lagg0.11 vlan(4) interface (this adds the tags). The lagg0 interface (LACP; igb1 and igb2) is trunked on my switch.

Code:
# show config interface Trk1

Startup configuration: 38

interface Trk1
   tagged vlan 10-11
   spanning-tree priority 4
   exit
(Switch is a HP Procurve; HP 2530-48G)
 
OK... time for an update! Beware! Long one... ;)

I made a lot of tests... and the problem is inside the Bhyve network setup (that I do)... but where??


Short version:
I have two Bhyve test hosts (and 9 ESXi hosts; I use two of them in this test) which get full network speed (9+ Gbps) to FreeBSD and Ubuntu test VMs running iperf3 on the two ESXi hosts. I only use one SFP+ connection in the test to keep it as simple as possible.

But the problem is with the Bhyve VMs. All the Bhyve VMs run at less than 6 Gbps (same iperf3 test) with three different ifconfig setups in /etc/rc.conf and two different vm switch setups. See the configurations below.

There is something wrong with what I do in the network configuration on the Bhyve hosts, as the hosts themselves have full bandwidth.

My installation process of the host is below.



Long version:
I installed a new test Bhyve server (Bhyve2), as I don't want to mess around with the first one.

All Bhyve hosts have full bandwidth, near 10 Gbps, running iperf3 -c SERV_IP against a VM running iperf3 -s on another VLAN on an ESXi host. I get the same results against a FreeBSD 13p5 server (not Bhyve). I only use one SFP+ in the test right now. No lagg, no nothing. As simple as possible to get this done.

I get different results here.
If I configure IPs and VLANs in /etc/rc.conf as mentioned above (also below), I get between 4-5 Gbps in both directions from every VM on the host to a test iperf3 VM on an ESXi.

But if I just set ifconfig_ix0="up mtu 9000" in /etc/rc.conf on the Bhyve host and use vm switch create -n 20 -m 9000 -i ix0 -p net20, I get less than 4 Gbps in one direction and only 0.5 Gbps in the other direction to another Bhyve VM.


Here are some results. In this test I have two ESXi test hosts and two Bhyve test hosts on the same network, using two test VLANs. All the hosts and VMs are on the same network. All networks and VLANs are handled by two FreeBSD 13p5 HA firewalls.

There are more tests and more results... but... yeah...

Esxi1 host@vlan1 <-> Esxi2 host@vlan2 = 9+ Gbps
Esxi1 host@vlan1 <-> Esxi2 host@vlan2 = 9+ Gbps
Bhyve1 host@vlan1 <-> Esxi1 and 2 FBSD-VM@vlan2 = 9+ Gbps
Bhyve2 host@vlan1 <-> Esxi1 and 2 FBSD-VM@vlan2 = 9+ Gbps
(All good here, full bandwidth, so the network and the routing firewalls are working)


Esxi1 FBSD-vm@vlan1 <-> Esxi2 FBSD-vm@vlan2 = 9+ Gbps
Esxi1 FBSD-vm@vlan1 <-> Esxi2 FBSD-vm@vlan2 = 9+ Gbps


Bhyve1 FBSD-vm@vlan1 -> Esxi1 and 2 FBSD-vm@vlan2 = ~5 Gbps
Bhyve1 FBSD-vm@vlan1 <- Esxi1 and 2 FBSD-vm@vlan2 = ~4.8 Gbps
(ifconfig = IPs and VLANs as mentioned above)


Bhyve2 FBSD-vm@vlan1 -> Esxi1 and 2 FBSD-vm@vlan2 = ~3.7 Gbps (unstable, 1-6 Gbps)
Bhyve2 FBSD-vm@vlan1 <- Esxi1 and 2 FBSD-vm@vlan2 = ~4 Gbps
(ifconfig = IPs and VLANs as mentioned above)


Bhyve2 FBSD-vm@vlan1 -> Esxi1 and 2 FBSD-vm@vlan2 = ~1.8 Gbps
Bhyve2 FBSD-vm@vlan1 <- Esxi1 and 2 FBSD-vm@vlan2 = ~3.3 Gbps
(ifconfig = just "up" as mentioned above)


Bhyve2 FBSD-vm@vlan2 -> Bhyve1 FBSD-vm@vlan2 = ~1.4 Gbps
Bhyve2 FBSD-vm@vlan2 <- Bhyve1 FBSD-vm@vlan2 = ~1.4 Gbps
(Bhyve1 ifconfig = IPs and VLANs as mentioned above)
(Bhyve2 ifconfig = just "up" as mentioned above)


Bhyve1 FBSD-vm@vlan1 <-> Bhyve1 FBSD-vm@vlan1 = ~4.8 Gbps
(Bhyve1 ifconfig = IPs and VLANs as mentioned above)
(Bhyve2 ifconfig = just "up" as mentioned above)


Bhyve2 FBSD-vm@vlan1 <-> Bhyve2 FBSD-vm@vlan1 = not working - no connection
(ifconfig = just "up" as mentioned above)


Bhyve2 host@vlan2 -> Bhyve1 host@vlan2 = ~1.4 Gbps
Bhyve2 host@vlan2 <- Bhyve1 host@vlan2 = ~3 Gbps
(Bhyve1 ifconfig = IPs and VLANs as mentioned above)
(Bhyve2 ifconfig = IPs and VLANs as mentioned above)


All FreeBSD VMs are installed the same way and they use the same internal DNS and NTP servers. So everything is more or less the same on every host.

All hosts (ESXi and Bhyve) have full network throughput in every direction, except Bhyve1 <-> Bhyve2. All VMs on ESXi have full throughput. The Bhyve VMs' throughput depends on the ifconfig configuration in /etc/rc.conf and never exceeds ~6-7 Gbps.


On Bhyve1, the same VM running iperf3 against another FreeBSD VM on the same host works. This host uses my old ifconfig with IPs and VLANs in /etc/rc.conf, so the FW routes all the traffic.

On Bhyve2, a FreeBSD VM cannot make any connection to another VM on the same host. Bhyve2 uses ifconfig_ix0="up mtu 9000" in /etc/rc.conf and vm switch create -n 20 -m 9000 -i ix0 -p net20. There is no route at all. Maybe an rdr in PF?

I have tried enabling and disabling PF on both the hosts and the VMs; no difference in the speed tests.


Hardware
All ESXi hosts (9 in total) are HPE G9; some have an HPE 2-port 10 Gbps SFP+ 530FLR-SFP, others an HPE 2-port 10 Gbps SFP+ 560FLR, and all servers also have an HPE NC523SFP NIC, all working at full speed.

Bhyve1 is an HPE G9 with an HPE 2-port 10 Gbps SFP+ 560FLR card.

Bhyve2 is an HPE G8 with an HPE NC523SFP card.

The FreeBSD backup server (non-Bhyve) is a Supermicro with two HPE NC523SFP NICs (same as in Bhyve2) in laggs, all working at full speed.

FW: two HPE G9 in HA with 10G Mellanox/Emulex NICs (can't remember the model) and HPE NC523SFP NICs.

Everything uses jumbo frames.
Both Bhyve1 and Bhyve2 ran ESXi before without problems, so the hardware is working.

It's something in the local host network configuration on the Bhyve servers.

Does anyone have an idea about the problem?




BHYVE 1
Installation process of the hosts (I changed IPs, VLANs and NICs to make it easier to read)


Code:
# freebsd-update fetch install
# pkg update && pkg upgrade
# pkg install htop nano iftop tcpdump zabbix5-agent wget


# vi /etc/ssh/sshd_config   (harden)


# vi /boot/loader.conf

### CPU AESNI
 aesni_load="YES"
 geom_eli_load="YES"

### Lagg
#if_lagg_load="YES"


# vi /etc/rc.conf

 ifconfig_bge0="inet 10.10.10.20 netmask 255.255.255.0"
 defaultrouter="10.10.10.5"

 cloned_interfaces="vlan10 vlan20"
 ifconfig_ix0="up mtu 9000"
 ifconfig_vlan10="inet 10.10.1.20 netmask 255.255.255.0 vlan 10  vlandev ix0 mtu 9000"
 ifconfig_vlan20="inet 10.20.1.20 netmask 255.255.255.0 vlan 20  vlandev ix0 mtu 9000"

 pf_enable="YES"
 pf_rules="/etc/pf.conf"
 pflog_enable="YES"
 pflog_logfile="/var/log/pflog"

 kld_list="aesni coretemp vmm"
 vm_enable="YES"
 vm_dir="zfs:zroot/vm"

 zabbix_agentd_enable="yes"

 ntpdate_enable="YES"
 ntpd_enable="YES"


Installation process of Bhyve on the hosts
Code:
# pkg install vm-bhyve uefi-edk2-bhyve uefi-edk2-bhyve-csm grub2-bhyve

# zfs create zroot/vm
# zfs set mountpoint=/vm zroot/vm

# vm switch create -t standard -i vlan10 -m 9000 -p net10
# vm switch create -t standard -i vlan20 -m 9000 -p net20


(vm config file)
Code:
# cat /vm/freebsdtest1/freebsdtest1.conf
loader="bhyveload"
cpu=2
memory=8G
network0_type="virtio-net"
network0_switch="net10"
disk0_type="virtio-blk"
disk0_name="disk0.img"



BHYVE 2
All the same except

Code:
# vi /etc/rc.conf

 ifconfig_bge0="inet 10.10.10.30 netmask 255.255.255.0"
 defaultrouter="10.10.10.5"

 ifconfig_ql1="up mtu 9000"

Installation process of Bhyve on the hosts
Code:
# vm switch create -n 10 -m 9000 -i ql1 -p net10
# vm switch create -n 20 -m 9000 -i ql1 -p net20



Bhyve1 with IPs and VLANs in /etc/rc.conf
Code:
#  vm switch list
NAME   TYPE      IFACE     ADDRESS  PRIVATE  MTU   VLAN  PORTS
mgmt   standard  vm-mgmt   -        no       -     -     bge0
net10  standard  vm-net10  -        no       9000  -     vlan10
net20  standard  vm-net20  -        no       9000  -     vlan20
…

Code:
# ifconfig tap1
tap1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
    description: freebsdtest1
    options=80000<LINKSTATE>
    ether 58:9c:fc:10:ff:ce
    groups: tap vm-port
    media: Ethernet autoselect
    status: active
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    Opened by PID 12127

Code:
# ifconfig vm-net10
vm-net10: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
    ether 42:d4:94:07:b6:b9
    id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
    maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
    root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
    member: tap3 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
            ifmaxaddr 0 port 35 priority 128 path cost 2000000
    member: tap2 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
            ifmaxaddr 0 port 34 priority 128 path cost 2000000
    member: vlan10 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
            ifmaxaddr 0 port 9 priority 128 path cost 2000
    groups: bridge vm-switch viid-1470f@
    nd6 options=9<PERFORMNUD,IFDISABLED>

Code:
# ifconfig vlan10
vlan10: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
    options=200401<RXCSUM,LRO,RXCSUM_IPV6>
    ether 38:ea:a7:16:66:90
    inet 10.10.1.210 netmask 0xffffff00 broadcast 10.10.1.255
    groups: vlan
    vlan: 10 vlanproto: 802.1q vlanpcp: 0 parent interface: ix0
    media: Ethernet autoselect (10Gbase-Twinax <full-duplex,rxpause,txpause>)
    status: active
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>



Bhyve2 with just UP in /etc/rc.conf

Code:
#  vm switch list
NAME   TYPE      IFACE     ADDRESS  PRIVATE  MTU   VLAN  PORTS
net10  standard  vm-net10  -        yes      9000  10    ql1
net20  standard  vm-net20  -        yes      9000  20    ql1

Code:
# ifconfig tap0
tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
    description: vmnet-freebsdtest-0-net10
    options=80000<LINKSTATE>
    ether 58:9c:fc:10:ff:e2
    groups: tap vm-port
    media: Ethernet autoselect
    status: active
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
    Opened by PID 57070

Code:
# ifconfig vm-net10
vm-net10: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
    ether ca:53:a3:1e:1d:17
    id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
    maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
    root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
    member: ql1.10 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
            ifmaxaddr 0 port 9 priority 128 path cost 2000
    groups: bridge vm-switch viid-1470f@
    nd6 options=9<PERFORMNUD,IFDISABLED>

Code:
# ifconfig ql1.10
ql1.10: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
    description: vm-vlan-net10-ql1.10
    options=80000<LINKSTATE>
    ether 24:be:05:ef:85:9c
    groups: vlan vm-vlan viid-8f8db@
    vlan: 10 vlanproto: 802.1q vlanpcp: 0 parent interface: ql1
    media: Ethernet autoselect (10Gbase-SR <full-duplex>)
    status: active
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>


The VMs' ifconfig looks like this:
Code:
# ifconfig vtnet0
vtnet0: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
        options=80028<VLAN_MTU,JUMBO_MTU,LINKSTATE>
        ether 58:9c:fc:06:3e:0c
        inet 10.10.1.160 netmask 0xffffff00 broadcast 10.10.1.255
        media: Ethernet autoselect (10Gbase-T <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>


I'm doing something wrong with the vm-bhyve setup and switches.

Any clue anyone?
 
But if I just set ifconfig_ix0="up mtu 9000" in /etc/rc.conf on the Bhyve host and use vm switch create -n 20 -m 9000 -i ix0 -p net20, I get less than 4 Gbps in one direction and only 0.5 Gbps in the other direction to another Bhyve VM.
Don't know if it's going to help or not but try to disable things like TSO and LRO on the ix0 interface. Turn off everything 'automatic'.
Code:
ifconfig_ix0="up mtu 9000 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso"
 
I did that test before and didn’t remember the result, so I did a new test.

I rebooted the server so no "old" stuff is left over, but the results are the same:

Bhyve2 FBSD-vm@vlan1 -> Esxi1 and 2 FBSD-vm@vlan2 = ~1.8 Gbps
Bhyve2 FBSD-vm@vlan1 <- Esxi1 and 2 FBSD-vm@vlan2 = ~3.1 Gbps

Code:
root@test1:/home/ami # iperf3 -c 10.20.1.18
Connecting to host 10.20.1.18 , port 5201
[  5] local 10.10.1.67 port 51477 connected to 10.20.1.18 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   153 MBytes  1.28 Gbits/sec    0   3.01 MBytes    
[  5]   1.00-2.00   sec   150 MBytes  1.26 Gbits/sec    0   3.01 MBytes    
[  5]   2.00-3.00   sec   162 MBytes  1.36 Gbits/sec    0   3.01 MBytes    
[  5]   3.00-4.00   sec   489 MBytes  4.10 Gbits/sec    0   3.01 MBytes    
[  5]   4.00-5.01   sec   382 MBytes  3.17 Gbits/sec   73    995 KBytes    
[  5]   5.01-6.00   sec   147 MBytes  1.25 Gbits/sec    0   1.86 MBytes    
[  5]   6.00-7.00   sec   152 MBytes  1.28 Gbits/sec    0   2.46 MBytes    
[  5]   7.00-8.00   sec   173 MBytes  1.45 Gbits/sec    0   3.01 MBytes    
[  5]   8.00-9.00   sec   176 MBytes  1.48 Gbits/sec    0   3.01 MBytes    
[  5]   9.00-10.00  sec   172 MBytes  1.45 Gbits/sec    0   3.01 MBytes    
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  2.11 GBytes  1.81 Gbits/sec   73             sender
[  5]   0.00-10.35  sec  2.11 GBytes  1.75 Gbits/sec                  receiver

iperf Done.
root@test1:/home/ami # iperf3 -c 10.20.1.18 -R
Connecting to host 10.20.1.18, port 5201
Reverse mode, remote host 10.20.1.18 is sending
[  5] local 10.10.1.67 port 28316 connected to 10.20.1.18 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   350 MBytes  2.94 Gbits/sec               
[  5]   1.00-2.00   sec   470 MBytes  3.94 Gbits/sec               
[  5]   2.00-3.00   sec   358 MBytes  3.00 Gbits/sec               
[  5]   3.00-4.00   sec   380 MBytes  3.19 Gbits/sec               
[  5]   4.00-5.00   sec   361 MBytes  3.03 Gbits/sec               
[  5]   5.00-6.00   sec   357 MBytes  2.99 Gbits/sec               
[  5]   6.00-7.00   sec   357 MBytes  3.00 Gbits/sec               
[  5]   7.00-8.00   sec   358 MBytes  3.00 Gbits/sec               
[  5]   8.00-9.00   sec   357 MBytes  3.00 Gbits/sec               
[  5]   9.00-10.00  sec   357 MBytes  2.99 Gbits/sec               
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.33  sec  3.62 GBytes  3.01 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  3.62 GBytes  3.11 Gbits/sec                  receiver



I just did a test from a production FreeBSD backup server (non-Bhyve) that is doing a lot of stuff all the time (4x10 Gbps), to the same FreeBSD iperf3 VM on ESXi and to the VM on Bhyve2. It sits on another VLAN, but I have tried these VLANs as well in my tests above on the Bhyve servers.

To FreeBSD-vm @ ESXI
Code:
root@storage01:/home/ami # iperf3 -c 10.20.1.18
Connecting to host 10.20.1.18, port 5201
[  5] local 10.35.8.150 port 13659 connected to 10.20.1.18 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   906 MBytes  7.60 Gbits/sec    0   3.00 MBytes     
[  5]   1.00-2.00   sec  1.04 GBytes  8.92 Gbits/sec    0   3.00 MBytes     
[  5]   2.00-3.00   sec  1.03 GBytes  8.88 Gbits/sec    0   3.00 MBytes     
[  5]   3.00-4.00   sec  1.03 GBytes  8.84 Gbits/sec  258   1.36 MBytes     
[  5]   4.00-5.00   sec  1.03 GBytes  8.89 Gbits/sec   80   3.00 MBytes     
[  5]   5.00-6.00   sec  1.04 GBytes  8.90 Gbits/sec   53   3.00 MBytes     
[  5]   6.00-7.00   sec  1.03 GBytes  8.86 Gbits/sec  180   1.96 MBytes     
[  5]   7.00-8.00   sec  1.03 GBytes  8.87 Gbits/sec  113   2.05 MBytes     
[  5]   8.00-9.00   sec  1.03 GBytes  8.86 Gbits/sec  193   1.72 MBytes     
[  5]   9.00-10.00  sec  1.03 GBytes  8.86 Gbits/sec  105   1.29 MBytes     
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.2 GBytes  8.75 Gbits/sec  982             sender
[  5]   0.00-10.21  sec  10.2 GBytes  8.56 Gbits/sec                  receiver


To FreeBSD-vm @ Bhyve2
Code:
root@storage01:/home/ami # iperf3 -c 10.10.1.67
Connecting to host 10.10.1.67, port 5201
[  5] local 10.35.8.150 port 19574 connected to 10.10.1.67 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   346 MBytes  2.90 Gbits/sec    0   1.76 MBytes     
[  5]   1.00-2.00   sec   343 MBytes  2.88 Gbits/sec    0   1.76 MBytes     
[  5]   2.00-3.00   sec   394 MBytes  3.31 Gbits/sec    0   1.76 MBytes     
[  5]   3.00-4.00   sec   417 MBytes  3.50 Gbits/sec    0   1.76 MBytes     
[  5]   4.00-5.00   sec   359 MBytes  3.01 Gbits/sec    0   1.76 MBytes     
[  5]   5.00-6.00   sec   369 MBytes  3.10 Gbits/sec    0   1.76 MBytes     
[  5]   6.00-7.00   sec   364 MBytes  3.05 Gbits/sec    8   1.16 MBytes     
[  5]   7.00-8.00   sec   373 MBytes  3.13 Gbits/sec    0   1.75 MBytes     
[  5]   8.00-9.00   sec   354 MBytes  2.97 Gbits/sec    0   1.75 MBytes     
[  5]   9.00-10.00  sec   355 MBytes  2.98 Gbits/sec    0   1.75 MBytes     
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  3.59 GBytes  3.08 Gbits/sec    8             sender
[  5]   0.00-10.00  sec  3.59 GBytes  3.08 Gbits/sec                  receiver
In reverse, # iperf3 -c 10.10.1.67 -R is about 1.6 Gbps. So no difference.
 
Can you share the ifconfig ix0 output for the Bhyve host's physical interface, not the vlan10 interface? I want to see if vlanhwtag, vlanhwcsum and vlanhwtso are enabled.
 
Oh! I did not share that one!

Here it is:

Bhyve2 running latest ifconfig_ql1="up mtu 9000 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso" in /etc/rc.conf
Code:
ql1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
    options=80038<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,LINKSTATE>
    ether 24:be:05:ef:85:9c
    media: Ethernet autoselect (10Gbase-SR <full-duplex>)
    status: active
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>

Bhyve1 running my IP and VLAN config in /etc/rc.conf
Code:
ix0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
    options=e407bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
    ether 38:ea:a7:16:66:90
    media: Ethernet autoselect (10Gbase-Twinax <full-duplex,rxpause,txpause>)
    status: active
    nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
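
For a quick test I can also turn the same offloads off on ix0 at runtime, without touching rc.conf; something like this one-liner, mirroring the rc.conf line suggested above:
Code:
# ifconfig ix0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso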
 
On my system I have a vm-server bridge that's connected to various VMs. Those VMs have untagged traffic. The vm-server bridge is connected to the lagg0.11 vlan(4) interface (this adds the tags). The lagg0 interface (LACP; igb1 and igb2) is trunked on my switch.

Sorry to slightly hijack the thread, but how did you do the management VLAN in your scenario?
 
The host this runs on has 4 ethernet ports; igb0 to igb3. I've set igb0 as the host's management interface. So that has an IP address I can connect on for starting/stopping VMs and do maintenance of the host itself. It's connected, untagged, to my (physical) switch, switch port is configured for my 'management' vlan. A lagg(4) LACP with igb1 and igb2 is used as the 'uplink' interface for the vm-bhyve switches. On the (physical) switch these two ports are bundled (LACP) and tagged with the vlans.
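
In rc.conf terms that layout boils down to something like this (a sketch; the management address here is just an example):
Code:
# igb0: untagged management interface (switch port sits in the management VLAN)
ifconfig_igb0="inet 192.0.2.10 netmask 255.255.255.0"
defaultrouter="192.0.2.1"

# igb1+igb2: LACP uplink for the vm-bhyve switches (tagged on the physical switch)
cloned_interfaces="lagg0"
ifconfig_igb1="up mtu 9000 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso"
ifconfig_igb2="up mtu 9000 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso"
ifconfig_lagg0="laggproto lacp laggport igb1 laggport igb2"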
 
Long time ago...
This bhyve project has been lying on the table for quite some time now. I did some more tests, but no luck getting a 10 Gbps network inside a VM.

Is there anyone with a working vm-bhyve install whose VMs run at a full 10 Gbps who can share the install process, please?

I tried other 10 gig cards, but got the same result. Then I just used the internal 1 Gbps NIC, and got the same result! I didn't get 1 Gbps with iperf3, just around 0.4 Gbps (same as with the 10 Gbps cards: a 2/3 bandwidth drop on all VMs). I tried with and without VLANs, simple basic installs, on 7 different HPE servers (G9, and G8 as well, DL360 and DL380, all heavily loaded). Is it something with HPE servers? Then I took an old Dell 720 (quite loaded, with 40C and 256 GB, same 10 Gbps card, but this server has spinning disks): same result.

Here is my latest install process:
Code:
### PreInstall & Update ###
===========================

# freebsd-update fetch install
# pkg update && pkg upgrade
# pkg install htop nano iftop tcpdump wget vm-bhyve edk2-bhyve uefi-edk2-bhyve-csm grub2-bhyve bhyve-firmware bhyve-rc zabbix5-agent


### Configs ###
===============

# rm /etc/ssh/sshd_config
# nano /etc/ssh/sshd_config
---------------------------
Port 22
Protocol 2
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
LogLevel VERBOSE
HostKey /etc/ssh/ssh_host_ed25519_key
#ciphers
#macs
#kexalgorithms
PermitRootLogin no
PermitEmptyPasswords no
#RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
PasswordAuthentication yes
#UsePrivilegeSeparation yes
ChallengeResponseAuthentication no
MaxAuthTries 4
LoginGraceTime 25s
#PrintLastLog yes
IgnoreRhosts yes
#RhostsRSAAuthentication no
HostbasedAuthentication no
#StrictModes yes
#ClientAliveInterval 300
#ClientAliveCountMax 2
Banner /etc/issue.txt
#Subsystem       sftp    internal-sftp -f AUTH -l VERBOSE
#Subsystem sftp /usr/lib/openssh/sftp-server
X11Forwarding no
AllowTcpForwarding yes
AllowStreamLocalForwarding no
GatewayPorts no
PermitTunnel no



# nano /boot/loader.conf
------------------------
### 10 gbps OCE Network Driver
# if_ix_updated_load="YES"
# if_oce_load="YES"
 if_qlxgb_load="YES"

### CPU AESNI
 aesni_load="YES"
 geom_eli_load="YES"

### Lagg - not configured on the test machine
#if_lagg_load="YES"

### 1024x768
 hw.vga.textmode=1
 kern.vty=vt
 kern.vt.fb.default_mode="1024x768"



# nano /etc/rc.conf
-------------------

### Network MGMT
 ifconfig_bge0="inet 10.20.30.150 netmask 255.255.255.0"
 defaultrouter="10.20.30.1"

### Internal 10gbps Server Network
 ifconfig_ql0="up mtu 9000 -rxcsum -txcsum -tso -lro"

### Bhyve configuration
 kld_list="aesni coretemp vmm"
 vm_enable="YES"
 vm_dir="zfs:zroot/vm"

### pf
 pf_enable="YES"
 pf_rules="/etc/pf.conf"
 pflog_enable="YES"
 pflog_logfile="/var/log/pflog"

### Zabbix - not configured on the test machine
# zabbix_agentd_enable="yes"

### NTP server
 ntpdate_enable="YES"
 ntpd_enable="YES"



# nano /etc/ntp.conf
--------------------
server 10.30.30.20
server 10.30.30.21


# nano /etc/pf.conf
-------------------
pass all    # only on the test machine


# nano /etc/sysctl.conf - not configured in new setup
-----------------------------------------------------
#net.link.tap.up_on_open=1
#net.link.ether.inet.proxyall=1
#net.inet.ip.random_id =1
#net.inet.ip.forwarding=1



### Configure Bhyve ###
=======================

# zfs create zroot/vm
# zfs set mountpoint=/vm zroot/vm
# reboot

# vm init
# cp /usr/local/share/examples/vm-bhyve/default.conf /vm/.templates/
# cp /vm/.templates/default.conf /vm/.templates/freebsd.conf

# vm iso https://download.freebsd.org/releases/amd64/amd64/ISO-IMAGES/13.1/FreeBSD-13.1-RELEASE-amd64-dvd1.iso
# vm switch create -i bge0 Internal
# vm switch create -m 9000 -i ql0 10gig




### Installing FreeBSD VM ###
=============================

# cp /vm/.templates/freebsd.conf /vm/.templates/test1.conf

# nano /vm/.templates/test1.conf     (same on test2.conf)
----------------------------------
loader="bhyveload"
cpu=4
memory=8192M
network0_type="virtio-net"
network0_switch="Internal"
disk0_type="virtio-blk"
disk0_name="disk0.img"

-= OR (for 10 gig nic) =-

loader="bhyveload"
cpu=4
memory=8192M
network0_type="virtio-net"
network0_switch="10gig"
disk0_type="virtio-blk"
disk0_name="disk0.img"


# cd /vm
# vm create -t test1 -s 20G test1
# vm install -f test1 FreeBSD-13.1-RELEASE-amd64-dvd1.iso


And on the FreeBSD VMs (test1 and test2), everything is kept as simple as it can be.
Code:
### Install and Configure on the FreeBSD VM ###
===============================================

# freebsd-update fetch install
# pkg update && pkg upgrade
# pkg install nano iperf3

# nano /etc/rc.conf
-------------------
ifconfig_vtnet0="DHCP mtu 9000 -rxcsum -txcsum -tso -lro"

On this setup I get 600 Mbps on the 1 Gbps port and 3.2 Gbps on the 10 Gbps port. This is a new install today on a new server (number 9).

Anyone who has 10 Gbps on their VMs and has a config to share?

I did check out vale, but it seems outdated and not really supported.
I really want to get this bhyve setup working so I can dump VMware.
 
Anyone who has 10 Gbps on their VMs and has a config to share?

I'm interested in that as well, and at least, if it's possible, in how to create virtual switch and tap interfaces with 10G support.

Whenever I start my vm instance, my host system creates tap0 with "Ethernet 1000baseT":

Code:
tap0: flags=1008943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
        description: vmnet/bsd/0/vmswitch
        options=80000<LINKSTATE>
        ether 58:9c:fc:10:e1:5f
        groups: tap vm-port
        media: Ethernet 1000baseT <full-duplex>
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        Opened by PID 3464


Is it possible to set that virtual interface to 10G speed?
 