OK, time for an update! Beware, it's a long one.
I have run a lot of tests, and the problem is somewhere in the bhyve network setup (that I do)... but where?
Short version:
I have two bhyve test hosts (and 9 ESXi hosts; two of them are used in this test) which get full network speed (9+ gbps) to FreeBSD and Ubuntu test VMs on the two ESXi hosts running iperf3. I only use one SFP+ connection in the test to keep it as simple as possible.
But the problem is with the bhyve VMs. All the bhyve VMs run at less than 6gbps (same iperf3 test) with three different
ifconfig
setups in
/etc/rc.conf and two different
vm switch …
configurations. See the configurations below.
There is some problem with what I do in the network configuration on the bhyve hosts, since the hosts themselves have full bandwidth.
My installation process for the hosts is below.
Long version:
I installed a new test bhyve server (Bhyve2), as I didn't want to mess around with the first one.
All bhyve hosts have full bandwidth, near 10gbps, running
iperf3 -c SERV_IP
to a VM running
iperf3 -s
on another VLAN on an ESXi host. I get the same results to a FreeBSD 13p5 server (not bhyve). I only use one SFP+ port in the test right now. No lagg, no nothing. As simple as possible.
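For completeness, both directions can be measured from the client side alone; iperf3's -R flag reverses the test (SERV_IP stands in for the server VM's address, as above):

```shell
# Server side (on the test VM):
iperf3 -s

# Client side, client -> server direction:
iperf3 -c SERV_IP

# Client side, server -> client direction (reverse mode):
iperf3 -c SERV_IP -R
```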
I get different results here.
If I configure IPs and VLANs in
/etc/rc.conf as mentioned above (and shown below), I get between 4 and 5gbps in both directions from every VM on the host to a test iperf3 VM on an ESXi host.
But if I just set
ifconfig_ix0="up mtu 9000"
in
/etc/rc.conf on the bhyve host and
vm switch create -n 20 -m 9000 -i ix0 -p net20
I get less than 4gbps in one direction and only 0.5gbps in the other direction to another bhyve VM.
Here are some results. In this test I have two ESXi test hosts and two bhyve test hosts on the same network, using two test VLANs. All the hosts and VMs are on the same network. All networks and VLANs are handled by two FreeBSD 13p5 HA firewalls.
There are more tests and more results, but these should give the picture.
Esxi1 host@vlan1 <-> Esxi2 host@vlan2 = 9+gbps
Bhyve1 host@vlan1 <-> Esxi1 and 2 FBSD-VM@vlan2 = 9+gbps
Bhyve2 host@vlan1 <-> Esxi1 and 2 FBSD-VM@vlan2 = 9+gbps
(All good here, full bandwidth, so the network and the routing firewalls are working.)
Esxi1 FBSD-vm@vlan1 <-> Esxi2 FBSD-vm@vlan2 = 9+gbps
Bhyve1 FBSD-vm@vlan1 -> Esxi1 and 2 FBSD-vm@vlan2 = ~5gbps
Bhyve1 FBSD-vm@vlan1 <- Esxi1 and 2 FBSD-vm@vlan2 = ~4.8gbps
(ifconfig = IPs and VLANs as mentioned above)
Bhyve2 FBSD-vm@vlan1 -> Esxi1 and 2 FBSD-vm@vlan2 = ~3.7gbps (unstable, 1-6gbps)
Bhyve2 FBSD-vm@vlan1 <- Esxi1 and 2 FBSD-vm@vlan2 = ~4gbps
(ifconfig = IPs and VLANs as mentioned above)
Bhyve2 FBSD-vm@vlan1 -> Esxi1 and 2 FBSD-vm@vlan2 = ~1.8gbps
Bhyve2 FBSD-vm@vlan1 <- Esxi1 and 2 FBSD-vm@vlan2 = ~3.3gbps
(ifconfig = up as mentioned above)
Bhyve2 FBSD-vm@vlan2 -> Bhyve1 FBSD-vm@vlan2 = ~1.4gbps
Bhyve2 FBSD-vm@vlan2 <- Bhyve1 FBSD-vm@vlan2 = ~1.4gbps
(Bhyve1 ifconfig = IPs and VLANs as mentioned above)
(Bhyve2 ifconfig = up as mentioned above)
Bhyve1 FBSD-vm@vlan1 <-> Bhyve1 FBSD-vm@vlan1 = ~4.8gbps
(Bhyve1 ifconfig = IPs and VLANs as mentioned above)
(Bhyve2 ifconfig = up as mentioned above)
Bhyve2 FBSD-vm@vlan1 <-> Bhyve2 FBSD-vm@vlan1 = Not working - no connection
(ifconfig = up as mentioned above)
Bhyve2 host @vlan2 -> Bhyve1 host @vlan2 = ~1.4gbps
Bhyve2 host @vlan2 <- Bhyve1 host @vlan2 = ~3gbps
(Bhyve1 ifconfig = IPs and VLANs as mentioned above)
(Bhyve2 ifconfig = IPs and VLANs as mentioned above)
All FreeBSD VMs are installed the same way and use the same internal DNS and NTP servers. So everything is more or less the same on every host.
All hosts (ESXi and bhyve) have full network throughput in every direction,
except Bhyve1 <-> Bhyve2. All VMs on ESXi have full throughput. Bhyve VM throughput depends on the
ifconfig setup
in
/etc/rc.conf and never exceeds ~6-7gbps.
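One thing that might be worth ruling out here (my assumption, not something confirmed by the tests above): FreeBSD's if_bridge is known to misbehave with TSO/LRO enabled on member interfaces, and the vlan10 output further down shows LRO set. A quick, reversible check could look like this (ix0/vlan10 as in the Bhyve1 config):

```shell
# Show current offload options on the NIC and the vlan interface:
ifconfig ix0
ifconfig vlan10

# Temporarily turn off LRO and TSO for a re-test:
ifconfig ix0 -lro -tso
ifconfig vlan10 -lro

# If it helps, it can be made persistent in /etc/rc.conf, e.g.:
# ifconfig_ix0="up mtu 9000 -lro -tso"
```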
On Bhyve1, the same VM running iperf3 to another FreeBSD VM on the same
host (Bhyve1) works. This host uses my old ifconfig setup with IPs and VLANs in
/etc/rc.conf, so the FW routes all the traffic.
On Bhyve2, a FreeBSD VM cannot make any connection to another VM on the same
host (Bhyve2), using
ifconfig_ix0="up mtu 9000"
in
/etc/rc.conf and
vm switch create -n 20 -m 9000 -i ix0 -p net20
. There is no route between them at all. Maybe an rdr rule in PF is needed?
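On the PF side, rather than an rdr rule, a common approach (an assumption on my part; adjust the interface names to your setup) is to exempt the vm-bhyve bridges and tap ports from filtering so PF cannot interfere with bridged VM traffic:

```
# /etc/pf.conf fragment: do not filter on the vm-bhyve switch
# bridges or on any interface in the "tap" group (the VM ports)
set skip on vm-net10
set skip on vm-net20
set skip on tap
```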
I have tried enabling and disabling PF on both the hosts and the VMs; no difference in the speed tests.
Hardware
All 9 ESXi hosts are HPE G9; some have HPE 2-port 10gbps SFP+ 530FLR-SFP cards, others HPE 2-port 10gbps SFP+ 560FLR, and all servers also have an HPE NC523SFP NIC. All work at full speed.
Bhyve1 is an HPE G9 with an HPE 2-port 10gbps SFP+ 560FLR card.
Bhyve2 is an HPE G8 with an HPE NC523SFP card.
The FreeBSD backup server (non-bhyve) is a Supermicro with two HPE NC523SFP NICs (same as in Bhyve2) in laggs, all working at full speed.
FW: two HPE G9 in HA with 10G Mellanox/Emulex NICs (can't remember the model) and HPE NC523SFP NICs.
Everything is using jumbo frames.
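Since everything depends on jumbo frames, a simple sanity check (my suggestion, not part of the tests above) is a don't-fragment ping just under the 9000-byte MTU between the hosts and VMs; if any hop is stuck at a smaller MTU, it will fail (10.20.1.20 is the vlan20 address from the Bhyve1 config below):

```shell
# 8972 = 9000 (MTU) - 20 (IP header) - 8 (ICMP header).
# -D sets the don't-fragment bit; -s sets the payload size.
ping -D -s 8972 10.20.1.20
```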
Both Bhyve1 and Bhyve2 ran ESXi before without problems, so the hardware is working.
It must be something in the local network configuration on the bhyve servers.
Does anyone have an idea about the problem?
BHYVE 1
Installation process of the hosts (I changed IPs, VLANs and NIC names as it's easier to read/see)
Code:
# freebsd-update fetch install
# pkg update && pkg upgrade
# pkg install htop nano iftop tcpdump zabbix5-agent wget
# vi /etc/ssh/sshd_config (harden)
# vi /boot/loader.conf
### CPU AESNI
aesni_load="YES"
geom_eli_load="YES"
### Lagg
#if_lagg_load="YES"
# vi /etc/rc.conf
ifconfig_bge0="inet 10.10.10.20 netmask 255.255.255.0"
defaultrouter="10.10.10.5"
cloned_interfaces="vlan10 vlan20"
ifconfig_ix0="up mtu 9000"
ifconfig_vlan10="inet 10.10.1.20 netmask 255.255.255.0 vlan 10 vlandev ix0 mtu 9000"
ifconfig_vlan20="inet 10.20.1.20 netmask 255.255.255.0 vlan 20 vlandev ix0 mtu 9000"
pf_enable="YES"
pf_rules="/etc/pf.conf"
pflog_enable="YES"
pflog_logfile="/var/log/pflog"
kld_list="aesni coretemp vmm"
vm_enable="YES"
vm_dir="zfs:zroot/vm"
zabbix_agentd_enable="YES"
ntpdate_enable="YES"
ntpd_enable="YES"
Installation process of Bhyve on the hosts
Code:
# pkg install vm-bhyve uefi-edk2-bhyve uefi-edk2-bhyve-csm grub2-bhyve
# zfs create zroot/vm
# zfs set mountpoint=/vm zroot/vm
# vm switch create -t standard -i vlan10 -m 9000 -p net10
# vm switch create -t standard -i vlan20 -m 9000 -p net20
(vm config file)
Code:
# cat /vm/freebsdtest1/freebsdtest1.conf
loader="bhyveload"
cpu=2
memory=8G
network0_type="virtio-net"
network0_switch="net10"
disk0_type="virtio-blk"
disk0_name="disk0.img"
BHYVE 2
All the same, except:
Code:
# vi /etc/rc.conf
ifconfig_bge0="inet 10.10.10.30 netmask 255.255.255.0"
defaultrouter="10.10.10.5"
ifconfig_ql1="up mtu 9000"
Installation process of Bhyve on the hosts
Code:
# vm switch create -n 10 -m 9000 -i ql1 -p net10
# vm switch create -n 20 -m 9000 -i ql1 -p net20
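One detail that might be worth double-checking against the vm-bhyve man page (this is my reading of the switch options, so please verify): in vm switch create, -p marks the switch as private, which stops the switch's ports from talking to each other, and the Bhyve2 vm switch list output further down indeed shows PRIVATE = yes. The switch name is the positional argument, so -p may not be needed at all:

```shell
# Create the vlan-tagged switches without the -p (private) flag;
# the last argument is the switch name:
vm switch create -t standard -i ql1 -n 10 -m 9000 net10
vm switch create -t standard -i ql1 -n 20 -m 9000 net20

# Confirm that the PRIVATE column now shows "no":
vm switch list
```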
Bhyve1 with IPs and VLANs in
/etc/rc.conf
Code:
# vm switch list
NAME TYPE IFACE ADDRESS PRIVATE MTU VLAN PORTS
mgmt standard vm-mgmt - no - - bge0
net10 standard vm-net10 - no 9000 - vlan10
net20 standard vm-net20 - no 9000 - vlan20
…
Code:
# ifconfig tap1
tap1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
description: freebsdtest1
options=80000<LINKSTATE>
ether 58:9c:fc:10:ff:ce
groups: tap vm-port
media: Ethernet autoselect
status: active
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
Opened by PID 12127
Code:
# ifconfig vm-net10
vm-net10: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether 42:d4:94:07:b6:b9
id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
member: tap3 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 35 priority 128 path cost 2000000
member: tap2 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 34 priority 128 path cost 2000000
member: vlan10 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 9 priority 128 path cost 2000
groups: bridge vm-switch viid-1470f@
nd6 options=9<PERFORMNUD,IFDISABLED>
Code:
# ifconfig vlan10
vlan10: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=200401<RXCSUM,LRO,RXCSUM_IPV6>
ether 38:ea:a7:16:66:90
inet 10.10.1.210 netmask 0xffffff00 broadcast 10.10.1.255
groups: vlan
vlan: 10 vlanproto: 802.1q vlanpcp: 0 parent interface: ix0
media: Ethernet autoselect (10Gbase-Twinax <full-duplex,rxpause,txpause>)
status: active
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
Bhyve2 with just "up" in
/etc/rc.conf
Code:
# vm switch list
NAME TYPE IFACE ADDRESS PRIVATE MTU VLAN PORTS
net10 standard vm-net10 - yes 9000 10 ql1
net20 standard vm-net20 - yes 9000 20 ql1
Code:
# ifconfig tap0
tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
description: vmnet-freebsdtest-0-net10
options=80000<LINKSTATE>
ether 58:9c:fc:10:ff:e2
groups: tap vm-port
media: Ethernet autoselect
status: active
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
Opened by PID 57070
Code:
# ifconfig vm-net10
vm-net10: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
ether ca:53:a3:1e:1d:17
id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
member: ql1.10 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
ifmaxaddr 0 port 9 priority 128 path cost 2000
groups: bridge vm-switch viid-1470f@
nd6 options=9<PERFORMNUD,IFDISABLED>
Code:
# ifconfig ql1.10
ql1.10: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
description: vm-vlan-net10-ql1.10
options=80000<LINKSTATE>
ether 24:be:05:ef:85:9c
groups: vlan vm-vlan viid-8f8db@
vlan: 10 vlanproto: 802.1q vlanpcp: 0 parent interface: ql1
media: Ethernet autoselect (10Gbase-SR <full-duplex>)
status: active
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
The VMs' ifconfig looks like this:
Code:
# ifconfig vtnet0
vtnet0: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=80028<VLAN_MTU,JUMBO_MTU,LINKSTATE>
ether 58:9c:fc:06:3e:0c
inet 10.10.1.160 netmask 0xffffff00 broadcast 10.10.1.255
media: Ethernet autoselect (10Gbase-T <full-duplex>)
status: active
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
I'm doing something wrong in the vm-bhyve setup and switch configuration.
Any clue, anyone?