vmx showing lower performance in FreeBSD 13

We have changed the NIC from E1000V to vmx and we see a drop in performance when we run netperf.
Also I see:
Code:
vmx0: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=4800038<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,NOMAP>
        ether 00:0c:29:0c:5a:f3
        inet 10.10.0.28 netmask 0xffffffe0 broadcast 10.10.0.31
        media: Ethernet autoselect
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
The media line shows only autoselect, but on em0 it shows full-duplex. Can we say vmx is in full-duplex mode? Is this a known issue?
Code:
em0: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=481249b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LRO,WOL_MAGIC,VLAN_HWFILTER,NOMAP>
        ether 00:50:56:a7:93:8f
        inet 10.10.4.18 netmask 0xffffffe0 broadcast 10.10.4.31
        media: Ethernet autoselect (1000baseT <full-duplex>)
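For reference, the media question above can be checked directly; the commands below are a sketch (the interface name and the em0 host's IP are taken from the output in this post, and they assume a netserver instance is already running on the em0 side):

```shell
# List the media types the driver actually supports. A paravirtual vmx
# device usually reports only autoselect, so the missing "<full-duplex>"
# is expected rather than a sign of a half-duplex link.
ifconfig -m vmx0

# A simple 30-second TCP throughput check against the em0 host.
netperf -H 10.10.4.18 -t TCP_STREAM -l 30
```

Since vmx is paravirtual, the advertised media does not reflect a negotiated wire speed the way it does on em.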
 

Attachments

  • 740_183_1.png (9.2 KB)
IIRC the single queue that is available by default severely limits vmx performance.
If you are using ESXi and have MSI-X available, you can disable hw.pci.honor_msi_blacklist to use all available queues (see vmx(4)).

I haven't used VMware for quite a while (and never in production...), but if they offer virtio devices you should always go with them as they are far more optimized than drivers for the (proprietary and closed source) VMXNET virtual interfaces...
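For anyone finding this later, the tunable mentioned above is a loader tunable, so it goes in /boot/loader.conf and takes effect after a reboot (a sketch; the sysctl name is from vmx(4), and how many queues you actually get depends on the VM's vCPU and NIC configuration):

```shell
# /boot/loader.conf — stop honoring the MSI blacklist so vmx can use
# MSI-X, and therefore multiple queues, on ESXi (see vmx(4)).
# Requires a reboot to take effect.
hw.pci.honor_msi_blacklist="0"
```

After rebooting, `vmstat -i | grep vmx` should show one interrupt vector per queue if MSI-X is actually in use.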
 
We are using ESXi 6.7, and hw.pci.honor_msi_blacklist is set to 1.
We see a small improvement with -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso.

But at least we expected it to be better than E1000V.
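For readers trying the same thing, those are plain ifconfig(8) options; a sketch (interface name and address taken from the first post, and the change is temporary until reboot):

```shell
# Turn off checksum, LRO, and TSO offloads on vmx0 for testing.
# This does not survive a reboot.
ifconfig vmx0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso

# To make it persistent, fold the same options into /etc/rc.conf, e.g.:
# ifconfig_vmx0="inet 10.10.0.28 netmask 255.255.255.224 -lro -tso"
```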
 

Attachments

  • 522_40_1.png (3.1 KB)
Just waking this up. I can see there are a number of bugs related to the vmx driver for FreeBSD; I have added mine to the referenced bug, for what good that will do.
It seems the bugs are not being picked up by anyone on the dev team. The status shows as New, with no validation from the developers.

The VMXNET3 VMware ESXi adaptor presents as vmx in FreeBSD 13 and is problematic. Performance is substantially degraded compared to Linux VMs, which all run on the same physical hardware (Broadcom NetXtreme II quad port) at wire speed (1G).

VMware is one of the largest virtualisation platforms, and VMXNET3 is its default adaptor. It would be good to see the BSD OSes perform well on ESXi.

Cheers
Tony
 