Changing the hypervisor might solve this issue, but it's not an option for me: the products you mentioned are also far from perfect, and as a VMware VMUG member I use VMware products professionally.
Trying iperf with pre-generated files is worth a shot, but that alone shouldn't be the reason for such poor performance.
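For reference, this is roughly how I'd run that test with iperf3, streaming a pre-generated file instead of iperf's internal data pattern (the file path and `<vm-ip>` are placeholders):

```shell
# Generate 64 MiB of incompressible random data to stream through iperf
# (the path is just an example; any scratch location works)
dd if=/dev/urandom of=/tmp/iperf-payload.bin bs=1M count=64 2>/dev/null

# Server side, on the VM under test:
#   iperf3 -s
# Client side: send the pre-generated file instead of iperf's own pattern
#   iperf3 -c <vm-ip> -F /tmp/iperf-payload.bin
```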
Jumbo frames are enabled on my physical switch, on the VMware vSwitch, and on some server VMs (for the iSCSI and NFS networks). But enabling jumbo frames on client systems is always a bad idea. It's a workaround for slow hardware or for non-optimized OSs and drivers. Come on, it's 2023 and every modern processor can handle 10G of non-firewalled throughput.
If the OS and drivers are optimized, there is no need for jumbo frames. Today's hardware is fast enough to handle high throughput at the default MTU. These are the results I got today with MTU 1500 between my Windows workstation and VMs running the following OSs (single-flow connections only, from the Windows workstation to the VM):
Fresh FreeBSD 13.2 install: 5 Gbit/s
Matured OPNsense 23.7.4 installation (based on FreeBSD 13.2): 2.7 Gbit/s
Fresh OPNsense 23.7.4 installation with pf enabled and an allow-all rule: 2.8 Gbit/s
Fresh OPNsense 23.7.4 installation with pf disabled: 3.2 Gbit/s
Ubuntu 22.04 LTS: over 9 Gbit/s
OmniOS: 3.5 Gbit/s
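For reproducibility: the numbers above are single-stream runs, roughly like the following (the address 192.0.2.10 is a placeholder for the VM, and the MTU check assumes a Linux endpoint with sysfs):

```shell
# Hypothetical invocation; 192.0.2.10 stands in for the VM's address.
# On the VM:           iperf3 -s
# On the workstation:  iperf3 -c 192.0.2.10 -t 30    (one TCP stream)
# Note: adding -P 4 would run four parallel streams and can mask per-flow limits.

# Confirm the default MTU on each Linux endpoint first (sysfs, no extra tools):
for dev in /sys/class/net/*; do
  printf '%s %s\n' "${dev##*/}" "$(cat "$dev/mtu")"
done
```

Single-flow results are the interesting ones here, because parallel streams hide exactly the per-connection bottleneck I'm describing.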
You can clearly see that this is not a hypervisor issue. ESXi is fast enough, and Ubuntu nearly reaches line speed (a Windows VM would behave the same). The reason the other OSs are slow is that they lack OS and driver optimization. For instance:
https://www.illumos.org/issues/15907
I am using a Cisco CBS350-8MGP-2X. My home is cabled with Cat 7, and I can't see any frame or packet retransmissions or drops on the switch or in the hypervisor. The Ubuntu result also clearly shows that the network is not the reason: line speed is achievable here.
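The switch counters can also be cross-checked from the endpoint side. On a Linux VM (e.g. the Ubuntu guest), the kernel keeps its own TCP retransmission counter in /proc/net/snmp; a minimal sketch to read it (Linux-specific):

```shell
# Read the kernel's TCP retransmission counter (Linux, /proc/net/snmp).
# The first "Tcp:" line holds the field names, the second the values,
# so match the two lines up by column index.
awk '/^Tcp:/ {
  if (!header_seen) { for (i = 1; i <= NF; i++) name[i] = $i; header_seen = 1 }
  else { for (i = 1; i <= NF; i++) if (name[i] == "RetransSegs") print "RetransSegs:", $i }
}' /proc/net/snmp
```

If that counter stays near zero during an iperf run, retransmissions can be ruled out as the cause of the low single-flow throughput.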