VMXNet driver with FreeBSD 8.0

I just finished an install of FreeBSD 8.0-STABLE via CD as a virtual machine under ESXi 4 Update 1. The installation went smoothly and I was even able to install VMware Tools after compiling them from source. I have the VMXNet2 Enhanced driver selected for the network adapter and have added the following entries to rc.conf:
Code:
ifconfig_vxn0="inet 192.168.1.5 netmask 255.255.240.0"
ifconfig_vxn0="up"
During the boot process VMware Tools loads successfully, but the address is not assigned. If I run ifconfig manually after boot to assign the address, everything works fine until the next restart. Alternatively, if I switch to the e1000 driver and change the rc.conf references from ifconfig_vxn0 to ifconfig_em0, everything works fine. Any suggestions on how to correct this so the VMXNet2 Enhanced driver runs under FreeBSD 8.0?
 
The simplest method would be to add a short sleep interval at the end of the VMware Tools startup script (a sketch follows below), but that would not help if VMware Tools is started AFTER the addresses are assigned by another startup script.
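A minimal sketch of that workaround, assuming the tools are started by an rc script such as /usr/local/etc/rc.d/vmware-tools.sh (the script name and path are assumptions that depend on how the tools were installed):
Code:
# appended to the end of the VMware Tools start routine; pauses so
# the vxn interfaces have time to attach before netif assigns addresses
sleep 5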

Post the output of this command here:
# rcorder /etc/rc.d/* /usr/local/etc/rc.d/*
 
Omit the
Code:
ifconfig_vxn0="up"
line. Setting an IP address on an interface already implies "up". You are probably overriding the first line with the second one.
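With the redundant line removed, that leaves a single entry:
Code:
ifconfig_vxn0="inet 192.168.1.5 netmask 255.255.240.0"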
 
DutchDaemon said:
Omit the
Code:
ifconfig_vxn0="up"
line. Setting an IP address on an interface already implies "up". You are probably overriding the first line with the second one.

That fixed it, thanks!
 
I'm curious to know whether you have found a performance benefit from running the vmxnet2 driver over telling VMware to use an e1000, which FreeBSD sees as em0, etc.

Thanks,

Hugh
 
hblandford said:
I'm curious to know whether you have found a performance benefit from running the vmxnet2 driver over telling VMware to use an e1000, which FreeBSD sees as em0, etc.

Thanks,

Hugh

There are some fundamental differences between the e1000 and vmxnet2. Both work well, but vmxnet2 has some features the e1000 driver lacks, specifically hardware offloading and jumbo frames. The other difference depends on the physical host: if the host supports hardware virtualization you will "hypothetically" see lower latency, while without hardware virtualization the e1000 will carry lower CPU usage. On the flip side, if you need large ring sizes but don't need jumbo frames, the e1000 driver is what you'll want. For myself, I was looking at vmxnet2 because I was trying to set up an NFS link, in which case jumbo frames come in handy.
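For reference, a sketch of what a jumbo-frame setup might look like in rc.conf; the 9000-byte MTU is an assumption and only works if the vSwitch and the physical network are configured to accept it as well:
Code:
# jumbo frames on the vmxnet2 interface; the MTU value assumes the
# vSwitch and physical switches also pass 9000-byte frames
ifconfig_vxn0="inet 192.168.1.5 netmask 255.255.240.0 mtu 9000"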
 