FreeBSD 8.2 x86: Bad throughput, especially over longer distances

I've been using FreeBSD for a little while now and I've run into a problem I can't solve, so I hope somebody can give me the magic hint. My server is connected to gigabit ethernet, and when it's booted with Linux (like GRML or CentOS) I reach throughput speeds of about 106MB/s from NL-NL and NL-DE. When I use FreeBSD, the same downloads run at 70MB/s and 20MB/s respectively. Of course I'm using wget -O /dev/zero in all cases ;) In the NL-DE case that is an 80MB/s+ difference, which is really a lot!

The server is a Dell PowerEdge with, according to dmesg, em0 and em1 being an Intel PRO/1000 Legacy Network Connection 1.0.3. I've read a lot of threads online about tweaking network settings, but they were often for older FreeBSD releases. I tried some of the tweaks, but they either did not help at all or made the performance even worse.

So, any idea is really appreciated! Thanks and a happy Christmas!
 
Distance doesn't have a lot to do with this. Latency of the network equipment in between does.

Tuning tips can be found in tuning(7).
 
SirDice said:
Distance doesn't have a lot to do with this. Latency of the network equipment in between does.

Tuning tips can be found in tuning(7).

That's true of course, but in general longer distances mean higher latency if the network is not the bottleneck. With other distributions there is no impact at all, so it has to be something latency- and FreeBSD-related... What do you recommend I tweak in this situation?
 
A small update:

I compiled a new FreeBSD kernel and enabled polling, which lowered the speed even further (the polling setup is sketched below). I also tried setting some kernel values to the following, which did not help either:

Code:
# larger static TCP send/receive buffers
net.inet.tcp.sendspace=131072
net.inet.tcp.recvspace=131072
# disable path MTU discovery
net.inet.tcp.path_mtu_discovery=0
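
For reference, this is roughly what enabling polling involves on 8.x: two kernel config options plus switching it on per interface with ifconfig (the HZ value here is only an example, not a recommendation):
Code:
# added to the custom kernel configuration
options DEVICE_POLLING
options HZ=1000

# then enabled at runtime per interface
ifconfig em0 polling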

I've also been messing with the inflight, udp and fastforwarding variables, among others, but there is no difference at all. I also installed a separate PCI-X Intel NIC, but the throughput remains exactly the same. I have set up a dual boot with Debian now and all interfaces work at full speed under Debian, so it's definitely not the hardware. Are there any known performance issues with the Intel driver for these older cards?
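
For completeness, the variables I mean are along these lines (the values shown are just illustrative toggles, not recommendations):
Code:
net.inet.tcp.inflight.enable=0
net.inet.ip.fastforwarding=1
net.inet.udp.recvspace=65536
net.inet.udp.maxdgram=57344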
 
Is
Code:
sysctl -w net.inet.ip.process_options=0
affecting the network throughput in any way? What is the MTU of your network interfaces? (A low MTU increases CPU usage.) What is the CPU load of the wget process? (It could be a bottleneck.) Is your download source an FTP server or an HTTP server? (This may affect CPU load.)
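
These can be checked quickly with something like:
Code:
sysctl net.inet.ip.process_options   # current value of the option
ifconfig em0                         # the mtu is shown on the first line
top -SH                              # per-thread view, including kernel threads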
 
The ip.process_options=0 setting seems to make the transfer go slower.

I've been tweaking around a little bit and with the following changes I'm now reaching about 90MB/s peak throughput:
Code:
#NETWORK PERFORMANCE TUNING
# set to at least 16MB for 10GE hosts
kern.ipc.maxsockbuf=16777216
# set autotuning maximum to at least 16MB too
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
# enable send/recv autotuning
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvbuf_auto=1
# increase autotuning step size
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_inc=524288
# turn off inflight limiting
net.inet.tcp.inflight.enable=0
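
(These are all runtime sysctls, so they can be applied on the fly with sysctl(8) and made persistent by putting them in /etc/sysctl.conf, e.g.:)
Code:
sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216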

FTP or HTTP seems to make almost no difference, about 1-2MB/s. The MTU of all interfaces is 1500.

The total CPU load at 90MB/s is 100%: 94% SYS and 6% USR (when I enable polling, almost all of the CPU time goes to interrupt). wget shows about 44% under WCPU at full speed.

The funny thing is that I checked this with Debian: at 106MB/s (full GE speed) it shows me 8% USR, 40% SYS and 52% IDLE. So FreeBSD needs double the CPU power for about 10% less speed.
 
Please show the output of the following commands:
Code:
top -aSCHIP                      # per-CPU load, including system/idle threads
vmstat -ai                       # interrupt counters
vmstat -m                        # kernel memory (malloc) statistics
sysctl dev.em | fgrep -v ": 0"   # non-zero em(4) counters and settings
netstat -ss                      # non-zero protocol statistics
netstat -i                       # per-interface packet and error counts
 
Thanks for the help :) The issue is solved now... simply by replacing the server with an HP. With the new server the load at 1GE is also higher than with Linux, but this one has enough power to handle it. However, with the base install the speed was still low; after applying that "#NETWORK PERFORMANCE TUNING" part the speed is a flat 110MB/s.

Now I do have one other issue... When I enable PF the load stays low, but transfer speeds are also low (~50MB/s)... Does PF cap the number of packets flowing through in any way?
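
If it helps to diagnose this, PF's limits, state table and counters can be inspected like this:
Code:
pfctl -si    # status, state-table size and packet counters
pfctl -sm    # configured limits (states, fragments, ...)
pfctl -sr    # currently loaded ruleset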
 