Hello, I have the following setup.
File server with integrated gigabit LAN (re0), bridged (bridge0) with another 100 Mbit PCI NIC that is connected to a 100 Mbit switch.
Atom Ion HTPC with integrated gigabit LAN (re0), connected directly with a Cat5e cable to the file server's integrated gigabit port. It PXE-boots from the file server and uses NFS as its root filesystem.
Both machines are running FreeBSD 8.0-RELEASE with software compiled from ports.
The setup works like a charm, except there's something fishy about network throughput that I can't figure out. I've been googling, reading forums and mailing lists, and trying pretty much every tweak I could find.
iperf gives a steady throughput of 400-500 Mbit/s, peaking even at 600 Mbit/s. This is confirmed using iftop -i bridge0. Not quite gigabit, but close enough for cheap Realtek chips.
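For reference, a minimal way to run this kind of raw TCP test (iperf 2.x syntax; the server address is a placeholder for my actual setup):

```shell
# On the server: listen for TCP throughput tests
iperf -s

# On the client: 30-second test against the server
# (192.168.0.1 is a placeholder; -w bumps the TCP window size)
iperf -c 192.168.0.1 -t 30 -w 256k
```

Since this is a two-host test that needs real hardware, treat it as a command sketch rather than something to copy verbatim.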
Both ends agree on the media: Ethernet autoselect (1000baseT <full-duplex>), and forcing it doesn't change a thing.
The odd part is all the other protocols.
nfs: maximum throughput around 200 Mbit/s (dd and iftop both agree on this)
smb: maximum throughput around 100 Mbit/s (using Samba; iftop confirms)
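The NFS figure comes from a dd run over the mount, roughly along these lines (/mnt/nfs is a placeholder for wherever the export is mounted; bs=1m is FreeBSD dd syntax):

```shell
# Write test: push 1 GB of zeros across the NFS mount and time it
dd if=/dev/zero of=/mnt/nfs/testfile bs=1m count=1024

# Read test: pull the same file back to /dev/null
dd if=/mnt/nfs/testfile of=/dev/null bs=1m
```

Watching iftop -i bridge0 during the run gives the on-wire rate independently of dd's own numbers.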
Practically everything runs at pretty much the same speed it did on the 100 Mbit network (2x at best). I've read that gigabit should easily give over 50 MB/s (some claim over 100 MB/s, which is near the theoretical maximum of GbE).
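To keep the units straight, here's a quick sanity check of what those Mbit/s figures mean in MB/s (8 bits per byte, ignoring TCP/IP and Ethernet framing overhead):

```python
def mbit_to_mbyte(mbit_per_s: float) -> float:
    """Convert link throughput in Mbit/s to MB/s (8 bits per byte)."""
    return mbit_per_s / 8

# Raw gigabit ceiling, before protocol overhead
print(mbit_to_mbyte(1000))  # 125.0 MB/s

# What iperf is actually showing here
print(mbit_to_mbyte(500))   # 62.5 MB/s

# What NFS and SMB are delivering
print(mbit_to_mbyte(200))   # 25.0 MB/s
print(mbit_to_mbyte(100))   # 12.5 MB/s
```

So the gap is big in absolute terms: iperf is already at ~62 MB/s on the wire, while SMB is stuck near 100 Mbit speeds.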
Disk I/O on the server is fine, around 70-80 MB/s as expected.
Everything on both the server and the client idles while NFS and/or Samba are in use. With iperf, however, I did see the server CPU max out (interrupts at 80%) when polling was disabled.
Currently I have the defaults in sysctl and ifconfig (except polling enabled on the server). AIO is enabled in smb.conf, and NFS is as it ships with FreeBSD. I've tried rsize and wsize tuning, but every tuning parameter I can find gives only marginal gains (10-20%) compared to the 2x-4x speedup I'm missing here. All the tunings show the same pattern: iperf is 2x-4x faster than anything else.
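For concreteness, the rsize/wsize experiments were mount-time options along these lines (fserver:/export and /mnt/nfs are placeholders, and the sizes shown are examples I tried, not a recommendation):

```shell
# FreeBSD client-side NFS mount with explicit transfer sizes over TCP
mount -t nfs -o tcp,rsize=32768,wsize=32768 fserver:/export /mnt/nfs
```

The PXE-booted root mount gets its options from the DHCP/loader configuration instead, so tuning that one means touching the boot setup rather than re-mounting by hand.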
So the question is: why on earth does iperf reach 500 Mbit/s while NFS, Samba, and the rest don't? Some people blame the crappy Realtek chips, and I'd tolerate that explanation if iperf weren't so much faster than every "real" protocol. Even if it were Realtek's fault, I'd like some technical insight into why iperf performance is so much higher.