Slow network performance compared to Linux

I have a problem with network performance for a single stream - downloading big files from the server, etc.
I noticed that if I run Linux on the server it works better. I made some tests using iperf3.

It's probably not the network driver, because the bad results show up on both em0 and re0.

Maybe there is some kind of setting that throttles traffic?

Below you can see my results. I tested the same server using the rescue system with FreeBSD 11.1 and Linux 4.9.111.
The server is located in Germany (Hetzner); I tested it from my home network, which is limited to 30 Mbit/s.

Single connection FreeBSD (iperf3 -Rc)
Code:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  9.00 MBytes  7.55 Mbits/sec  174             sender
[  4]   0.00-10.00  sec  8.84 MBytes  7.42 Mbits/sec                  receiver

10 Parallel connections FreeBSD (iperf3 -P 10 -Rc)
Code:
[ ID] Interval           Transfer     Bandwidth       Retr
[SUM]   0.00-10.00  sec  37.2 MBytes  31.2 Mbits/sec  573             sender
[SUM]   0.00-10.00  sec  35.9 MBytes  30.1 Mbits/sec                  receiver

Single connection Linux
Code:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  36.1 MBytes  30.3 Mbits/sec   69             sender
[  4]   0.00-10.00  sec  35.6 MBytes  29.8 Mbits/sec                  receiver

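For completeness, the commands were roughly the following (the server side runs a plain iperf3 server; SERVER stands in for its address):
Code:
# on the server
iperf3 -s
# single stream, reverse mode (server sends to client)
iperf3 -R -c SERVER
# ten parallel streams, reverse mode
iperf3 -P 10 -R -c SERVER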
Any suggestions?
 
... I tested the same server using the rescue system with FreeBSD 11.1 and Linux 4.9.111
The server is located in Germany (Hetzner) ...
Any suggestions?

TSO on the network adapters of some virtual environments does not play well with FreeBSD. I have some AWS EC2 instances running, and two of them showed poor network performance as well until I disabled TSO.

In my /etc/rc.conf I have:
Code:
ifconfig_DEFAULT="SYNCDHCP -rxcsum -txcsum -lro -tso -vlanhwtso"

Perhaps you see a similar issue with the Hetzner server.
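If you want to try this without a reboot, a quick test on the live interface would look roughly like this (assuming the interface is em0; adjust for re0):
Code:
# turn off TSO, LRO and checksum offloading on the running interface
ifconfig em0 -tso -lro -rxcsum -txcsum
# re-run the iperf3 test; re-enable afterwards with:
# ifconfig em0 tso lro rxcsum txcsum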
 
You might want to take your problem to the freebsd-net mailing list; the FreeBSD devs responsible for the network code and drivers will be happy to help you.
 
What is your sysctl net.inet.tcp.cc.algorithm set to? I've found cubic to usually be the best for high throughput over low-loss links, especially on medium to high latency connections, but the last testing I did of them was some time ago.
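
If it's still on the default newreno, switching to CUBIC for a test is straightforward; a rough sketch (module and sysctl names as shipped with FreeBSD 11.x):
Code:
# show the current congestion control algorithm
sysctl net.inet.tcp.cc.algorithm
# load the CUBIC module and select it
kldload cc_cubic
sysctl net.inet.tcp.cc.algorithm=cubic
# to make it permanent: cc_cubic_load="YES" in /boot/loader.conf
# and net.inet.tcp.cc.algorithm=cubic in /etc/sysctl.conf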
 