Slow network performance compared to Linux (again)

Hi.

About a year ago I posted a topic about network performance problems on FreeBSD compared to Linux at Hetzner.
I wasn't able to solve the problem and I forgot about it. Lately things have started getting worse.

I decided to run a simple test:

1. Created two virtual machines on DigitalOcean (I don't want to blame only Hetzner for this): one FreeBSD 12.0 and one Ubuntu 18.04.
2. Used the same server location to get comparable results.
3. Just installed iperf3 on both machines.
4. Ran tests from my home network and from other networks.

The results were as expected: from my network, the download speed from the FreeBSD server is very low compared to the Ubuntu one.
This happens only from my home network. Is it possible that my provider uses some kind of QoS to limit traffic?
It doesn't depend on the IP address; maybe on some other flags in the TCP protocol?
Maybe there is some setting in FreeBSD that can make its packets look like they came from Linux (to verify the problem)?


- for UDP the speeds are the same (but very low)
- if I use parallel mode in iperf3, I get similar results for FreeBSD and Linux
- I tried lots of FreeBSD network performance optimizations, but none of them helped (changing net.inet.tcp.cc.algorithm etc.; see the sketch below)
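
For reference, this is roughly how I listed and switched the congestion control algorithm (a minimal sketch; cc_htcp here stands in for any of the cc(4) modules):
Code:
# show the active and the currently loaded algorithms
sysctl net.inet.tcp.cc.algorithm
sysctl net.inet.tcp.cc.available
# load another algorithm and switch to it
kldload cc_htcp
sysctl net.inet.tcp.cc.algorithm=htcp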

Any ideas?

FREEBSD SERVER
Code:
~% iperf3 -t 5 -Rc 167.71.134.232
Connecting to host 167.71.134.232, port 5201
Reverse mode, remote host 167.71.134.232 is sending
[  5] local 192.168.1.100 port 54503 connected to 167.71.134.232 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   191 KBytes  1.56 Mbits/sec
[  5]   1.00-2.00   sec   280 KBytes  2.29 Mbits/sec
[  5]   2.00-3.00   sec   144 KBytes  1.18 Mbits/sec
[  5]   3.00-4.00   sec   184 KBytes  1.51 Mbits/sec
[  5]   4.00-5.00   sec   402 KBytes  3.29 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-5.05   sec  1.25 MBytes  2.08 Mbits/sec   51             sender
[  5]   0.00-5.00   sec  1.17 MBytes  1.97 Mbits/sec                  receiver


UBUNTU SERVER
Code:
~% iperf3 -t 5 -Rc 167.71.133.221
Connecting to host 167.71.133.221, port 5201
Reverse mode, remote host 167.71.133.221 is sending
[  5] local 192.168.1.100 port 54516 connected to 167.71.133.221 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.76 MBytes  14.8 Mbits/sec
[  5]   1.00-2.00   sec  2.10 MBytes  17.6 Mbits/sec
[  5]   2.00-3.00   sec  2.40 MBytes  20.1 Mbits/sec
[  5]   3.00-4.00   sec  2.37 MBytes  19.8 Mbits/sec
[  5]   4.00-5.00   sec  2.28 MBytes  19.2 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-5.00   sec  12.0 MBytes  20.2 Mbits/sec   25             sender
[  5]   0.00-5.00   sec  10.9 MBytes  18.3 Mbits/sec                  receiver
 
I'm having exactly the same problem. I booted a Linux image from a USB stick and iperf3 ran at full speed, but after returning to FreeBSD it runs ten times slower than on Linux. I tried all the recommendations from several posts on this forum, but nothing worked.

Any ideas? How can this be, when FreeBSD (in theory) has a superior network stack to Linux?

Thanks in advance.
 
Thank you SirDice for being so inclined to help others!

I'm running a 12.0-RELEASE kernel:
uname -a
Code:
FreeBSD mymachine 12.0-RELEASE-p4 FreeBSD 12.0-RELEASE-p4 GENERIC  amd64


Card model:
dmesg | grep re0
Code:
re0: <RealTek 8169/8169S/8169SB(L)/8110S/8110SB(L) Gigabit Ethernet> port 0xee00-0xeeff mem 0xfdefe000-0xfdefe0ff irq 16 at device 4.0 on pci2
re0: Chip rev. 0x10000000
re0: MAC rev. 0x00000000
miibus0: <MII bus> on re0
re0: Using defaults for TSO: 65518/35/2048
re0: Ethernet address: 64:70:02:04:a6:a6
re0: netmap queues/slots: TX 1/256, RX 1/256


Ifconfig shows:
ifconfig
Code:
em0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=81249b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LRO,WOL_MAGIC,VLAN_HWFILTER>
        ether 00:18:f3:df:3a:91
        media: Ethernet autoselect
        status: no carrier
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
re0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE>
        ether 64:70:02:04:a6:a6
        inet 192.168.1.135 netmask 0xffffff00 broadcast 192.168.1.255
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3
        inet 127.0.0.1 netmask 0xff000000
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
tap0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=80000<LINKSTATE>
        ether 00:bd:28:48:f7:00
        inet6 fe80::2bd:28ff:fe48:f700%tap0 prefixlen 64 scopeid 0x4
        inet 100.80.0.1 netmask 0xffffff00 broadcast 100.80.0.255
        groups: tap
        media: Ethernet autoselect
        status: active
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        Opened by PID 50162


The system has one wireless card (not configured) and one VPN configured with OpenVPN.

The route command shows:
route show default
Code:
   route to: default
destination: default
       mask: default
    gateway: 192.168.1.1
        fib: 0
  interface: re0
      flags: <UP,GATEWAY,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
       0         0         0         0      1500         1         0

And netstat:
netstat -r
Code:
Routing tables

Internet:
Destination        Gateway            Flags     Netif Expire
default            192.168.1.1        UGS         re0
100.80.0.0/24      link#4             U          tap0
100.80.0.1         link#4             UHS         lo0
localhost          link#3             UH          lo0
192.168.1.0/24     link#2             U           re0
192.168.1.135      link#2             UHS         lo0

Internet6:
Destination        Gateway            Flags     Netif Expire
::/96              localhost          UGRS        lo0
localhost          link#3             UH          lo0
::ffff:0.0.0.0/96  localhost          UGRS        lo0
fe80::/10          localhost          UGRS        lo0
fe80::%lo0/64      link#3             U           lo0
fe80::1%lo0        link#3             UHS         lo0
fe80::%tap0/64     link#4             U          tap0
fe80::2bd:28ff:fe4 link#4             UHS         lo0
ff02::/16          localhost          UGRS        lo0


Are there any other details you would like me to provide?

Thanks a lot in advance.
 
Well, I've upgraded to 12.1-RELEASE and the problems remain the same.

I'm using a Realtek Ethernet PCI card, so I can only use the re0 interface; the em0 interface is a wireless one, not configured yet.

The most surprising thing for me is that iperf3 between local machines runs at the full 1000 Mbit/s (regardless of direction: FreeBSD to Linux or Linux to FreeBSD), but when downloading from the Internet the Linux machine runs at maximum speed while FreeBSD reaches only a tenth of that maximum.

Any further ideas?

Thanks in advance.
 
I can confirm that switching from re(4) to em(4) increases the bandwidth from 650+ Mbps to nearly 900 Mbps on a 1000 Mbps switch. The output of
freebsd-version -ku is
Code:
14.1-RELEASE
14.1-RELEASE
 
I have to say I'm experiencing similar issues with VMs (FreeBSD at OVH) when uploading from FreeBSD machines. I always kind of blamed OVH for it, but it seems to be Linux/FreeBSD specific.

Uploads from home to OVH from FreeBSD machines max out at about 30 Mbit/s. This happens on all FreeBSD machines at home (2 Gbit fiber) and on a FreeBSD machine at a friend's place (1 Gbit fiber). When using Linux, e.g. a Raspberry Pi 4 on the same LAN, I max out the 250 Mbit/s of the VM. N.B. even when I use a bhyve Linux VM on one of the FreeBSD machines I have issues with, I can max out the upload; yet the host itself can't upload faster than 30 Mbit/s. Between my home and my friend's I easily max out the 1 Gbit. For testing I do HTTPS downloads or scp of a 1 GByte test file, as sketched below.
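
To be concrete, the test looks roughly like this (file name and host are placeholders):
Code:
# create a 1 GByte test file
dd if=/dev/urandom of=test.bin bs=1m count=1024
# download it over HTTPS, discarding the data
fetch -o /dev/null https://vm.example.com/test.bin
# or time an scp upload to the VM
scp test.bin user@vm.example.com:/tmp/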

The FreeBSD machines (all 14.1) have various NICs, and I have tried all of them:
igc0: <Intel(R) Ethernet Controller I225-V>
em0: <Intel(R) I219-LM ADL(17)
em0: <Intel(R) I218-V (2)>
igb0: <Intel(R) I211 (Copper)>

I've gone over a ton of forum posts and network optimisation suggestions, but I can't seem to find the magic setting. I'm quite puzzled. Any pointers, suggestions or other help is appreciated.
 
I quickly installed a FreeBSD VM on bhyve, but it has the same poor upload performance. Downloading is the same as before: full pipe.
 
I run FreeBSD as a VM under Proxmox and generally hit around 116 MB/s, so whatever is causing your issue, it's not FreeBSD. My internet connection is generally 936 Mbit/s and I see the full performance on the network stack. The local LAN is gigabit too, so I'm pretty much maxed out here. All my kernels are stock GENERIC ones.
 
I cannot see why it cannot be FreeBSD. macOS and Linux machines don't have the poor upload. Even a Linux VM on the FreeBSD host with the poor performance gets 10x the performance.
And note, it's not poor upload to every destination; I can max out uploads to other destinations. But I can't explain why an scp/HTTPS transfer can't exceed 3.3 MByte/s while other OSes, even run as VMs on the FreeBSD host, max out the VM's 250 Mbit traffic cap.
 
Yes, I do both of those, with the same result: the full 250 Mbit (the VM's max) down from OVH, and a low 25 Mbit up to OVH with BSD.
When the same test runs in an Ubuntu bhyve VM on the host from the test above: 250 Mbit down and almost 250 Mbit up. The first run is always a bit slow.

It's very frustrating, not so much the performance as not being able to come up with any valid explanation for it. It's not the hardware, AFAICS; the only difference is the FreeBSD OS.

I have some other uploads that aren't great, e.g. Speedtest (Ookla), but there it at least still exceeds 1 Gbit, and in multi-connection mode I can fill the 2 Gbit easily. Performance this bad I only seem to get when uploading from FreeBSD to OVH. So any hypotheticals or things to check are very welcome. Their support is of little to no help, which I kind of understand; you can't spend hours of FTE on an issue with a VM that costs 10-30 euro per month. At this point I'm just looking to migrate away. It was never really an issue before, but I want to reverse proxy some services from home through the VM, so now it is.

-- to add
With iperf3 I do see that FreeBSD uploads produce significantly more retries ('Retr') compared to downloads and to the Linux-based tests.

Code:
Connecting to host OVH, port 5201
[  5] local 1.2.3.4 port 18755 connected to 4.3.2.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  3.75 MBytes  31.5 Mbits/sec   20   42.7 KBytes     
[  5]   1.00-2.00   sec  2.62 MBytes  22.0 Mbits/sec   11   37.5 KBytes     
[  5]   2.00-3.00   sec  3.50 MBytes  29.4 Mbits/sec   10   51.8 KBytes     
[  5]   3.00-4.00   sec  3.12 MBytes  26.2 Mbits/sec   13   25.3 KBytes     
[  5]   4.00-5.00   sec  3.25 MBytes  27.3 Mbits/sec    5   61.3 KBytes     
[  5]   5.00-6.00   sec  3.12 MBytes  26.2 Mbits/sec   11   35.5 KBytes     
[  5]   6.00-7.00   sec  2.62 MBytes  22.0 Mbits/sec    6   38.7 KBytes     
[  5]   7.00-8.00   sec  4.00 MBytes  33.6 Mbits/sec    1   72.2 KBytes     
[  5]   8.00-9.00   sec  3.00 MBytes  25.2 Mbits/sec   15   61.3 KBytes     
[  5]   9.00-10.00  sec  2.75 MBytes  23.1 Mbits/sec    7   55.7 KBytes     
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  31.8 MBytes  26.6 Mbits/sec   99             sender
[  5]   0.00-10.01  sec  31.6 MBytes  26.5 Mbits/sec                  receiver
 
iRobbery, was your iperf3 test over the internet or through a VPN? I've never had such low speeds on a physical FreeBSD machine, but I have with a VM or due to firewall settings. I'll blindly throw some suggestions your way.

1. Is there a common network cable that could be bad?
2. Have you disabled some of the network card offloading settings? For example, in /etc/rc.conf:
Code:
ifconfig_em0="inet 192.168.1.133/24 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso -vlanhwfilter"
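
To test without a reboot, the same flags can also be toggled at runtime (em0 here is just the example interface; use yours):
Code:
ifconfig em0 -rxcsum -rxcsum6 -txcsum -txcsum6 -lro -tso -vlanhwtso -vlanhwfilter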
 
Just a guess: net.link.ifqmaxlen should be > bits-per-second / kern.hz / 5000
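
Spelled out with assumed numbers (a 1 Gbit/s link and the default kern.hz of 1000; check your own values):
Code:
# inspect the current values
sysctl net.link.ifqmaxlen kern.hz
# 1,000,000,000 bit/s / 1000 Hz / 5000 = 200, so the typical default of 50
# would be far too small; net.link.ifqmaxlen is a boot-time tunable:
echo 'net.link.ifqmaxlen="256"' >> /boot/loader.conf
# reboot for it to take effect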

Otherwise, when I observe gross bandwidth degradation, I usually look at a piece of tcpdump output (from a plain ssh or fetch transfer) to see what my actual bandwidth * delay product is and how that figures, i.e. when the ACK for a packet actually comes back. Because somewhere the thing is waiting, and one can usually figure out where.
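
A rough sketch of that approach (interface, host and packet count are placeholders):
Code:
# capture a short sample of a plain ssh or fetch transfer to the slow host
tcpdump -i re0 -n -c 2000 -w slow.pcap host 203.0.113.10
# print it with inter-packet time deltas to see how long ACKs take to return
tcpdump -n -ttt -r slow.pcap | less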
 
Thanks for your reply.

It is, however, a VM, as I wrote, and I strongly suspect some traffic filtering on their side, the DDoS protection or something. I did try those settings too; I was very happy to read a similar post, then disappointed when it did nothing for me. That happens :)
 
iperf is heavily CPU-bound, and I have often seen it perform badly inside VMs unless you run comical numbers of threads (e.g. 10-20 on an alleged 4-core VM); see the example below.
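
For example, a quick way to see whether a single stream is the bottleneck (the server address is a placeholder):
Code:
# one stream, then many parallel streams
iperf3 -c 192.0.2.10 -t 30
iperf3 -c 192.0.2.10 -t 30 -P 16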

I long ago resorted to sending the iperf3 traffic from two hosts (but _not_ the VM host) *through* that VM, instead of running iperf3 inside it. This gives much more practical figures and is usually very close to link speed. If it isn't, there's an issue with the underlying network of the VM (i.e. the hypervisor, its network configuration, or crappy NICs, i.e. Realtek...).
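
A rough sketch of that setup, with placeholder addresses (FreeBSD syntax; the VM in the middle must forward packets, e.g. net.inet.ip.forwarding=1):
Code:
# on host A (not the VM host):
iperf3 -s
# on host B: route the test traffic via the VM, then measure
route add 192.0.2.10 198.51.100.5
iperf3 -c 192.0.2.10 -t 30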
 
Keep in mind that downloading from OVH works fine, at the full speed of the VM (250 Mbit); uploading to the same VM from a BSD client gets 20-30 Mbit.

If I use a Linux client (which is a VM on the very FreeBSD client in question), I get the full 250 Mbit in both directions (the VM can't do more; it's the service limit).

What bothers me is that I can't come up with any valid reason why a FreeBSD client would be limited.
 
I suspect this is because of the TCP stack used by the default GENERIC kernel. I can only achieve 600+ Mbps with my re(4) NIC using iperf3 -c, yet I can get 900 Mbps on the same NIC with UDP using iperf3 -c -u. I checked the congestion control algorithm on my FreeBSD 14.0, which is CUBIC (cc_cubic(4)), while Linux may use a more aggressive algorithm like BBR (tcp_bbr(4)). In FreeBSD there are other options (e.g. cc_htcp(4), tcp_rack(4)), and mod_cc(9) is used to change the algorithm dynamically. I didn't test those algorithms myself, and some of them may require building the kernel manually.
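
For anyone who wants to try, switching should look roughly like this (a sketch I have not verified myself; on recent 14.x releases the tcp_rack/tcp_bbr modules appear to ship with the stock kernel, older releases may need a rebuild):
Code:
# congestion control algorithm only (works on a stock kernel):
kldload cc_htcp
sysctl net.inet.tcp.cc.algorithm=htcp
# the BBR TCP stack:
kldload tcp_bbr
sysctl net.inet.tcp.functions_default=bbr
# persist via /boot/loader.conf (tcp_bbr_load="YES") and /etc/sysctl.conf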
 
I did switch between cubic and htcp, but since that gave zero difference I wasn't convinced enough to try the others; it felt like it was no use. However, after your post I thought, okay, let's give tcp_bbr a try, and it worked. Excellent, sir, many thanks!
 