Link speed incredibly slow

Greetings,

I'm running FreeBSD 13.3-RELEASE-p5 on a VPS (Virtual Private Server) with a hosting provider.
I have noticed that ever since my VPS was turned up, the link speed has been incredibly slow.
Downloading large files is a chore; it takes forever. The same goes for
trying to download a file directly from my VPS using FileZilla, or via SFTP from the CLI of my
client system here at home.

Trying to update the packages takes forever using 'pkg update' and 'pkg upgrade' (notice the ETA)...

Code:
[carltonfsck@ssh ~]$ sudo pkg update -f
Password:
Updating FreeBSD repository catalogue...
Fetching meta.conf: 100%    178 B   0.2kB/s    00:01
Fetching data.pkg:   1%   83 KiB   7.2kB/s    21:36 ETA

Or, just trying to install a package takes a very long time (also, notice the ETA)....

Code:
[carltonfsck@ssh ~]$ sudo pkg install nmap
Password:
Updating FreeBSD repository catalogue...
Fetching data.pkg:  10%  760 KiB   4.3kB/s    23:51 ETA

I ran a speedtest and the results confirm that the link speed is severely degraded...
Code:
[carltonfsck@ssh ~]$ speedtest -s 55630

   Speedtest by Ookla

      Server: Lunavi Inc - Cheney, KS (id: 55630)
         ISP: Arp Networks
Idle Latency: 26.60 ms (jitter: 0.05ms, low: 26.56ms, high: 26.68ms)
    Download: 1.16 Mbps (data used: 1.3 MB)
                 26.54 ms (jitter: 0.33ms, low: 26.25ms, high: 37.25ms)
      Upload: 0.17 Mbps (data used: 197.1 kB)
                 26.50 ms (jitter: 0.12ms, low: 26.26ms, high: 26.97ms)
 Packet Loss: Not available.
  Result URL: https://www.speedtest.net/result/c/e24bdde1-247b-43bf-94c4-c30e9982e5fe
[carltonfsck@ssh ~]$


I've reached out to support and they said that they hadn't noticed any issues with other users on the same Hypervisor.
They ran some 'wget' commands to retrieve some files, as well as a speedtest on the primary host (hypervisor), without issues.

Is anyone else experiencing a similar issue on 13.3? Perhaps there's something wonky with the network stack? I'm at a loss as to what's going on here. How do I investigate further to determine what's causing the link speed to be so severely degraded?


Please tell me what you need me to provide in terms of command output to diagnose the issue.
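For reference, here's what I'd guess is a reasonable first pass of diagnostics I can run and post (the interface name and ping target below are just examples):

```shell
# Interface flags and which offload features are currently enabled
ifconfig -a

# Per-interface input/output errors and drops
netstat -i

# TCP settings most often involved in VM throughput problems
sysctl net.inet.tcp.cc.algorithm net.inet.tcp.functions_default

# Rule out MTU/fragmentation issues (1472 = 1500 minus 28 bytes of headers)
ping -D -s 1472 -c 3 8.8.8.8
```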



Thanks in advance!!

--Carltonfsck
 
TSO is often mentioned at the same time as slow VM networking.

e.g. https://forums.freebsd.org/threads/slow-network-performance-compared-to-linux.67200/post-668125

or

 

Thanks for the reply and info!

So I ran the following command and this is what I got....

[carltonfsck@ssh ~]$ sysctl net.inet.tcp.cc.algorithm
net.inet.tcp.cc.algorithm: newreno
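For completeness, my understanding is that the available algorithms can be listed and switched on the fly like this (CUBIC is just an example and may need its module loaded first):

```shell
# List the congestion-control modules the kernel currently knows about
sysctl net.inet.tcp.cc.available

# Load another algorithm (CUBIC here) if it isn't already listed
sudo kldload cc_cubic

# Switch the default for new connections; takes effect immediately
sudo sysctl net.inet.tcp.cc.algorithm=cubic
```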
 
Messed around loads with the TSO stuff, no joy for me in my Proxmox instances; the biggest gains I got were through...

Code:
cat /boot/loader.conf

autoboot_delay="2"
kern.eventtimer.periodic=1
kern.hz=100
tcp_bbr_load="YES"

hw.vga.textmode="0"
vbe_max_resolution="720p"


Changing congestion control was the key for me.
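If you try BBR, verify the module actually loaded before switching the stack over; something along these lines (the exact output will depend on your kernel):

```shell
# Confirm the tcp_bbr module actually loaded at boot
kldstat -m tcp_bbr

# 'bbr' must show up in the list of available TCP stacks
sysctl net.inet.tcp.functions_available

# Only then point new connections at the BBR stack
sudo sysctl net.inet.tcp.functions_default=bbr
```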
 

Thanks for your input!

Do I just add the 'tcp_bbr_load="YES"' line to my /boot/loader.conf?
 
Maybe that hoster is using broken macVtap interfaces for their VMs?
 
So, I added the two items as I was instructed...

[carltonfsck@ssh ~]$ cat /boot/loader.conf
tcp_bbr_load="YES"

[carltonfsck@ssh ~]$


[carltonfsck@ssh ~]$ cat /etc/sysctl.conf
net.inet.ip.random_id=1
net.inet.tcp.functions_default=bbr
[carltonfsck@ssh ~]$


I gave it a reboot and the issue still persists. Furthermore, I noticed the sysctl value did not change...


[carltonfsck@ssh ~]$ sysctl net.inet.tcp.functions_default
net.inet.tcp.functions_default: freebsd
[carltonfsck@ssh ~]$


Then I attempted to change the sysctl value on the fly...


[carltonfsck@ssh ~]$ sudo sysctl net.inet.tcp.functions_default=bbr
net.inet.tcp.functions_default: freebsd
sysctl: net.inet.tcp.functions_default=bbr: No such file or directory
[carltonfsck@ssh ~]$


Looks like I need to add an option to the kernel and recompile in order for this to work?
 
I don't think that problem has anything to do with congestion control...

If disabling offloading (TSO, LRO, RX-/TXCSUMs) doesn't solve it, there's most likely a problem with the hypervisor (KVM) and its network stack. Given that reports of very low network performance with FreeBSD/OPN-/PFsense and OpenBSD on proxmox are also relatively frequent in other forums lately, I suspect the linux folks have introduced various regressions in their KVM networking...

What hosting provider is that? Are they offering FreeBSD images, or is this another one of those "we support all OS as long as they are linux" hosters? There's a lengthy thread about VPS hosters for FreeBSD, in case you want to have a look at other options...
 
Most likely an error, since it requires a custom kernel with 'options TCPHPTS' set (as written in the very first paragraph of the manpage...).
You are correct:

[carltonfsck@ssh ~]$ sudo kldload tcp_bbr
Password:
kldload: can't load tcp_bbr: No such file or directory
[carltonfsck@ssh ~]$
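From what I gather, making tcp_bbr loadable would mean building a custom kernel, roughly like this (MYKERNEL is just a placeholder name; see tcp_bbr(4) for the exact requirements on a given release):

```shell
# Start a custom kernel config from GENERIC (amd64 assumed)
cd /usr/src/sys/amd64/conf
cp GENERIC MYKERNEL
echo 'options TCPHPTS' >> MYKERNEL   # high-precision timer system BBR needs

# Build, install, reboot
cd /usr/src
make buildkernel KERNCONF=MYKERNEL
make installkernel KERNCONF=MYKERNEL
shutdown -r now
```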


The provider is called ARP Networks, and they provide FreeBSD VPSs, along with Linux, etc.
 
So, per sko's recommendation, I disabled offloading (ifconfig eth0 -rxcsum -txcsum -lro -tso)...

[carltonfsck@ssh ~]$ ifconfig eth0

eth0: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=4c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,TXCSUM_IPV6>

[carltonfsck@ssh ~]$ sudo ifconfig eth0 -rxcsum -txcsum -lro -tso
Password:
[carltonfsck@ssh ~]$ ifconfig eth0
eth0: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=4c00b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWTSO,LINKSTATE,TXCSUM_IPV6>
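To keep those settings across reboots, my understanding is that the same flags get appended to the interface's line in /etc/rc.conf (the DHCP keyword below is just an example; keep whatever your line already contains):

```shell
# /etc/rc.conf -- re-apply the offload flags at boot
# (append to the existing ifconfig_eth0 line; DHCP is just an example)
ifconfig_eth0="DHCP -rxcsum -txcsum -lro -tso"
```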


From there, I went ahead and did some downloads that would previously bottleneck, and now everything's flying!! The download speed is dramatically faster!!

[carltonfsck@ssh ~]$ wget https://download.teamviewer.com/download/TeamViewerQS_x64.exe
--2024-09-03 06:45:10-- https://download.teamviewer.com/download/TeamViewerQS_x64.exe

Connecting to dl.teamviewer.com (dl.teamviewer.com)|104.16.63.16|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 32772472 (31M) [application/octet-stream]
Saving to: ‘TeamViewerQS_x64.exe.1’

TeamViewerQS_x64.exe.1 100%[=============================>] 31.25M 67.7MB/s in 0.5s

2024-09-03 06:45:10 (67.7 MB/s) - ‘TeamViewerQS_x64.exe.1’ saved [32772472/32772472]

[carltonfsck@ssh ~]$


I also ran a 'pkg update' and it's much faster too...

[carltonfsck@ssh ~]$ sudo pkg update -f
Updating FreeBSD repository catalogue...
Fetching meta.conf: 100% 178 B 0.2kB/s 00:01
Fetching data.pkg: 100% 7 MiB 7.3MB/s 00:01
Processing entries: 100%
FreeBSD repository update completed. 34414 packages processed.
All repositories are up to date.
[carltonfsck@ssh ~]$


Thank YOU, everyone for your assistance!! Especially 'sko'. :)
 
Errmmm isn‘t that the TSO stuff I mentioned in #2 second link?

Great you’ve got it working, I’ve been meaning to have a look at Proxmox sometime so good to have some information in advance about tweaks I might need.
 
Yes, Sir. It was. But I was hesitant at first with making the changes, so I wanted to see what other alternatives there were, which didn't work, unfortunately. sko basically confirmed what you provided in the link, so I decided to cave in and give it a go. But I do thank YOU as well! Appreciate you, Sir.
 