Slow network performance compared to Linux (again)

I did switch between cubic and htcp, but since that made zero difference I was not convinced enough to try the others. It felt like no use; however, after your post I thought, okay, let's give tcp_bbr a try, and it worked. Excellent, Sir, many thanks.
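For anyone else who wants to try it, here is a minimal sketch of enabling BBR on FreeBSD. This assumes a release that ships the tcp_bbr module (older kernels need to be built with the TCPHPTS option). Note that on FreeBSD, BBR is an alternate TCP stack selected via `net.inet.tcp.functions_default`, not a congestion control algorithm selected via `net.inet.tcp.cc.algorithm` like cubic or htcp:

```shell
# Load the BBR TCP stack module
kldload tcp_bbr

# Check which TCP stacks are now available
sysctl net.inet.tcp.functions_available

# Make BBR the default stack for new connections
sysctl net.inet.tcp.functions_default=bbr
```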
Ahh, this is finally vegas improved! Interesting indeed, thanks!

cc_vegas is the only one of the older cc algorithms that is not based on loss detection. But since it is fairness-oriented, it cannot cope with other, more aggressive loss-based algorithms on the same link. (Nevertheless, it was the only algorithm that made a real difference on some of my more difficult links, and it is also recommended for satellite links.)
This now seems to redo vegas with improved control routines, as today's compute power allows.
 
By luck I found this thread via Google, and I understand there is some frustration about TCP performance.

Problem finders:
grzegorz-derebecki
ommarmol
iRobbery

Problem description:
TCP performance is low in FreeBSD VMs vs. higher TCP performance in Linux VMs.
iperf3 shows TCP 'retries' (retransmissions), which indicate packet drops along the path.
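As an illustration, the retransmission counter referred to above appears in the 'Retr' column of iperf3's output (the server address below is a documentation placeholder):

```shell
# Run a 10-second TCP throughput test against an iperf3 server;
# the 'Retr' column in the output counts TCP retransmissions
iperf3 -c 192.0.2.10 -t 10
```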

Analysis:
This points to the TCP congestion control algorithm in effect in the FreeBSD VM.
By default, both FreeBSD 14 and Ubuntu Linux up to 24.04 use TCP CUBIC.
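These defaults can be verified from the shell on both systems with read-only sysctl queries:

```shell
# FreeBSD: current and loadable congestion control algorithms
sysctl net.inet.tcp.cc.algorithm
sysctl net.inet.tcp.cc.available

# Linux: current algorithm (prints "cubic" on a stock Ubuntu kernel)
sysctl net.ipv4.tcp_congestion_control
```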

I am fairly confident in suggesting that you try a FreeBSD 15.0-CURRENT snapshot as a VM and see whether these commits help improve the performance in your case.
reference:
TCP performance of a FreeBSD VM is reported in Network throughput and CPU efficiency of FreeBSD 14.2 and Debian 10.2 in VMware - PART 2. Without changing the congestion control algorithm, it is shown that bandwidth could be raised from around 1.5 Gb/s to more than 12 Gb/s by tuning parameters.
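The article's exact parameters are not quoted here, but FreeBSD TCP tuning of this kind typically revolves around raising socket-buffer limits such as the following (the 16 MB values are illustrative assumptions, not the article's numbers):

```shell
# /etc/sysctl.conf sketch: raise socket buffer ceilings so TCP
# autotuning can grow windows large enough for multi-Gb/s flows
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
```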
 
🤝 Good to know these benchmarks were done without TCP congestion control being the deciding factor. There are definitely things unrelated to packet loss that can improve performance.

I also want to share my benchmark of FreeBSD vs. the Linux kernel, done in a different way. I know it might be off topic. ;)
 