Net autotuner

This looks like a cool utility, thank you.

My shell is a little rusty, but it does not look like the script checks the TARGET_HOST environment variable when run from a shell with the -n flag? Perhaps it is not intended to be used this way.

I tried it out by editing the file directly and setting a publicly available IP in Toronto, Canada. I am trying to tune network parameters for a VPS in Germany (Hetzner, FreeBSD 15). I am testing with iperf3; unfortunately, the suggested changes do not seem to improve the test results. I tested with a single transfer as well as 8 parallel streams (-P 8), in both directions (default and -R), against a 1 Gbps symmetric home fiber connection through TekSavvy, an ISP that resells Bell fiber. I am not looking for dynamic changes, just suggested tuning that may improve data transfer speeds, so I am running it in a shell and applying the suggested changes manually.

The suggested changes, for reference:
Bash:
% cat TEST.txt
[DRY-RUN] net.isr.defaultqlimit=2048                  -> /boot/loader.conf (requires reboot)
[DRY-RUN] net.isr.maxqlimit=16384                     -> /boot/loader.conf (requires reboot)
[DRY-RUN] net.isr.maxthreads=4                        -> /boot/loader.conf (requires reboot)
[DRY-RUN] net.isr.bindthreads=1                       -> /boot/loader.conf (requires reboot)
[DRY-RUN] net.inet.tcp.recvbuf_auto=1                 -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.sendbuf_auto=1                 -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.path_mtu_discovery=1           -> /etc/sysctl.conf
[DRY-RUN] kern.ipc.nmbclusters=262144                 -> /etc/sysctl.conf
[DRY-RUN] kern.ipc.nmbjumbop=262144                   -> /etc/sysctl.conf
[DRY-RUN] kern.ipc.nmbjumbo9=65536                    -> /etc/sysctl.conf
[DRY-RUN] kern.ipc.nmbjumbo16=32768                   -> /etc/sysctl.conf
[DRY-RUN] net.inet.udp.recvspace=65536                -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.functions_default=freebsd      -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.rexmit_min=100                 -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.rexmit_slop=400                -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.rexmit_initial=2000            -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.cc.algorithm=cubic             -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.recvspace=262144               -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.sendspace=262144               -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.recvbuf_max=262144             -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.sendbuf_max=262144             -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.ecn.enable=0                   -> /etc/sysctl.conf
[DRY-RUN] kern.ipc.maxsockbuf=2097152                 -> /etc/sysctl.conf
[DRY-RUN] net.inet.ip.intr_queue_maxlen=100           -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.always_keepalive=0             -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.delayed_ack=0                  -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.delacktime=40                  -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.sack.enable=0                  -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.delayed_ack=1                  -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.recvbuf_max=2097152            -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.sendbuf_max=2097152            -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.recvspace=131072               -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.sendspace=131072               -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.initcwnd_segments=10           -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.abc_l_var=4                    -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.reass.maxqueuelen=256          -> /etc/sysctl.conf

Edit:
Thought I would include the guide that I have relied on in the past: https://calomel.org/freebsd_network_tuning.html
That guide was developed against older FreeBSD releases, however.
 

A valid example of the output is the following:


Starting network autotuner (host=8.8.8.8, iface=wlan0, interval=60s)
Simulation mode active (dry-run): no changes applied, only logged.
[DRY-RUN] net.inet.tcp.recvbuf_auto=1 -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.sendbuf_auto=1 -> /etc/sysctl.conf
[DRY-RUN] net.inet.tcp.path_mtu_discovery=1 -> /etc/sysctl.conf
[DRY-RUN] net.isr.defaultqlimit=2048 -> /boot/loader.conf (requires reboot)
[DRY-RUN] net.isr.maxqlimit=16384 -> /boot/loader.conf (requires reboot)
[DRY-RUN] net.isr.maxthreads=4 -> /boot/loader.conf (requires reboot)
[DRY-RUN] net.isr.bindthreads=1 -> /boot/loader.conf (requires reboot)


The default “TARGET_HOST” value is a public DNS server, which in most cases responds to ICMP Echo Requests.
You can see the value of “TARGET_HOST” in the first line of the output: “host=8.8.8.8”.
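For reference, the usual shell idiom for letting an environment variable override a built-in default looks like this. This is only a sketch of the pattern the question is about: the variable name TARGET_HOST matches the tool, but the surrounding code is hypothetical, not the tool's actual source.

```shell
#!/bin/sh
# Fall back to a public DNS server unless the caller exported TARGET_HOST.
TARGET_HOST="${TARGET_HOST:-8.8.8.8}"

echo "Starting network autotuner (host=${TARGET_HOST}, iface=wlan0, interval=60s)"
```

With this pattern, `TARGET_HOST=203.0.113.7 ./autotuner -n` would report host=203.0.113.7, while a plain `./autotuner -n` keeps the 8.8.8.8 default.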

In test mode, no changes are applied, so you won't see any improvement. You must run it in normal mode for the changes to be permanently written and take effect, and then restart your system.

For more realistic results, consider additional network loads (such as multiple downloads and uploads using various protocols like HTTP, FTP, P2P, etc.); the type of additional network loads depends on how you intend to use the server or PC.

Performance differences on fast networks during simple downloads are not particularly significant.

That is why the idea is for the changes to be dynamic, so the kernel's network variables adapt to the network load present at any given time.

Thanks and enjoy it.
 

Ya, I had assumed that setting the environment variable would override the host used for the tests. I saw that it was using 1.1.1.1 instead, so I took a look at the code and realized my assumption was wrong.

I made the changes to the files manually before testing, and I rebooted before testing any changes to /boot/loader.conf. I let the program run for a while in dry-run mode and saw that the suggested values did not change.

I ended up testing a bunch of changes based on the calomel.org guide. It is far from a rigorous scientific test, but I tested with iperf3 first, then got an nginx server going with SSL and tried downloading a file that I had uploaded over SFTP. The single-threaded download was not terribly impressive, but aria2c with segmented downloading fetched the file very quickly.
 
The Network Autotuner was conceived as a mechanism for dynamic adjustment of kernel networking parameters in real time, particularly under high network load conditions where static sysctl values are insufficient.

Primary purpose:
• Monitor live network metrics: latency (RTT), packet loss, jitter, throughput, interrupt queue drops, out‑of‑order segments, FIN‑WAIT states, NIC speed.
• Apply immediate and persistent changes to kernel variables (sysctl, loader.conf) to optimize performance.
• Automatically adapt to changing traffic conditions without manual intervention, ensuring stability and efficient bandwidth utilization.
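As an illustration of the apply step, here is a minimal sketch of how a tunable might be logged in dry-run mode versus applied live. The function name and DRY_RUN flag are assumptions for the sketch; only the [DRY-RUN] output format comes from the tool's actual output shown earlier in the thread.

```shell
#!/bin/sh
# Sketch: apply a sysctl immediately, or just log it when DRY_RUN=1.
# The [DRY-RUN] line mirrors the tool's output format shown above.
DRY_RUN=1

apply_sysctl() {
    key=$1; value=$2; dest=$3
    if [ "$DRY_RUN" -eq 1 ]; then
        printf '[DRY-RUN] %s=%s -> %s\n' "$key" "$value" "$dest"
    else
        sysctl "${key}=${value}"                      # take effect now
        printf '%s=%s\n' "$key" "$value" >> "$dest"   # persist across reboots
    fi
}

apply_sysctl net.inet.tcp.recvbuf_auto 1 /etc/sysctl.conf
```

Note that loader.conf tunables cannot take effect via sysctl at runtime, which is why the tool flags those lines with "(requires reboot)".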

Key components:
• Dynamic TCP stack selection:
◦ FreeBSD -> default stack, robust under normal conditions.
◦ BBR -> optimized for high capacity and low latency.
◦ RACK -> effective in networks with reordering or packet loss.
• Congestion control algorithms available:
◦ Cubic -> modern standard, balanced performance.
◦ CHD / HD -> resilient under moderate loss.
◦ HTCP -> suited for high‑capacity, long‑RTT links.
◦ DCTCP -> efficient in ECN‑enabled environments.
◦ CDG -> designed for high jitter scenarios.
◦ Vegas -> excellent for low latency and zero loss.
• Dynamic buffer and queue tuning:
◦ Adjusts recvspace, sendspace, recvbuf_max, sendbuf_max based on BDP (Bandwidth‑Delay Product).
◦ Adapts intr_queue_maxlen when drops are detected.
◦ Tunes initcwnd_segments according to link stability.
◦ Enables/disables TSO, SACK, delayed ACK, keepalive depending on jitter and loss.
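The BDP-based buffer sizing mentioned above can be sketched with a quick calculation. The numbers here are illustrative (a 1 Gbps link and a roughly 100 ms Toronto-to-Germany RTT, matching the scenario in this thread); the real tool would measure bandwidth and RTT itself.

```shell
#!/bin/sh
# Sketch: size TCP buffers from the Bandwidth-Delay Product.
# BDP (bytes) = bandwidth (bits/s) * RTT (s) / 8
BANDWIDTH_BPS=1000000000   # 1 Gbps link (illustrative)
RTT_MS=100                 # ~100 ms transatlantic RTT (illustrative)

BDP_BYTES=$(( BANDWIDTH_BPS * RTT_MS / 1000 / 8 ))
echo "BDP: ${BDP_BYTES} bytes"   # 12500000 bytes, i.e. ~12 MB
```

Compare with the kern.ipc.maxsockbuf=2097152 line in the dry-run output above: a 2 MB socket buffer cap on a 100 ms path limits a single TCP stream to roughly 2 MB / 0.1 s = 20 MB/s (~160 Mbps), well below line rate, which is one plausible reason a single iperf3 stream underperforms while 8 parallel streams do better.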

In summary:
The Network Autotuner is not a static configuration script, but an adaptive system that:
• Continuously observes real‑time network conditions.
• Selects the most appropriate TCP stack and congestion control algorithm.
• Adjusts buffers, queues, and critical kernel parameters to maintain optimal performance under heavy load.

Essentially, it acts as an automatic orchestrator of networking tunables, ensuring the system remains efficient and stable even in adverse traffic scenarios.
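A toy decision function gives the flavor of the selection logic described above. The thresholds and the function name are invented for illustration; only the condition-to-algorithm mapping (CDG for high jitter, CHD under moderate loss, HTCP for long‑RTT links, Cubic as the balanced default) follows the component list above.

```shell
#!/bin/sh
# Sketch: pick a congestion control algorithm from measured conditions.
# Thresholds are illustrative, not the tool's actual values.
pick_cc() {
    loss_pct=$1; jitter_ms=$2; rtt_ms=$3
    if [ "$jitter_ms" -gt 20 ]; then
        echo cdg      # designed for high-jitter scenarios
    elif [ "$loss_pct" -gt 1 ]; then
        echo chd      # resilient under moderate loss
    elif [ "$rtt_ms" -gt 100 ]; then
        echo htcp     # high-capacity, long-RTT links
    else
        echo cubic    # balanced default
    fi
}

pick_cc 0 5 30    # prints "cubic"
```

The chosen name would then be fed into net.inet.tcp.cc.algorithm, as seen in the dry-run output earlier in the thread.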
 
Thanks for the excellent write-up, it is insightful and I am sure it will be of benefit to others to find this thread.
 
New release - v2.2 — 2026-05-09

- Dynamic PF optimization mode: automatically switches between conservative, normal, and aggressive profiles.
- Dynamic PF scrub rules: adjusted according to network conditions.
- Initial backup check for pf.conf: creates a single persistent backup (pf.conf.bak) only if none exists.
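The backup-once behavior in the changelog boils down to a one-liner. This is a sketch of the pattern, not the release's actual code; it is demonstrated here on a temp copy rather than the real /etc/pf.conf.

```shell
#!/bin/sh
# Sketch: create a single persistent backup only if none exists yet.
# Demonstrated on a temp copy; the real tool targets pf.conf itself.
workdir=$(mktemp -d)
printf 'block drop all\n' > "$workdir/pf.conf"

PF_CONF="$workdir/pf.conf"
BACKUP="${PF_CONF}.bak"

# First run creates the backup; later runs leave the existing backup untouched.
[ -f "$BACKUP" ] || cp "$PF_CONF" "$BACKUP"
```

Because the copy is guarded by the existence test, subsequent runs never overwrite the original backup, so pf.conf.bak always preserves the pre-autotuner ruleset.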

Try FBSD-Net-Autotuner
 