Solved How to increase a loop-back listen queue size "maxqlen"?

Is there a way to increase a loop-back listen queue size ("maxqlen")?
The loop-back address and ports are related to #Odoo ERP, proxied by #Nginx.

Thanks for the help and all the best.

Code:
# netstat -Lan
Current listen queue sizes (qlen/incqlen/maxqlen)
Proto Listen                           Local Address         
tcp4  0/0/128                          127.0.0.1.8170         
tcp4  0/0/64                           127.0.0.1.8070         
tcp4  0/0/4096                         *.443                 
tcp6  0/0/4096                         *.443                 
tcp6  0/0/4096                         *.80                   
tcp4  0/0/4096                         *.80                   
tcp4  0/0/3344                         127.0.0.1.5432         
tcp6  0/0/3344                         ::1.5432               
tcp4  0/0/8192                         *.9898                 
tcp4  0/0/8192                         127.0.0.1.9999         
tcp6  0/0/8192                         ::1.9999               
tcp4  0/0/8192                         *.139                 
tcp4  0/0/8192                         *.445                 
tcp6  0/0/8192                         *.139                 
tcp6  0/0/8192                         *.445                 
tcp4  0/0/128                          *.22                   
tcp6  0/0/128                          *.22                   
unix  0/0/3344                         /var/run/postgres/.s.PGSQL.5432
unix  0/0/8192                         /var/run/pgpool/.s.PGSQL.9898
unix  0/0/8192                         /var/run/pgpool/.s.PGSQL.9999
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/winreg
unix  0/0/8192                         /var/run/samba4/ncalrpc/DEFAULT
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/srvsvc
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/netlogon
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/lsarpc
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/lsass
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/samr
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/netdfs
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/wkssvc
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/svcctl
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/ntsvcs
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/plugplay
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/eventlog
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/initshutdown
unix  0/0/8192                         /var/run/samba4/nmbd/unexpected
unix  0/0/128                          /var/run/dbus/system_bus_socket
unix  0/0/4                            /var/run/devd.pipe
unix  0/0/4                            /var/run/devd.seqpacket.pipe
 
Why do you think it's necessary to make them the same?
Not specifically to make them the same,
but to have control over the loop-back 127.0.0.1.(8170|8070) listen queue sizes,
and after that run some tests and benchmarks to see the impact on the load and responsiveness of the web server.
 
It's determined by the application when it makes a call to listen(2). Therefore, look at the documentation for the applications listening on 127.0.0.1:8170/8070 to see if there is a configuration item for that.

Having said that, if you are not experiencing exhaustion of those queues, increasing the size is wasting resources and won't gain you anything.
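
To illustrate the listen(2) point, here is a minimal C sketch (an illustration only, not Odoo's actual code; the loop-back port 8070 and the backlog of 64 are placeholders) of where the maxqlen value reported by netstat -Lan comes from: it is the backlog the owning application requests from listen(2), capped by the kernel limit (kern.ipc.soacceptqueue, alias kern.ipc.somaxconn, on FreeBSD).

Code:
/* Minimal sketch: the "maxqlen" that netstat -Lan reports for a socket is
 * the backlog the owning application passed to listen(2), capped by the
 * kernel limit (kern.ipc.soacceptqueue on FreeBSD).
 * The port and backlog below are placeholders. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <err.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s == -1)
        err(1, "socket");

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);   /* 127.0.0.1 */
    sin.sin_port = htons(8070);                     /* placeholder port */

    if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
        err(1, "bind");

    /* The second argument is the listen queue size ("maxqlen"). */
    if (listen(s, 64) == -1)
        err(1, "listen");

    /* ... accept(2) loop would follow here ... */
    return 0;
}

In other words, for the 127.0.0.1.8170/8070 sockets the value would have to be changed (if at all) in the configuration of the application that owns them, not system-wide.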
 
As I said, the loop-back address and ports {127.0.0.1.(8170|8070)} are related to #Odoo ERP, proxied by #Nginx.
For me, the goal of increasing the listen queue sizes is to run some tests and benchmarks and see the impact on the load and responsiveness of the web server.

But why increase the loop-back 127.0.0.1.(8170|8070) listen queue sizes, when the HTTP/HTTPS traffic (*.443 | *.80, 0/0/4096) coming in to the loop-back address is already queued by Nginx? Doing that would be terribly wrong.

The real question, then, is how to be sure that the incoming load is distributed between the worker processes.

The answer and the solution come with the Nginx reuseport listen parameter.
The reuseport parameter instructs NGINX to create an individual listening socket for each worker process. This allows the kernel to distribute incoming connections between worker processes to handle multiple packets being sent between client and server. The reuseport feature works only on Linux kernels 3.9 and higher, DragonFly BSD, and FreeBSD 12 and higher.
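
At the socket level, the mechanism looks roughly like the following C sketch (a simplification under assumptions, not nginx's actual code; the port 8080 and the backlog of 4096 are placeholders). On FreeBSD 12 and higher the load-balanced variant of the option is SO_REUSEPORT_LB: several processes can each bind and listen on the same address:port, and the kernel distributes incoming connections between their sockets, each of which has its own listen queue.

Code:
/* Sketch of what a per-worker "reuseport" listener amounts to on FreeBSD 12+.
 * Simplified illustration only; nginx's real implementation differs in detail. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>
#include <err.h>

static int open_listener(uint16_t port, int backlog)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s == -1)
        err(1, "socket");

    int on = 1;
    /* Let several processes bind the same address:port; the kernel
     * load-balances incoming connections between their sockets. */
    if (setsockopt(s, SOL_SOCKET, SO_REUSEPORT_LB, &on, sizeof(on)) == -1)
        err(1, "setsockopt(SO_REUSEPORT_LB)");

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(port);

    if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
        err(1, "bind");
    if (listen(s, backlog) == -1)   /* each such socket gets its own queue */
        err(1, "listen");
    return s;
}

int main(void)
{
    /* Every worker process ends up with its own listening socket like this
     * one, which is why the second netstat -Lan output below shows several
     * *.80 / *.443 entries: one listening socket per worker.
     * Port 8080 and backlog 4096 are placeholders. */
    int s = open_listener(8080, 4096);
    (void)s;
    /* ... accept(2) loop per worker ... */
    return 0;
}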
Apply reuseport in the Nginx server blocks like so:

Code:
server {
        # Nginx listening on port 80, redirecting traffic to port 443
        listen 80 default_server accept_filter=httpready backlog=4096 reuseport so_keepalive=on;
        listen [::]:80 default_server accept_filter=httpready backlog=4096 reuseport so_keepalive=on;

        .......
}

server {
        # Nginx listening on port 443 - HTTP/2
        listen [::]:443 default_server ssl http2 accept_filter=dataready backlog=4096 reuseport so_keepalive=on;
        listen 443 default_server ssl http2 accept_filter=dataready backlog=4096 reuseport so_keepalive=on;

        .......
}

Changes after applying reuseport to Nginx:
Code:
# siege -b -c 250 -r 50 -q https://odoo12ce-erp/

{
        "transactions":                         1156,
        "availability":                        49.74,
        "elapsed_time":                        39.32,
        "data_transferred":                   164.44,
        "response_time":                        3.15,
        "transaction_rate":                    29.40,
        "throughput":                           4.18,
        "concurrency":                         92.74,
        "successful_transactions":              1156,
        "failed_transactions":                  1168,
        "longest_transaction":                 12.24,
        "shortest_transaction":                 0.00
}

# wrk -t8 -c250 -d30s -H"User-Agent: wrk" https://odoo12ce-erp/
Running 30s test @ https://odoo12ce-erp/
  8 threads and 250 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   292.39ms  358.77ms   1.97s    85.46%
    Req/Sec    46.95     32.70   175.00     67.92%
  10370 requests in 30.19s, 11.91MB read
  Socket errors: connect 0, read 0, write 0, timeout 376
  Non-2xx or 3xx responses: 10118
Requests/sec:    343.46
Transfer/sec:    403.81KB

# netstat -Lan
Current listen queue sizes (qlen/incqlen/maxqlen)
Proto Listen                           Local Address       
tcp4  0/0/128                          127.0.0.1.8170       
tcp4  0/0/64                           127.0.0.1.8070       
tcp4  0/0/3344                         127.0.0.1.5432       
tcp6  0/0/3344                         ::1.5432             
tcp4  0/0/8192                         *.9898               
tcp4  0/0/4096                         *.443               
tcp4  0/0/4096                         *.443               
tcp4  0/0/4096                         *.443               
tcp6  0/0/4096                         *.443               
tcp6  0/0/4096                         *.443               
tcp6  0/0/4096                         *.443               
tcp6  0/0/4096                         *.80                 
tcp6  0/0/4096                         *.80                 
tcp6  0/0/4096                         *.80                 
tcp4  0/0/4096                         *.80                 
tcp4  0/0/4096                         *.80                 
tcp4  0/0/4096                         *.80                 
tcp4  0/0/4096                         *.443               
tcp6  0/0/4096                         *.443               
tcp6  0/0/4096                         *.80                 
tcp4  0/0/4096                         *.80                 
tcp4  0/0/8192                         127.0.0.1.9999       
tcp6  0/0/8192                         ::1.9999             
tcp4  0/0/8192                         *.139               
tcp4  0/0/8192                         *.445               
tcp6  0/0/8192                         *.139               
tcp6  0/0/8192                         *.445               
tcp4  0/0/128                          *.22                 
tcp6  0/0/128                          *.22                 
unix  0/0/3344                         /var/run/postgres/.s.PGSQL.5432
unix  0/0/8192                         /var/run/pgpool/.s.PGSQL.9898
unix  0/0/8192                         /var/run/pgpool/.s.PGSQL.9999
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/winreg
unix  0/0/8192                         /var/run/samba4/ncalrpc/DEFAULT
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/srvsvc
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/netlogon
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/lsarpc
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/lsass
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/samr
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/netdfs
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/wkssvc
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/svcctl
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/ntsvcs
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/plugplay
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/eventlog
unix  0/0/8192                         /var/run/samba4/ncalrpc/np/initshutdown
unix  0/0/8192                         /var/run/samba4/nmbd/unexpected
unix  0/0/128                          /var/run/dbus/system_bus_socket
unix  0/0/4                            /var/run/devd.pipe
unix  0/0/4                            /var/run/devd.seqpacket.pipe

Finally, after some tests and benchmarks with "siege" and "wrk" (HTTP benchmarking tools), I get a good distribution of the workload between the Nginx worker processes, with FreeBSD doing a fine job of scheduling.
Going from an average thread latency of ~650 ms down to ~300 ms, on a bottlenecked HDD from 2012, is not bad.

I hope this time I'm not doing it wrong.
Thanks and all the best to everyone helping and advising the community.
 
