opentracker networking issues

I have encountered some networking issues running the opentracker software (http://erdgeist.org/arts/software/opentracker/) on FreeBSD 8.0.

Networking fails - opentracker, the web server and SSH all become unresponsive, and client software returns messages like
Code:
connection reset
Code:
server unexpectedly closed network connection
etc.

This issue occurs when opentracker starts serving more than 3 million peers. I am fairly sure it is not a software issue, as I have seen the same software running with a much larger peer count.

The issue probably occurs because of some default networking settings. opentracker suggests raising these limits, which I did:
Code:
kern.ipc.somaxconn=1024
kern.ipc.nmbclusters=32768
net.inet.tcp.msl=10000
kern.maxfiles=10240
As that didn't help, I later tried raising them even further, without success:
Code:
kern.ipc.somaxconn=10240
kern.maxfiles=20480
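
For reference, a minimal sketch of how I understand such limits are applied on FreeBSD - at runtime via sysctl(8), and persistently via /etc/sysctl.conf (the values are just the ones from above, not recommendations):
Code:
# runtime - takes effect immediately
sysctl kern.ipc.somaxconn=10240
sysctl kern.ipc.nmbclusters=32768
sysctl net.inet.tcp.msl=10000
sysctl kern.maxfiles=20480

# persistent - append the same lines (without "sysctl") to /etc/sysctl.conf
kern.ipc.somaxconn=10240
kern.ipc.nmbclusters=32768
net.inet.tcp.msl=10000
kern.maxfiles=20480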

dmesg doesn't show anything except messages like
Code:
Limiting open port RST response from 262 to 200 packets/sec
I don't know if it's opentracker-related...
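
If I understand it correctly, that message comes from FreeBSD's ICMP/RST response rate limiter; the 200 packets/sec matches the default of net.inet.icmp.icmplim. Raising it (or silencing just the log message) is easy to try, though I doubt the limiter itself is the root cause:
Code:
# raise the RST/ICMP response limit from the default 200/sec
sysctl net.inet.icmp.icmplim=2000
# or keep the limit but suppress the console messages
sysctl net.inet.icmp.icmplim_output=0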

So I would like to know how to determine which limits (probably networking-related) must be raised.
Output of netstat -m at the time the issue occurred:
Code:
22629/3741/26370 mbufs in use (current/cache/total)
10566/2532/13098/32768 mbuf clusters in use (current/cache/total/max)
10566/2490 mbuf+clusters out of packet secondary zone in use (current/cache)
38/145/183/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
26941K/6579K/33520K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
1 requests for I/O initiated by sendfile
0 calls to protocol drain routines

If multiple commands are needed to diagnose this, I would like to know all of them at once, because the only way I can check is to put them in a crontab (SSH access becomes unavailable) and change the DNS entries so the server receives the full load, and that takes a while.
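
To make the question concrete, this is the kind of crontab snapshot job I have in mind - the command list is my guess at what is useful, and the script name and output path are arbitrary:
Code:
#!/bin/sh
# /usr/local/bin/netsnap.sh - run from /etc/crontab once a minute, e.g.:
# * * * * * root /usr/local/bin/netsnap.sh
OUT=/var/log/netsnap.$(date +%Y%m%d-%H%M)
{
  echo "== netstat -m =="; netstat -m
  echo "== tcp stats ==";  netstat -s -p tcp
  echo "== open files/sockets =="; sysctl kern.openfiles kern.ipc.numopensockets
  echo "== zone usage ==";  vmstat -z
} > "$OUT" 2>&1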
 
I checked /var/log/messages and there were some messages like
Code:
SYSERR(root): makeconnection: cannot create socket: No buffer space available
Which settings determine buffer space? Based on the name I would guess kern.ipc.maxsockbuf, but is that right, and is it the only one?
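
For what it's worth, these are the knobs I suspect are involved - "No buffer space available" is ENOBUFS, which as far as I can tell points at mbuf/cluster exhaustion at least as often as at socket buffer sizes. Listing them here as my guesses, not a confirmed answer:
Code:
sysctl kern.ipc.maxsockbuf                            # hard cap on a single socket's buffer
sysctl kern.ipc.nmbclusters                           # total mbuf cluster pool
sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace  # default TCP buffer sizes
sysctl net.inet.udp.recvspace                         # default UDP receive buffer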
 
I stumbled upon some opentracker-related Linux sysctl settings, but I am having trouble finding the corresponding FreeBSD settings:
Code:
net.core.rmem_max=450000
net.core.wmem_max=450000
net.core.rmem_default=450000
net.core.wmem_default=450000
net.ipv4.tcp_mem=100000 350000 450000
net.ipv4.tcp_rmem=4069 350000 450000
net.ipv4.tcp_wmem=4096 350000 450000
net.ipv4.route.flush=1
net.core.netdev_max_backlog=30000
net.netfilter.nf_conntrack_tcp_loose=0
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_max_orphans=5500
net.ipv4.netfilter.ip_conntrack_max=5000000

Do net.core.rmem_max and net.core.wmem_max correspond to net.inet.tcp.recvspace and net.inet.tcp.sendspace? And what about the others?
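
To show what I have come up with so far, here are my unconfirmed guesses at a mapping (the netfilter/conntrack entries presumably have no FreeBSD equivalent outside of pf state limits):
Code:
# net.core.rmem_max / wmem_max    -> kern.ipc.maxsockbuf (one cap for both directions?)
# net.core.rmem_default           -> net.inet.tcp.recvspace / net.inet.udp.recvspace
# net.core.wmem_default           -> net.inet.tcp.sendspace
# net.core.netdev_max_backlog     -> net.inet.ip.intr_queue_maxlen
# net.ipv4.tcp_tw_recycle/reuse   -> net.inet.tcp.msl / net.inet.tcp.maxtcptw ?
kern.ipc.maxsockbuf=450000
net.inet.tcp.recvspace=350000
net.inet.tcp.sendspace=350000
net.inet.ip.intr_queue_maxlen=30000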
 