Hello,
I am having a problem where my code will occasionally get to a point where all new TCP socket connections are sent a FIN flag immediately after the TCP handshake. I am not exactly sure what is causing this, as I have wrappers to catch errors on all socket operations. As an experiment, to determine whether some limit was being hit, I took out the close() calls for the socket fds. That did make the socket resets go away, but I eventually hit a point where no new connections were completed at all. At that point, I grep'd for some sysctl values, which are as follows:
Code:
[jon@] /usr/lib# sysctl -a | grep sock
kern.ipc.maxsockbuf: 2097152
kern.ipc.sockbuf_waste_factor: 8
kern.ipc.maxsockets: 25600
kern.ipc.numopensockets: 1363
net.inet.ip.mcast.maxsocksrc: 128
net.inet.tcp.syncache.rst_on_sock_fail: 1
net.inet6.ip6.mcast.maxsocksrc: 128
security.jail.param.allow.socket_af: 0
security.jail.param.allow.raw_sockets: 0
security.jail.allow_raw_sockets: 0
security.jail.socket_unixiproute_only: 1
[jon@] /usr/lib# sysctl -a | grep thread
kern.cam.ctl.block.num_threads: 14
kern.geom.eli.threads: 0
kern.threads.max_threads_hits: 0
kern.threads.max_threads_per_proc: 2000
vm.stats.vm.v_kthreadpages: 0
vm.stats.vm.v_kthreads: 24
vfs.nfsd.minthreads: 1
vfs.nfsd.maxthreads: 1
vfs.nfsd.threads: 0
net.isr.numthreads: 1
net.isr.bindthreads: 0
net.isr.maxthreads: 1
[jon@] /usr/lib# sysctl -a | grep file
kern.maxfiles: 49312
kern.bootfile: /boot/kernel/kernel
kern.maxfilesperproc: 18000
kern.openfiles: 7758
kern.corefile: %N.core
kern.filedelay: 30
debug.softdep.jwait_filepage: 1255
debug.softdep.write.freefile: 0
debug.softdep.current.freefile: 0
debug.softdep.total.freefile: 116205
hw.snd.latency_profile: 1
p1003_1b.mapped_files: 200112
[jon@] /usr/lib# sysctl -a | grep proc
kern.maxproc: 10000
kern.maxfilesperproc: 18000
kern.maxprocperuid: 9000
kern.shutdown.kproc_shutdown_wait: 60
kern.sigqueue.max_pending_per_proc: 128
kern.threads.max_threads_per_proc: 2000
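For reference, the accept/handle/close pattern in my code is roughly like the sketch below (simplified, with placeholder names and port, not my actual source). The close() at the end of the handler is the one I removed for the experiment, and the comment in the accept loop is my current guess at where a descriptor limit would show up:

Code:
/*
 * Simplified sketch of the accept/handle/close pattern I mean -- not my
 * actual source; names and port are placeholders.
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void
handle_client(int fd)
{
	char buf[512];

	/* ... real work goes here ... */
	if (read(fd, buf, sizeof(buf)) < 0)
		fprintf(stderr, "read: %s\n", strerror(errno));

	/*
	 * This is the close() I removed for the experiment.  Without it the
	 * process keeps one descriptor per connection forever.
	 */
	if (close(fd) < 0)
		fprintf(stderr, "close: %s\n", strerror(errno));
}

int
main(void)
{
	struct sockaddr_in sin;
	int lfd, cfd;

	if ((lfd = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
		perror("socket");
		return (1);
	}

	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(12345);	/* placeholder port */
	sin.sin_addr.s_addr = htonl(INADDR_ANY);

	if (bind(lfd, (struct sockaddr *)&sin, sizeof(sin)) < 0 ||
	    listen(lfd, 128) < 0) {
		perror("bind/listen");
		return (1);
	}

	for (;;) {
		if ((cfd = accept(lfd, NULL, NULL)) < 0) {
			/*
			 * EMFILE/ENFILE here would mean the per-process or
			 * system-wide descriptor limit was hit.  My guess is
			 * this is also the situation that
			 * net.inet.tcp.syncache.rst_on_sock_fail covers, but
			 * I'm not sure.
			 */
			fprintf(stderr, "accept: %s\n", strerror(errno));
			continue;
		}
		handle_client(cfd);
	}
}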
Does anyone see anything I'm missing about the number of files/sockets I have open compared to some of the max values here? I thought maybe it was a memory issue, but when I ran top, I had plenty. Is there anything else I can check as to what might be causing the sockets to send the reset messages? It's kind of hard to debug, because there's no segfault or anything else to trip my debugger.
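One thing I'm considering is logging descriptor usage from inside the process, to see whether I'm creeping up on kern.maxfilesperproc or the RLIMIT_NOFILE soft limit. Something along these lines (rough, untested sketch):

Code:
/*
 * Rough, untested sketch: count currently open descriptors and compare
 * against the RLIMIT_NOFILE soft limit.
 */
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void
log_fd_usage(void)
{
	struct rlimit rl;
	int fd, maxfd, nopen = 0;

	if (getrlimit(RLIMIT_NOFILE, &rl) < 0) {
		perror("getrlimit");
		return;
	}

	maxfd = getdtablesize();
	for (fd = 0; fd < maxfd; fd++) {
		/* F_GETFD only succeeds on descriptors that are open. */
		if (fcntl(fd, F_GETFD) != -1)
			nopen++;
	}

	fprintf(stderr, "open fds: %d, soft limit: %ld\n",
	    nopen, (long)rl.rlim_cur);
}

int
main(void)
{
	log_fd_usage();
	return (0);
}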