Can't connect with SSH after changing net.inet.udp.recvspace

When increasing net.inet.udp.recvspace on a server (FreeBSD 13.2), I noticed I can no longer SSH into that server if I set the value to 4 MB.
The current SSH session is kept alive, but if I initiate a second, simultaneous SSH connection, I get the following error message:
Code:
kex_exchange_identification: Connection closed by remote host
Connection closed by 172.31.29.181 port 22
Also, if I close the current SSH session, I am locked out of the server.
In /var/log/messages, I notice the following:
Code:
Jan 11 16:57:23 server-1 sshd[14120]: fatal: bad addr or host: <NULL> (Name does not resolve)
If I decrease net.inet.udp.recvspace back to 1 MB, SSH works as usual. Does anyone know why this is happening?

The command I used to increase/decrease udp socket buffer space:
sysctl net.inet.udp.recvspace=value
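In concrete terms (taking 1 MB = 1048576 bytes; note the change is runtime-only unless the line is also added to /etc/sysctl.conf):
Code:
sysctl net.inet.udp.recvspace=4194304   # 4 MB: problem appears
sysctl net.inet.udp.recvspace=1048576   # 1 MB: SSH works again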
Server specs:
Code:
FreeBSD server-1 13.2-RELEASE-p8 FreeBSD 13.2-RELEASE-p8 GENERIC amd64
 
As written, this sounds as though the problem only starts when you've increased net.inet.udp.recvspace to 4 MB, and is only alleviated when you return it to its starting point of 1 MB. Can you report what happens when the value is set somewhere between 1 and 4 MB?

Also, the fact that sshd reports this error sounds like a clue:

fatal: bad addr or host: <NULL> (Name does not resolve)

As documented in sshd_config(5), sshd will by default try to resolve the host name of an incoming client. Maybe raising that tunable somehow breaks that lookup, and sshd's response is to refuse the connection?

If you turn off client hostname resolution in sshd_config and set that tunable back to 4 MB, are you still unable to connect?
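Concretely, that would be one directive plus a restart (sketch, assuming the stock config path):
Code:
# in /etc/ssh/sshd_config:
UseDNS no

# then:
service sshd restart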
 
I am noticing the issue once net.inet.udp.recvspace is set to anything equal to or greater than 1.86 MB.

After setting UseDNS no in sshd_config, restarting sshd, and setting net.inet.udp.recvspace back to 4 MB, I am still unable to SSH into the server.

Next, I checked the ena0 interface:
Code:
# ifconfig ena0 -v
ifconfig: socket(family 2,SOCK_DGRAM): No buffer space available
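Interestingly, the failure comes from the socket(2) call itself (family 2 is AF_INET, and SOCK_DGRAM means a plain UDP socket), before ifconfig touches the interface at all. If so, any tool that opens a UDP socket should fail the same way while the tunable is raised; something like this should reproduce it (untested sketch):
Code:
# any AF_INET datagram socket should now fail with ENOBUFS
nc -uz 127.0.0.1 53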

It looked like I had run out of network memory buffer space, so I used netstat -m to check the network memory buffers:

Code:
3984/2376/6360 mbufs in use (current/cache/total)
0/1270/1270/1004997 mbuf clusters in use (current/cache/total/max)
0/1270 mbuf+clusters out of packet secondary zone in use (current/cache)
3982/1352/5334/502498 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/148888 9k jumbo clusters in use (current/cache/total/max)
0/0/0/83749 16k jumbo clusters in use (current/cache/total/max)
16924K/8542K/25466K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 sendfile syscalls
0 sendfile syscalls completed without I/O request
0 requests for I/O initiated by sendfile
0 pages read by sendfile as part of a request
0 pages were valid at time of a sendfile request
0 pages were valid and substituted to bogus page
0 pages were requested for read ahead by applications
0 pages were read ahead by sendfile
0 times sendfile encountered an already busy page
0 requests for sfbufs denied
0 requests for sfbufs delayed
As far as I can tell, it doesn't look like network memory buffers are close to full.
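So the next thing I checked was the global per-socket buffer cap (2097152, i.e. 2 MB, should be the stock default on 13.x):
Code:
sysctl kern.ipc.maxsockbuf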

I also found that if I increase kern.ipc.maxsockbuf to 3 MB, I am able to raise net.inet.udp.recvspace to 2 MB and SSH into the server. Apparently kern.ipc.maxsockbuf must be set somewhat higher than net.inet.udp.recvspace, which makes sense, since it is the upper bound on any socket buffer reservation.
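That would also explain the 1.86 MB threshold: if I'm reading sys/kern/uipc_sockbuf.c correctly, the kernel caps any socket buffer reservation at sb_max_adj = maxsockbuf * MCLBYTES / (MSIZE + MCLBYTES), and with the stock 2 MB kern.ipc.maxsockbuf and the amd64 defaults (MSIZE=256, MCLBYTES=2048) that comes out to almost exactly the threshold I measured:
Code:
# sb_max_adj = sb_max * MCLBYTES / (MSIZE + MCLBYTES), here with sb_max = 2 MB
echo $((2097152 * 2048 / (256 + 2048)))   # prints 1864135, i.e. ~1.86 MB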
 