mbuf leakage on FreeBSD 7.3

I have two servers running FreeBSD 7.3-RELEASE (i386); both run named (from base), qmail (from ports), and pf (from base). They are quad-core Intel servers with 4GB of RAM and Intel (em) gigabit Ethernet NICs.

Both exhibit the same symptom: 'mbufs in use' slowly rises over a period of about 30 days until the server eventually panics with a 'kmem_map too small' message.

At 10 days of uptime they look like this:
Code:
server# netstat -m
27654/3201/30855 mbufs in use (current/cache/total)
256/1796/2052/25600 mbuf clusters in use (current/cache/total/max)
256/1664 mbuf+clusters out of packet secondary zone in use (current/cache)
0/231/231/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
7425K/5316K/12741K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/6/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

At the time of the crash it looked like this:
Code:
server# netstat -m -M vmcore.0

857844/2890/860734 mbufs in use (current/cache/total)
317/2139/2456/25600 mbuf clusters in use (current/cache/total/max)
350/1603 mbuf+clusters out of packet secondary zone in use (current/cache)
0/263/263/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/19200 9k jumbo clusters in use (current/cache/total/max)
0/0/0/12800 16k jumbo clusters in use (current/cache/total/max)
215098K/6052K/221151K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

Based on this output it does not appear that raising the mbuf cluster limits would do any good; the cluster zones are nowhere near their maximums. Something in the network stack is simply soaking up plain mbufs until the server runs out of kmem.
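To see which zone is actually growing (plain mbufs vs. clusters), the UMA zone counters can be watched directly. This is just a generic diagnostic, not output from the boxes above:
Code:
server# vmstat -z | head -1
server# vmstat -z | grep -i mbuf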

Any suggestions or ideas on how I might troubleshoot, mitigate, or fix this? Most of my searching has only turned up issues with other subsystems (e.g. NFS, ZFS).
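One low-effort idea (a sketch, assuming standard cron and a writable /var/log -- the log name and interval are made up) is to log the counters periodically and see whether the growth lines up with a particular workload or time of day:
Code:
# /etc/crontab -- sample mbuf usage every 10 minutes
*/10  *  *  *  *  root  (date; netstat -m) >> /var/log/mbuf-usage.log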
 
I tried restarting any and all network services (e.g. named, ntpd) to see if the kernel would free the memory. That did not work.
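Something that might be worth checking before and after such restarts (a general check, not tied to the vmcore) is whether any single socket is sitting on a large Recv-Q/Send-Q, since data queued in socket buffers is held in mbufs:
Code:
server# netstat -an        # look for sockets with large Recv-Q / Send-Q values
server# sockstat -46       # match suspicious sockets back to their owning processes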

I guess I'm stuck raising the kmem limit as high as is reasonable and hoping some future release of FreeBSD fixes the bug -- which sort of ticks me off, since this exact configuration ran rock solid on FreeBSD 4 for years on really crappy hardware. :\
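For the record, the knobs involved are loader tunables; a minimal sketch with placeholder values only -- on i386 the kernel address space limits how far vm.kmem_size can realistically be pushed:
Code:
# /boot/loader.conf (example values, not a recommendation)
vm.kmem_size="512M"
vm.kmem_size_max="512M"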
 
Have you seen the recent mbuf freebsd-security announcement? Maybe applying the patch and rebuilding the kernel would help?
 
jb_fvwm2 said:
Have you seen the recent mbuf freebsd-security announcement? Maybe applying the patch and rebuilding the kernel would help?

It is on my TODO list, although the SA's focus seems to be on sendfile and the loopback interface, so I don't have high expectations.
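For reference, the usual routine for an SA is to patch /usr/src and rebuild the kernel; the patch path below is a placeholder since the actual announcement isn't quoted here, and GENERIC is just an assumed kernel config:
Code:
server# cd /usr/src
server# patch < /path/to/sa-mbuf.patch        # placeholder -- use the patch named in the SA
server# make buildkernel KERNCONF=GENERIC
server# make installkernel KERNCONF=GENERIC
server# shutdown -r now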
 