Side effects from reducing kern.maxvnodes?

I am running FreeBSD 10.4 on an Atmel ARM9 AT91SAM9G20 processor for a remote sensing project. When I run file-intensive commands like find, scp, or git, the system will eventually freeze and hang forever. When I reduced kern.maxvnodes from 4149 to 1000, git stopped crashing my system. Are there any harmful side effects that reducing maxvnodes might have on system operation?
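For reference, the workaround is just the sysctl change, applied live and persisted across reboots (1000 is the value that stopped the hangs here; I'm still tuning it):

```shell
# Apply immediately (takes effect without a reboot)
sysctl kern.maxvnodes=1000

# Persist across reboots
echo 'kern.maxvnodes=1000' >> /etc/sysctl.conf
```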

Thanks!
 
when it freezes, does it dump anything to the console? you might be running the kernel out of memory with too many open files, since lowering maxvnodes "fixes" it. running out of kernel memory is generally fatal to the continued operation of the kernel ;)

you might try rebooting one of these devices and check vfs.numvnodes after the app's gotten warmed up a bit. probably shouldn't lower the max any further than that.
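for example, something along these lines could turn the observed peak into a floor to try. the ~25% margin is just an assumption, not a FreeBSD rule of thumb:

```shell
# Hypothetical helper: given the observed vfs.numvnodes peak after warm-up,
# suggest the lowest kern.maxvnodes to try, leaving ~25% headroom above it.
suggest_maxvnodes() {
  peak=$1
  echo $(( peak + peak / 4 ))
}

# On the device you'd feed it the live counter, e.g.:
#   suggest_maxvnodes "$(sysctl -n vfs.numvnodes)"
suggest_maxvnodes 2600    # peak reported in this thread -> 3250
```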
 
when it freezes, does it dump anything to the console? you might be running the kernel out of memory with too many open files, since lowering maxvnodes "fixes" it. running out of kernel memory is generally fatal to the continued operation of the kernel ;)

you might try rebooting one of these devices and check vfs.numvnodes after the app's gotten warmed up a bit. probably shouldn't lower the max any further than that.
If I run one of the commands that crash it, such as git or find, vfs.numvnodes will spike before it crashes. I think the highest number I saw before it was unresponsive was 2600.

Unfortunately nothing is displayed when it freezes; ssh and the serial console both just hang forever.
 
yeah, that smells like kernel memory exhaustion. the risk of lowering this tunable is that your userland can't keep more than that number of files (and cached lookups) in use at once. for an embedded system with low RAM (how did you get 10.4 on ARM anyway? did you port it?) a reasonable limit should prevent these lockups.
 
yeah, that smells like kernel memory exhaustion. the risk of lowering this tunable is that your userland can't keep more than that number of files (and cached lookups) in use at once. for an embedded system with low RAM (how did you get 10.4 on ARM anyway? did you port it?) a reasonable limit should prevent these lockups.
I don't know how the previous developer set it up.

So basically the main risk from reducing maxvnodes is that I am limiting the number of concurrent tasks that can occur?
 
So what actually happens when you lower it and run the disk intensive commands?

Do the commands bail out with error messages or not?
When I reduce maxvnodes, the disk-intensive tasks run correctly and do not cause the system to hang. I haven't gotten a chance to run a full functional test on the system to see if any other behavior is impacted. It's also hard to comprehensively test all of the functionality of the system in every case, so I was trying to figure out if there are any subtle issues I should look for, or anything that might mess up data collection later down the road.


To potentially complicate things, one small issue did occur, though it's hard to say whether it's due to the disk-intensive tasks or the reduction in maxvnodes. When I reduce maxvnodes and then run a disk-intensive task on /dir1, the task runs correctly. However, when I go to shut down the system, it hangs for 90 seconds on unmounting /dir2 until the shutdown watchdog kicks in. /dir2 is a read-only nandfs. I was able to solve the issue by adding umount /dir2 at the top of /etc/rc.shutdown.
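For anyone hitting the same hang, the workaround was just that one-line addition (the mount point is specific to this system):

```shell
# Top of /etc/rc.shutdown: detach the read-only nandfs explicitly so the
# later unmount pass doesn't stall until the 90-second watchdog fires.
umount /dir2 2>/dev/null || true
```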
 
vnodes are the kernel-side data structures that back open files and cached filesystem lookups. we'd have to go look at kernel source to discover exactly what happens when they run out, but we would expect you to run into errno values of ENFILE, indicating the system file table has filled up. since your maxvnodes setting lets you run the kernel out of RAM, it locks up before that happens. that's also what we'd watch out for: weird errno values around file operations.
 
As long as you stay above the number absolutely needed (plus some delta, of course) it will only limit the file cache. FreeBSD caches on a per-file basis; that is what shows up as "inactive" memory. Try keeping top running and check whether user memory gets exhausted. If you still have enough free memory, lowering the vnode limit is not a problem. Minfree might also be a candidate for tuning. And if my memory serves me right, systat can report on vnodes in use.
 
vnodes are the kernel-side data structures that back open files and cached filesystem lookups. we'd have to go look at kernel source to discover exactly what happens when they run out, but we would expect you to run into errno values of ENFILE, indicating the system file table has filled up. since your maxvnodes setting lets you run the kernel out of RAM, it locks up before that happens. that's also what we'd watch out for: weird errno values around file operations.
Thanks, I'll watch out for that. My kern.maxfiles is currently set to 1854. At idle, kern.openfiles is 54, and it goes up to the mid-60s while running find or git. I haven't tried other tasks yet, but it seems like there is significant headroom.
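A quick sanity check of that headroom, using the numbers above (the percentage helper is just for illustration):

```shell
# Percent of the system file table still free, given current and max counts.
headroom_pct() {
  open=$1; max=$2
  echo $(( (max - open) * 100 / max ))
}

# On the device:
#   headroom_pct "$(sysctl -n kern.openfiles)" "$(sysctl -n kern.maxfiles)"
headroom_pct 65 1854    # mid-60s open files against kern.maxfiles=1854 -> 96
```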
 
we must admit to being a huge fan of systat -vmstat. sure, it's crowded and looks a little silly on modern machines with 48 cores, but it's a more holistic view of the system than top.
I am not completely sure how to interpret the memory output of systat -vmstat. Should I just be seeing if Free goes to 0?
Here is the output in idle:
Screenshot 2026-03-16 140410.png


Here is the output a few seconds into running find with maxvnodes at 1200
Screenshot 2026-03-16 140457.png


Here is the output a few seconds into running find with maxvnodes at 1000
Screenshot 2026-03-16 140718.png
 
As long as you stay above the number absolutely needed (plus delta ofc) it will limit the code cache. FreeBSD caches on a per-file base, that is what shows up as "inactive" memory. Try having top running and check if the user memory gets exhausted. If you still have enough free memory, lowering the vnodes is of no problem. Minfree might also be a candidate for tuning. And if my memory serves me right, systat can report about vnodes in use.
When running find with kern.maxvnodes=1000, free memory drops very low, to something like: Mem: 25M Active, 8368K Inact, 22M Wired, 1100K Cache, 14M Buf, 272K Free

It doesn't crash, and the inactive, cache, and buffer memory can be reallocated if necessary, correct?

For reference, at idle Free is 5000-7000K.
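Roughly, "available" memory is Free plus the pools the kernel can reclaim. With the numbers above (treating Inact, Cache, and Buf as fully reclaimable, which is an approximation):

```shell
# Normalize top(1)-style sizes to kilobytes (handles the M and K suffixes only).
to_kb() {
  case $1 in
    *M) echo $(( ${1%M} * 1024 )) ;;
    *K) echo "${1%K}" ;;
    *)  echo "$1" ;;
  esac
}

# Figures from the top output in this thread:
inact=$(to_kb 8368K); cache=$(to_kb 1100K); buf=$(to_kb 14M); free=$(to_kb 272K)
echo "free+reclaimable: $(( free + inact + cache + buf ))K"   # -> 24076K
```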
 
I am not completely sure how to interpret the memory output of systat -vmstat. Should I just be seeing if Free goes to 0?
The interesting part for vnodes is in the middle, a bit to the right. There are still free vnode data structures left, so there is no impact on cache performance from the vnode limit; that is limited by your free memory. And yes, inactive memory can be reclaimed at once.
 