What price do you pay when the system runs out of vnodes?

I'm trying to understand kern.maxvnodes. The Tuning Kernel Limits section of the Handbook says: "In some cases where disk I/O is a bottleneck and the system is running out of vnodes, this setting needs to be increased. The amount of inactive and free RAM will need to be taken into account." Indeed, on my system, which has only 512 MB of RAM and uses a microSD card as its disk, this is exactly what happens.

That is, when the system is freshly rebooted, the vnode count is well below the default limit:

Code:
# sysctl vfs.numvnodes kern.maxvnodes
vfs.numvnodes: 5051
kern.maxvnodes: 16493

...and it creeps up throughout the day, as my fairly light load of mail, web, and database traffic is handled. Eventually it tops out at the kern.maxvnodes limit and stays there, within 1 or 2.

However, I don't really know, and would like to understand: what is the penalty for maxing out the vnodes? And how high can I reasonably raise the limit? (I raised it in increments up to 48000 and am still maxing out.)
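For reference, raising it at runtime and persisting it across reboots looks like this (48000 is just the value from my experiment, not a recommendation):

Code:
# kern.maxvnodes is a runtime-writable sysctl; takes effect immediately
sysctl kern.maxvnodes=48000
# persist the setting across reboots
echo 'kern.maxvnodes=48000' >> /etc/sysctl.conf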
 
That's only half the story. There is a third value: vfs.freevnodes.
So if you have

kern.maxvnodes - vfs.numvnodes + vfs.freevnodes < vfs.wantfreevnodes

then there is probably a bottleneck.
But then this is a question of memory distribution, and if you don't have ample free memory to distribute, the whole optimization is probably quite pointless.
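A quick shell sketch of that check, just reading the sysctls and evaluating the inequality above:

Code:
#!/bin/sh
# Evaluate: (kern.maxvnodes - vfs.numvnodes) + vfs.freevnodes < vfs.wantfreevnodes
max=$(/sbin/sysctl -n kern.maxvnodes)
num=$(/sbin/sysctl -n vfs.numvnodes)
free=$(/sbin/sysctl -n vfs.freevnodes)
want=$(/sbin/sysctl -n vfs.wantfreevnodes)
headroom=$((max - num + free))
if [ "$headroom" -lt "$want" ]; then
    echo "headroom $headroom < wantfreevnodes $want: probably a bottleneck"
else
    echo "headroom $headroom >= wantfreevnodes $want: fine"
fi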

BTW: one can probably tune this for a server. It is rather useless to tune it on a desktop. After 20 minutes of operation the counters look like this, and increasing the limit wouldn't change that:
Code:
pmc@disp:515:1~$ /sbin/sysctl -a | grep vnodes
kern.maxvnodes: 200789
vfs.numvnodes: 200789
vfs.freevnodes: 166052
vfs.wantfreevnodes: 50197
vfs.vnodes_created: 439337
On a server we have hundreds of thousands of similar operations, so if we manage to optimize these to save 10 µs each, that pays off. But on a desktop we have a highly individualized load pattern, and the first bigger operation (like running find over the filesystem) will drive such settings to their limit. That doesn't matter, though, because it's a one-time usage pattern.
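If you want to watch how these counters evolve after boot, a simple polling loop is enough (the sample interval is arbitrary):

Code:
# print the vnode counters once a minute
while :; do
    date '+%H:%M:%S'
    /sbin/sysctl vfs.numvnodes vfs.freevnodes vfs.vnodes_created
    sleep 60
done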
 
I have observed that after booting a system, vfs.numvnodes climbs close to kern.maxvnodes and more or less stays there. It is also interesting that vfs.freevnodes follows vfs.numvnodes. The Grafana plots from two systems are attached. On one system, we can clearly see the effect of the daily cron job (at 03:00) running find over the entire filesystem.

The first system has an adjusted kern.maxvnodes (32M) and the second one has the default setting (4M). I was told that kern.maxvnodes was increased on the first system because vfs.numvnodes was close to kern.maxvnodes. But after increasing kern.maxvnodes, vfs.numvnodes has simply grown to fill the gap.
 

Attachments

  • Screenshot 2021-12-20 at 08.29.49.png
  • Screenshot 2021-12-20 at 08.31.35.png
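To see how much a filesystem walk like that cron job churns vnodes, one can compare vfs.vnodes_created before and after. A rough sketch (-xdev keeps find on one filesystem):

Code:
# count how many vnodes a full filesystem walk creates
before=$(/sbin/sysctl -n vfs.vnodes_created)
find / -xdev > /dev/null 2>&1
after=$(/sbin/sysctl -n vfs.vnodes_created)
echo "vnodes created by the walk: $((after - before))"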
There is a difference between active vnodes and free vnodes.

Free vnodes can be reclaimed by the kernel when a new one is needed. But the VM system works a bit differently from Linux here; quite a lot differently, actually. It keeps the known content of a file mapped to its vnode, so that after closing the file the content stays around. Opening the same file again immediately makes the known content available.

The problems only start when no free vnodes are left; then opening a (new) file gets difficult.
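A quick way to see that caching effect from the shell (the path is just an example; any large file on a local filesystem that isn't already cached will do):

Code:
# the first read pulls the file from disk; the second should be served
# from the pages still attached to the now-free vnode
f=/usr/lib/libc.a    # example file, pick any large local one
time cat "$f" > /dev/null
time cat "$f" > /dev/null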
 