UFS 13.3 UFS subdirectory (link) maximum increased to 65530

Might be of interest to someone else - this seems to be a change in 13.x that didn't make the release notes (probably because it's not something a lot of people will be concerned about!).

I've got some old code (that I will improve one day) that creates one subdirectory per entity*, never dreaming that one day there would be 32K+ of these entities, so every now and then I hit this limit.


So every couple of years I purge older entities to get myself some room for new growth.
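For anyone curious what hitting the limit actually looks like: mkdir(2) fails with EMLINK ("Too many links") once the parent directory's link count is exhausted. A minimal sketch of handling that (not my actual code - the function name and structure are just for illustration):

```python
import errno
import os
import tempfile

def make_entity_dir(base: str, name: str) -> bool:
    """Create one subdirectory per entity; return False when the parent
    directory has run out of links (EMLINK, "Too many links")."""
    try:
        os.mkdir(os.path.join(base, name))
        return True
    except OSError as e:
        if e.errno == errno.EMLINK:
            return False  # hit the per-directory link limit
        raise

# Quick demonstration in a scratch directory:
with tempfile.TemporaryDirectory() as tmp:
    print(make_entity_dir(tmp, "entity0001"))  # True on any healthy filesystem
```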

I'm copying this over to a new ZFS-based system, and I was wondering what the limit is on ZFS. I knocked up a quick script to create subdirectories; it stopped at 65495, and I'd thought ZFS wasn't limited like that.

Much confusion followed, but then I realised I was running my test on UFS - and UFS had a limit of 32K subdirectories, didn't it?! :-/
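The quick script was along these lines (a Python re-creation, not the original; the 70000 ceiling is just an arbitrary stop above the new limit):

```python
import errno
import os
import tempfile

def count_subdir_limit(parent: str, ceiling: int = 70000) -> int:
    """Create numbered subdirectories under `parent` until mkdir(2)
    fails with EMLINK, or `ceiling` is reached; return the count."""
    created = 0
    for i in range(ceiling):
        try:
            os.mkdir(os.path.join(parent, f"d{i:05d}"))
        except OSError as e:
            if e.errno == errno.EMLINK:
                break  # parent directory's link count is exhausted
            raise
        created += 1
    return created

# On a fresh directory on UFS 13.3 this should get close to 65530
# (each subdirectory's ".." entry counts against the parent's links,
# as do the parent's own entries, so it falls slightly short).
with tempfile.TemporaryDirectory() as tmp:
    print(count_subdir_limit(tmp, ceiling=100))  # 100 here: ceiling reached first
```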

Bit of rummaging around e.g. https://github.com/freebsd/freebsd-src/tree/main/sys/ufs/ufs

And a couple of the files (dinode.h and inode.h) have this comment:

Increase UFS/FFS maximum link count from 32767 to 65530.

Tested on 13.2 and 13.3, and the limit really has almost doubled.

I had thought this sort of change wasn't possible because it would entail too-intrusive changes, but it seems to have been worked out.

More info: https://github.com/freebsd/freebsd-src/commit/35a301555bff2ac27a727c10641b7efb3f162988

* Yes, yes, I know it was a bad design, but it's been working for the last fifteen years and now I can leave it for another fifteen!

EDIT - and the commit message shows that poudriere has the same design (which explains why the change was made):

This limit has been recently hit by the poudriere build system when doing a ports build as it needs one directory per port and the number of ports recently passed 32767.
 
I do wonder what the older kernels do when the negative number becomes 0 and how that impacts old & new kernels handling the data at that point.
 
The commit message (link above) says this:

With this change, older kernels will generally work with the bigger counts. While it will not itself allow the link count to exceed 32767, it will have no problem working with inodes that have a link count greater than 32767. Since it tests that i_nlink <= UFS_LINK_MAX, counts that are bigger than 32767 will appear negative, so will still pass the test. Of course, if they ever drop below 32767, they will no longer be able to exceed 32767. The one issue is if the link count ever exceeds 65535 then it will wrap to zero and the older kernel will be none the wiser. But this corner case is likely to be very rare since these kernels and the applications running on them do not expect to be able to get link counts over 32767. And over time, the use of new filesystems on older kernels will become rarer and rarer.
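The "appears negative" part is just 16-bit two's complement, and can be illustrated in userland (a minimal sketch - i_nlink and UFS_LINK_MAX are the kernel names from the commit, the helper function is hypothetical):

```python
import struct

OLD_UFS_LINK_MAX = 32767  # the pre-change UFS_LINK_MAX quoted in the commit

def as_old_kernel_sees(on_disk_count: int) -> int:
    """Reinterpret a 16-bit on-disk link count as the signed value
    an older kernel would read it as."""
    return struct.unpack("<h", struct.pack("<H", on_disk_count & 0xFFFF))[0]

print(as_old_kernel_sees(32768))                      # -32768: appears negative...
print(as_old_kernel_sees(32768) <= OLD_UFS_LINK_MAX)  # True: ...so passes the old check
print(as_old_kernel_sees(65530))                      # -6: the new maximum, still negative
print(as_old_kernel_sees(65536))                      # 0: the wrap-to-zero corner case
```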


I really don't know enough to say whether that's reasonable or not, but I should be off my 13.2 kernels over the course of this year anyway.

Haven't checked whether this is in 14.0 and 14.1 - might give that a go.
 