Opening more than 32767 File Descriptors



I am trying to run software that uses RocksDB as its data storage backend. The data structures are quite large, and on Linux this software opens almost 130k files.
On FreeBSD, initialisation fails because RocksDB breaks down with a "Too many open files" (EMFILE) error at precisely the 32768th file.

After spending considerable time troubleshooting, I am sure this is not a user or kernel limit; those are set far higher. Instead, I believe the way RocksDB opens files causes FreeBSD to hit the fd > SHRT_MAX check and return the EMFILE error:

Here is the part in RocksDB code that opens files:

I am almost sure this is not a FreeBSD problem but an issue with RocksDB. Unfortunately, I am not very familiar with C development on FreeBSD and do not know how to do this properly. I would be very thankful for advice on how to open files correctly so as to avoid this limit, so that I can patch RocksDB.

Thank you!



Check the sysctl(8) kern.maxfiles and kern.maxfilesperproc

Thank you, both are at around 1 million:

kern.maxfiles: 1047311
kern.maxfilesperproc: 942579

Generally all limits are very high; this is a 128-core EPYC machine with 256 GB of RAM. Still, 32767 is the FD limit with this particular application (I have written some test Python scripts that have no trouble crossing this threshold).

I have spent some time debugging the situation, and I can see that the application bails out after performing an fdopen() call with an FD > 32767, which results in an EMFILE error according to this code:

So, what is the proper way to open more than 32767 FDs on FreeBSD?

Here is the line in RocksDB that causes the problem:

Perhaps the entire construct should use a single fopen() instead of an open() -> fdopen() chain?