"Too many open files" mystery - SOLVED

Hi Folks,

I am trying to run a piece of software that uses RocksDB as its underlying data storage mechanism on 12.2-RELEASE-p3 amd64, and the binary crashes when trying to open more than 32768 files, with this RocksDB error message:
Code:
[Error : 0 : IO error: While opening file for sequentially read: /mnt/nvme/db/archive.2853800.index/CURRENT: Too many open files]

I know that the process dies when 32768 files are open because I monitor the sysctl kern.openfiles value.
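
For anyone who wants to watch the counter the same way, a minimal poller looks roughly like this (a sketch in C++; the one-second interval is arbitrary):

Code:
#include <sys/types.h>
#include <sys/sysctl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    for (;;) {
        int openfiles = 0;
        size_t len = sizeof(openfiles);
        // kern.openfiles is the system-wide count of open files
        if (sysctlbyname("kern.openfiles", &openfiles, &len, nullptr, 0) == 0)
            std::printf("kern.openfiles: %d\n", openfiles);
        sleep(1);
    }
}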

System-wide and per-user limits are set to the defaults and are currently:

Code:
root@chive1:/ # sysctl -a | grep files
kern.maxfiles: 8381753
kern.maxfilesperproc: 7543575

runner@chive1:/mnt/nvme % limits
Resource limits (current):
  cputime              infinity secs
  filesize             infinity kB
  datasize             33554432 kB
  stacksize              524288 kB
  coredumpsize         infinity kB
  memoryuse            infinity kB
  memorylocked               64 kB
  maxprocesses            89999
  openfiles             7543575
  sbsize               infinity bytes
  vmemoryuse           infinity kB
  pseudo-terminals     infinity
  swapuse              infinity kB
  kqueues              infinity
  umtxp                infinity

I have also created a small script that opens a large number of files, and it does not die at 32768.
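
The test was along these lines (a sketch rather than the exact script; the /tmp paths and the 40000 target are my arbitrary choices):

Code:
#include <fcntl.h>
#include <cerrno>
#include <cstdio>
#include <cstring>

int main() {
    const int kTarget = 40000; // comfortably above the 32768 where the binary dies
    char path[64];
    for (int i = 0; i < kTarget; ++i) {
        std::snprintf(path, sizeof(path), "/tmp/fd_test_%d", i);
        // descriptors are deliberately left open to drive the count up
        int fd = open(path, O_CREAT | O_RDWR, 0600);
        if (fd < 0) {
            std::printf("failed after %d files: %s\n", i, std::strerror(errno));
            return 1;
        }
    }
    std::printf("opened %d files without hitting a limit\n", kTarget);
    return 0;
}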

It all points to the binary itself, but I have spent some time poking around the source code and I see neither a hardcoded 32768 limit nor the message "Too many open files" anywhere in it. The problem also does not happen under Linux with exactly the same source code.

Does anyone have any ideas about what this could be?

Thanks
 
Problem solved; it was the binary after all. The parent binary sets the open-files limit (RLIMIT_NOFILE) to 65536, and RocksDB takes half of that as a limit for itself, which is exactly the 32768 I was hitting.
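
For anyone who hits the same thing: the limit the process actually inherited (as opposed to the login class defaults) can be checked from outside with procstat -l <pid>, or from inside with getrlimit(2), roughly:

Code:
#include <sys/types.h>
#include <sys/resource.h>
#include <cstdio>

int main() {
    struct rlimit rl;
    // RLIM_INFINITY prints as a very large number
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        std::printf("nofile: cur=%llu max=%llu\n",
                    (unsigned long long)rl.rlim_cur,
                    (unsigned long long)rl.rlim_max);
    return 0;
}

On the RocksDB side, the per-instance cap is the max_open_files option; setting it to -1 makes RocksDB keep all table files open instead of enforcing a cap, if the application exposes that option.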
 