I have a C/C++ program that mmaps two files of about 7.1GB each, running on FreeBSD 9.1-RELEASE-p3. The program reads about 25k pages per second at random from one mmap and reads/writes about 2.5k pages per second to the other mmap. I've noticed that the process image grows to fill its RSS limit very quickly, but it stops growing after that. This happens on both ZFS (with L2ARC) and UFS2, and valgrind indicates that it isn't a memory leak. If I comment out the lines that access the mmapped data (100% floating-point arithmetic on array elements), or if I operate on much smaller files, this doesn't happen. I therefore assume that the pages of the mmapped files the process touches are cached in its own process image, within the memory limits imposed by the OS. Does that sound about right?
Is this something that should be documented? Intuitively, you mmap a file to save memory, but if doing so actually causes the process to take up whatever remaining memory is available, that can be a problem. Before this I thought, "I can set my RSS limit to 8GB because I'll know when I run a program that needs that much memory," and I'm sure other people out there think something similar. I admit, though, that it was lazy of me to not set soft limits in my .bashrc.
Thanks!
Kevin Barry