I have a 64-bit FreeBSD 8.3 file server using ZFS with 16 GB of RAM. At the moment, vfs.zfs.arc_max is not tuned, so FreeBSD defaults it to about 1 GB less than the default vm.kmem_size of 15.4 GB. There are two pools, with 20 TB and 64 TB of usable space, each with its own cache and log devices. I recently discovered that I can stall the server (it won't even respond to ping) with long rsyncs from an NFS client. Looking at the free memory pages (sysctl vm.vmtotal), it turns out the server is running out of RAM: there is a sharp drop soon after the transfer starts. So I started digging for guidelines on reducing vfs.zfs.arc_max.
From the FreeBSD forum, what I understood initially was to set it like this: vfs.zfs.arc_max plus the RAM needed for other running applications should be less than or equal to vm.kmem_size. But when I searched a little further, I found opinions that vfs.zfs.arc_max is not an exact hard limit on the ARC size. It is more of a ballpark number: once it is crossed, FreeBSD starts a thread or two to flush data to disk until the ARC shrinks back to vfs.zfs.arc_min. In the meantime, if writes keep arriving faster than FreeBSD can flush them to disk, the ARC can grow past vfs.zfs.arc_max and potentially use up all available memory, causing a stall. Did I understand this correctly? If so, vfs.zfs.arc_max is roughly like vm.dirty_background_ratio in Linux, the point at which flushing to disk begins. In that case, to be on the safe side, vfs.zfs.arc_max should be less than half the RAM on the server, so there is less chance of the headroom between vm.kmem_size and vfs.zfs.arc_max being overrun by continuous writes.
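To make the two readings concrete, here is a small sketch of the arithmetic under each interpretation. The 16 GB of RAM, the 4 GB application estimate, and the ~1 GB gap below vm.kmem_size are the figures from this post; the "stay under half of RAM" rule is just the conservative reading described above, and the extra margin is an arbitrary choice:

```shell
# Sketch of the two sizing rules discussed above; all figures in GiB.
ram=16        # total physical RAM on this server
apps=4        # rough guess for non-ZFS applications
reserve=1     # slack between physical RAM and vm.kmem_size

# Reading 1: arc_max is a hard cap -> size it to whatever is left over.
hard_cap=$((ram - apps - reserve))

# Reading 2: arc_max is only a flush threshold -> stay under half of RAM.
soft_cap=$((ram / 2 - 2))   # 2 GiB of extra margin, an arbitrary choice

echo "hard-limit reading:      vfs.zfs.arc_max=\"${hard_cap}G\""
echo "flush-threshold reading: vfs.zfs.arc_max=\"${soft_cap}G\""
```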
I found opinions like this -
Sebulon said: With 16GB of RAM, about 15 is really available, consider what other applications you have and make a rough guess how much RAM they'd like, lets say another 4, so you take 15-4=11 should end you up with vfs.zfs.arc_max="11G". Comment out the rest and see your performance shoot through the roof.
This supports what I initially understood from the forum, and it assumes that vfs.zfs.arc_max is a hard limit that the ARC size won't cross. (http://forums.freebsd.org/showthread.php?t=38979&highlight=vfs.zfs.arc_max)
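For context, this tunable is a loader tunable on FreeBSD, so it goes into /boot/loader.conf and takes effect at the next boot. A sketch with Sebulon's suggested value (the value itself is his assumption for this box, not a settled recommendation):

```
# /boot/loader.conf -- illustrative value only
vfs.zfs.arc_max="11G"
```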
The use case here is HPC: there are 15 NFS clients doing reads and writes at the same time while lots of users run their compute jobs. If Sebulon's opinion is correct, then vfs.zfs.arc_max could be set between 10 and 11 GB. If not, I will set vfs.zfs.arc_max to 6 GB or less and hope that the 7-8 GB of headroom won't get overrun. With not much load, I have just over 2 GB in free memory pages; this drops quite fast once the rsync starts from the NFS client. In vmstat -m, it's the Solaris line that increases in memory usage when the rsync starts. Any opinion on this is much appreciated.
Thanks.
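For reference, a rough way to check which interpretation holds would be to sample the ARC size against its effective cap while the rsync runs. This is only a sketch: the kstat.zfs.misc.arcstats OIDs are the stock FreeBSD ones, and the fallback to "n/a" just keeps the snippet from erroring out on systems without them:

```shell
# Sample the current ARC size and its effective cap (both in bytes).
arc_size=$(sysctl -n kstat.zfs.misc.arcstats.size 2>/dev/null || echo "n/a")
arc_cap=$(sysctl -n kstat.zfs.misc.arcstats.c_max 2>/dev/null || echo "n/a")
echo "ARC size: ${arc_size}  cap: ${arc_cap}"
# If the size keeps climbing past the cap while free memory falls, that
# would support the "flush threshold" reading rather than a hard limit.
```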