Allocate RAM for zfs?

I wasn’t really sure where to put this as it is really about memory, FreeBSD, and zfs.

I’ve generally let memory manage itself on my 8 GB machine, but now I’m moving to 32 GB and I don’t really know how to maximize the utility of all that memory. Do I just let it manage itself, or do I carve out slices for ZFS and other stuff?

What do y’all do to manage memory in FreeBSD and ZFS, if anything? And if you have a philosophy vis-à-vis managing memory in FreeBSD, please share.
 
My opinions only (feel free to disagree):
Memory should be used. Why have 32 GB total and leave 16 GB free? Maybe in some use cases you want to set an upper limit on total usage so root can get in and do stuff if needed, but keep at most 10-15% free; so on your 32 GB, maybe try to use no more than 24-28 GB.

BUT (here's the big caveat):
It depends on what the system is being used for.
A general purpose workstation, graphical environment, users, I would bias towards "user experience" implies leaving more free for applications.
Servers with no users? Let the system use all the free memory to buffer files.

How does this apply to ZFS?
By default, ZFS wants to use all your free memory to buffer. That's the ARC. Something reads a file, and ZFS wants to do read-ahead buffering and keep stuff in RAM (the ARC is similar to typical file system buffers). Why do that? Well, it's a heck of a lot faster to pull data from RAM than it is from a device (hard disk, SSD, NVMe).
But what stays cached in ARC depends on the usage patterns (hence my distinction between servers and user workstations).

So what is Alain De Vos doing with those sysctls? Limiting the size of the ARC for ZFS; telling the OS: use at least this much (arc_min) but no more than that much (arc_max). That leaves the rest of RAM available for applications, which matters most on a user workstation.

For me, on systems that are primarily user workstations, I set arc_max to somewhere around 4 GB, simply because file access on those disks is intermittent.
On systems that serve files (say, something that is the backing store for streamed video files), I don't set anything and let the system figure it out. Streaming files from RAM is faster than streaming from a physical device.

BTW: I believe those values should be set in /boot/loader.conf. I could be wrong, but that's where I've always set them.
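For example, a workstation cap along those lines might look like this in /boot/loader.conf (the 4 GB / 1 GB values are just the workstation figures discussed above, not a recommendation; on FreeBSD 13+ with OpenZFS the tunables are also spelled vfs.zfs.arc.max / vfs.zfs.arc.min, and the older underscore names are still accepted):

```
# /boot/loader.conf - cap the ZFS ARC
vfs.zfs.arc_max="4G"    # never let the ARC grow beyond 4 GB
vfs.zfs.arc_min="1G"    # but let it keep at least 1 GB
```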

pkg install zfs-stats and get familiar with the info it provides.
 
I dislike systems that look at physical RAM installed (or physical RAM free right now) and then decide to take a major fixed chunk of it. That fails when you run several of those systems at the same time. Chrome is a very prominent one. I dunno whether Eclipse actually looks at RAM, but it sure behaves that way.

As for ZFS: "The default is all RAM less 1 GB, or one half of RAM, whichever is more". So on most real computers it only takes half. I permit that. For Chrome I have written an LD_PRELOAD library that fakes sysconf(_SC_PHYS_PAGES) to tell Chrome the machine has <n> memory pages, where I choose <n>.
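A minimal sketch of such a shim, assuming Linux/glibc (the environment variable name FAKE_PHYS_PAGES and the fallback behavior are my own illustrative choices, not the poster's actual library):

```c
/* fake_phys.c - LD_PRELOAD shim that lies to callers about physical RAM.
 * Build: cc -shared -fPIC -o fake_phys.so fake_phys.c -ldl
 * Use:   FAKE_PHYS_PAGES=1048576 LD_PRELOAD=./fake_phys.so chrome
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdlib.h>
#include <unistd.h>

long sysconf(int name)
{
    /* Look up the real sysconf once, lazily, via RTLD_NEXT. */
    static long (*real_sysconf)(int);
    if (!real_sysconf)
        real_sysconf = (long (*)(int))dlsym(RTLD_NEXT, "sysconf");

    if (name == _SC_PHYS_PAGES || name == _SC_AVPHYS_PAGES) {
        const char *fake = getenv("FAKE_PHYS_PAGES");
        if (fake)
            return atol(fake); /* report the configured page count */
    }
    return real_sysconf(name); /* everything else passes through */
}
```

Any application that sizes itself from sysconf(_SC_PHYS_PAGES) then sees whatever page count you put in the environment variable, while all other sysconf queries pass through untouched.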

In Linux the filesystem write cache in the VM subsystem can grow to half of RAM under heavy write load. I artificially limit that. If you don't, a big file write will make the OS drop pretty much all unmodified file-backed pages, which includes the code of every C and C++ application.
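On Linux that write-cache growth is limited through the vm.* sysctls; a sketch using the absolute-byte variants (the values here are purely illustrative, and vm.dirty_bytes takes precedence over the percentage-based vm.dirty_ratio when set):

```
# /etc/sysctl.d/99-writeback.conf - cap dirty page cache growth
vm.dirty_background_bytes = 268435456   # start background writeback at 256 MB
vm.dirty_bytes = 1073741824             # throttle writers at 1 GB dirty
```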
 
Hmm... food for thought. I primarily use the server to serve up my fossil repos. I used to rsync files over to the machine as backups, so I guess it'd be great to get back to being able to do that and configure the system as a remote target for rsync and as a fossil service provider.
 
Based on this, I'd be inclined to not tune anything and see how it runs. zfs-stats can tell you how effective your cache is and go from there.
 