Well, I don't know what is practical for you. And I do not yet see the full picture: is it the number of snapshots that is problematic, the frequency with which they get created/deleted, or the size of the filesystems? Dunno.
Currently, I have remote disk mirroring active - that uses only a dozen snapshots, but they span all the data and get created/deleted every few minutes. The behaviour is far from troublesome, but still remarkably ugly.
For now, I would consider snapshots a valuable resource that brings along certain expenses.
I am currently experimenting with limiting arc_max - but then, my snapshots only ever reach the hundreds at most, and I never had an actual out-of-memory situation. OTOH, I still have early swapping enabled (vm.swap_idle_enabled=1, from earlier times with scarce memory), and that makes the OS push data out to swap before memory gets low. That might also be worth trying - but I will switch it off now; I don't like the browser having to first climb out of swap when I come back after an hour or so.
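For reference, both knobs can be set persistently in /etc/sysctl.conf; a minimal sketch with example values (the 2 GiB cap is just an illustration, not a recommendation - and note that on newer FreeBSD the ARC knob is spelled vfs.zfs.arc.max, with vfs.zfs.arc_max kept as a compatibility alias):

```shell
# /etc/sysctl.conf -- hypothetical example values
vm.swap_idle_enabled=0         # switch early/idle swapping back off
vfs.zfs.arc_max=2147483648     # cap the ARC at 2 GiB (value in bytes)
```

The same values can be applied at runtime with sysctl(8), e.g. `sysctl vfs.zfs.arc_max=2147483648`.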
And then, there is something I don't understand (from top(1) output):
Code:
Mem: 963M Active, 353M Inact, 827M Laundry, 5385M Wired, 102M Buf, 317M Free
ARC: 3110M Total, 431M MFU, 2326M MRU, 3432K Anon, 102M Header, 253M Other
2350M Compressed, 4676M Uncompressed, 1.99:1 Ratio
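Just to make the gap explicit (numbers taken from the top(1) snapshot above, in MiB):

```shell
# Wired total vs. ARC total from the top(1) output above, in MiB.
wired=5385
arc=3110
echo "$((wired - arc)) MiB of wired memory is outside the ARC"
# -> 2275 MiB of wired memory is outside the ARC
```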
What accounts for the difference of 5385M - 3110M?
It currently appears to me that limiting arc_max may not even tackle the actual issue, because memory gets claimed outside of the ARC (but still within wired kernel memory). This memory gets reclaimed when user processes are in need of mem - but only then.
It probably can be figured out who is doing that, but, uuh, that looks like work.
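If one did want to do that work, the usual starting point on FreeBSD would be the per-type kernel allocator statistics; a sketch of where to look (to be run on the affected machine - the zone names mentioned in the comments are just the usual ZFS suspects, yours may differ):

```shell
# Per-type kernel malloc(9) usage; the MemUse column shows
# which subsystem is holding how much wired memory
vmstat -m

# Per-zone UMA allocator statistics; large USED counts in
# ZFS-related zones (e.g. abd_chunk) would point at memory
# that ZFS holds outside the ARC accounting shown by top(1)
vmstat -z

# ZFS's own detailed counters, for cross-checking against top(1)
sysctl kstat.zfs.misc.arcstats
```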