FreeBSD 11.1: ZFS using too much wired memory

ZFS likes memory, and lots of it. If you're struggling with memory issues you can limit the amount of ARC ZFS uses by setting vfs.zfs.arc_max in /etc/sysctl.conf.
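For example, an entry along these lines (the 16GB figure is purely illustrative, not a recommendation; note that a later post in this thread found the setting only took effect when placed in /boot/loader.conf instead):
Code:
# illustrative value only: 16GB expressed in bytes
vfs.zfs.arc_max=17179869184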
 
ZFS likes memory, and lots of it. If you're struggling with memory issues you can limit the amount of ARC ZFS uses by setting vfs.zfs.arc_max in /etc/sysctl.conf.


SirDice
I see, but is it normal for ZFS to use wired memory instead of the physical memory I put in, and to take a while before releasing it?


Code:
CPU:  0.1% user,  0.0% nice,  0.1% system,  0.0% interrupt, 99.7% idle
Mem: 20G Active, 83G Inact, 140M Laundry, 20G Wired, 1572M Buf, 1626M Free
ARC: 15G Total, 76K MFU, 15G MRU, 16K Anon, 27M Header, 397K Other
     15G Compressed, 15G Uncompressed, 1.00:1 Ratio
Swap: 128G Total, 128G Free

My wired memory is 20GB and has been for 5 days; it isn't released, and the system locks up once it reaches 52GB.
 
Gelo,

From an end user's perspective, there isn't any distinction between wired and physical memory; wired memory is physical memory.
"Wired" simply denotes memory that cannot be swapped to disk, either because a userland process called mlock(2) or because the kernel itself is preventing paging.

As SirDice has already stated, ZFS likes memory, and it is _normal_ for ZFS to allocate pages of memory that it doesn't want swapped to disk (hence "wired").
It's also _normal_, in the case of ZFS, that these pages stay in use for as long as ZFS needs them.
There's no sense in "premature deallocation" of pages that ZFS might need to use in the future.

The `top` snippet you provided seems perfectly fine.
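If you want to see how much of that wired memory is actually the ARC, you can compare the kernel's own counters with what top(1) shows (sysctl names as they appear on FreeBSD 11.x):
Code:
# current ARC size and configured ceiling, in bytes
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
# wired page count and page size; their product is top's "Wired" figure
sysctl vm.stats.vm.v_wire_count hw.pagesize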
 
is it normal for ZFS to use a large amount of wired memory and take a while before it releases it?

No, of course it's not normal: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594.

As SirDice has already stated, ZFS likes memory, and it is _normal_ for ZFS to allocate pages of memory that it doesn't want swapped to disk (hence "wired").
It's also _normal_, in the case of ZFS, that these pages stay in use for as long as ZFS needs them.
There's no sense in "premature deallocation" of pages that ZFS might need to use in the future.

In my (desktop) experience, without setting vfs.zfs.arc_max to some specific limit, ZFS gobbles memory until there is nothing left for running actual applications. I fail to see what's "normal" about that.
 
No, of course it's not normal: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594.



In my (desktop) experience, without setting vfs.zfs.arc_max to some specific limit, ZFS gobbles memory until there is nothing left for running actual applications. I fail to see what's "normal" about that.


shkhln so would it be best to modify vfs.zfs.arc_max in this situation? When my wired memory reaches 52GB the system locks up or freezes and I have to restart it.
 
The amount of memory used for the ARC by default is (IIRC) 100% of RAM minus 1GB; however, it is also dynamic, which means that if something else needs memory that is being used for the ARC, ZFS gives it up (though this does not work well for some).
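You can check what that default worked out to on your own machine; by that rule a 128GB box would end up with a ceiling of roughly 127GB (sysctl names as on FreeBSD 11.x):
Code:
# configured ARC ceiling and floor, in bytes
sysctl vfs.zfs.arc_max vfs.zfs.arc_min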
 
The amount of memory used for the ARC by default is (IIRC) 100% of RAM minus 1GB; however, it is also dynamic, which means that if something else needs memory that is being used for the ARC, ZFS gives it up (though this does not work well for some).


will adjusting the vfs.zfs.arc_max help?
 
What's "best" is different for different people. My suggestion is ignoring it altogether unless you have a specific problem to solve. ZFS using as much memory as possible is not a problem unless something else is effected.

The advice provided by shkhln to set vfs.zfs.arc_max is also fine.

https://wiki.freebsd.org/ZFSTuningGuide

If you are truly concerned about ZFS's memory usage that page will give you pointers to clamp down on the ARC cache.
 
What's "best" is different for different people. My suggestion is ignoring it altogether unless you have a specific problem to solve. ZFS using as much memory as possible is not a problem unless something else is effected.

The advice provided by shkhln to set vfs.zfs.arc_max is also fine.

https://wiki.freebsd.org/ZFSTuningGuide

If you are truly concerned about ZFS's memory usage that page will give you pointers to clamp down on the ARC cache.


Ok, thank you. The whole system is really affected: when the wired memory reaches 52GB, the system is already frozen and I need to reboot it.
 
Ok, thank you. The whole system is really affected: when the wired memory reaches 52GB, the system is already frozen and I need to reboot it.
You are giving us very little to work with here. Could you please give us the machine's physical specs, the purpose of the hardware, and the size of the ZFS pools? One of my main file servers, with multiple ZFS pools totaling 250 TB, is rock stable with 128 GB of RAM. I have no less than 50 NFS clients leeching on that thing at any given moment.
 
It is really stupid when the ZFS cache steals all memory so that user programs are starved of memory and even have to swap, all for useless caching.
There is no way around it except restricting its memory usage manually.
shkhln is completely right: this is unacceptable behavior, or at the very least very annoying, when using FreeBSD as a desktop machine.
 
You are giving us very little to work with here. Could you please give us the machine's physical specs, the purpose of the hardware, and the size of the ZFS pools? One of my main file servers, with multiple ZFS pools totaling 250 TB, is rock stable with 128 GB of RAM. I have no less than 50 NFS clients leeching on that thing at any given moment.

I have 32TB on ZFS
128GB memory
E5-2630 v4
1x 256GB SSD
I'm going to use it just as a data storage server, but I couldn't get started with it since the server locks up when wired memory reaches 52GB.
 
Then set the ZFS ARC max to, say, 40GB to have a safe margin. Note that you have to give the value as a byte count, like 40000000000; sysctl is too stupid to understand things like "40G".
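A minimal sketch of what that looks like (the 40000000000 figure is the round byte count from above; a later post in this thread reports that /boot/loader.conf was needed rather than /etc/sysctl.conf on at least one system):
Code:
# /boot/loader.conf -- ARC ceiling given as a plain byte count (~40GB)
vfs.zfs.arc_max="40000000000"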
 
I'm going to use it just as a data storage server, but I couldn't get started with it since the server locks up when wired memory reaches 52GB.
I would be suspicious of hardware issues.
Code:
last pid: 96541;  load averages:  0.70,  0.58,  0.44                                                       up 13+22:55:53  14:25:39
46 processes:  1 running, 45 sleeping
CPU:  0.0% user,  0.0% nice,  2.3% system,  0.0% interrupt, 97.7% idle
Mem: 8728M Active, 24G Inact, 58G Wired, 3631M Free
ARC: 32G Total, 2632M MFU, 27G MRU, 18M Anon, 1843M Header, 432M Other
     30G Compressed, 63G Uncompressed, 2.13:1 Ratio
Swap: 8192M Total, 8192M Free
This is on a machine with 96GB of memory, no tweaking. Uptime is almost 14 days.

Some more specs:
Code:
dice@hosaka:~ % zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
stor10k  1.09T  26.0G  1.06T         -     4%     2%  1.00x  ONLINE  -
zroot     145G  48.4G  96.6G         -    70%    33%  1.00x  ONLINE  -
Code:
dice@hosaka:~ % sudo vm list
Password:
NAME            DATASTORE       LOADER      CPU    MEMORY    VNC                  AUTOSTART    STATE
case            default         bhyveload   2      2048M     -                    Yes [2]      Running (1460)
freebsd11-img   default         uefi        1      512M      -                    No           Stopped
jenkins         default         bhyveload   4      16384M    -                    Yes [5]      Running (1898)
kdc             default         uefi        2      2048M     0.0.0.0:5900         Yes [1]      Running (1271)
lady3jane       default         uefi        2      4096M     -                    No           Stopped
sdgame01        default         grub        2      4096M     -                    No           Stopped
tessierashpool  default         bhyveload   4      8192M     -                    Yes [4]      Running (69648)
build11         stor10k         bhyveload   4      8192M     -                    No           Running (46143)
plex            stor10k         bhyveload   4      8192M     -                    Yes [6]      Running (42492)
wintermute      stor10k         bhyveload   4      8192M     -                    Yes [3]      Running (24523)
 
Then set the ZFS ARC max to, say, 40GB to have a safe margin. Note that you have to give the value as a byte count, like 40000000000; sysctl is too stupid to understand things like "40G".

Not true. You can use K, M, G and T suffixes with a lot of sysctl/loader settings. Unfortunately, the ZFS-related ones don't accept them. It seems the parsing of suffixes is done on a per-tunable basis, and no one added it to the ZFS ones. :(
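Whichever form the value ends up in, after a reboot you can confirm what actually took effect; the 40000000000 shown below is just the example figure from earlier in the thread:
Code:
% sysctl vfs.zfs.arc_max
vfs.zfs.arc_max: 40000000000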
 
SirDice the server is set up with the OS installed on an SSD and the 32TB configured with ZFS. Do you think that has anything to do with why the system eats so much wired memory to the point that it locks up?
 
... you can limit the amount of ARC ZFS uses by setting vfs.zfs.arc_max in /etc/sysctl.conf.

I tried this with my system, and it had no effect (zfs-stats -a | grep -i arc, executed before and after the change, returned approx 31GB). I put the entry
Code:
vfs.zfs.arc_max="4G"
in /boot/loader.conf and then zfs-stats started showing the expected results.
Code:
vfs.zfs.arc_max                         4294967296
Note that this number is just a best-guess starting point for troubleshooting, not a recommendation of any kind.
 
Wired memory isn't normally freed, and ZFS isn't aware of wired memory as such.
What ZFS is aware of is the ARC size, and it will reduce the ARC under certain conditions, such as when there is a pageout event from the pager, or when it grows beyond arc_max (except in certain circumstances where it cannot shrink the ARC, e.g. when the vnode cache is configured too big; see various posts here).
But wired memory is (nowadays, mostly) managed by the UMA allocator, and that one has its own scheme for deciding when reducing it would be appropriate. I haven't yet looked into how exactly that works; I can only see it happen (when memory gets scarce, there may suddenly be a drop in wired memory without any other apparent reason).
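For anyone who wants to watch this themselves: the UMA zone statistics are visible with vmstat(8), and on a ZFS box the zio_buf_*/zio_data_buf_* zones are typically where most of the wired memory sits (zone names can vary between releases, so treat the pattern below as a rough filter):
Code:
# per-zone UMA usage; SIZE x USED approximates the wired bytes held by each zone
vmstat -z | grep -E 'ITEM|zio_(data_)?buf'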
 
What if I decrease vfs.zfs.arc_max (also named vfs.zfs.arc.max in FreeBSD 13.0) on a running machine? Will the memory already wired by ZFS before that moment be wasted/leaked forever?
 