Adjusting the ARC for ZFS on a laptop with 8GB of RAM

Hi,
I installed FreeBSD 14 on an older HP ProBook 4540s laptop with 8GB of RAM; hardware-wise everything was recognized and works well. FreeBSD is the only operating system on the laptop, so I used the whole drive for ZFS. I'm wondering whether it would be a good idea to decrease the ARC size a bit. sysutils/zfs-stats shows me the following, from which I gather that the maximum value is around 6.83GB:

sh:
ARC Size:                               14.82%  1.01    GiB
        Target Size: (Adaptive)         15.40%  1.05    GiB
        Min Size (Hard Limit):          3.58%   250.64  MiB
        Max Size (High Water):          27:1    6.83    GiB
        Compressed Data Size:                   869.34  MiB
        Decompressed Data Size:                 1.68    GiB
        Compression Factor:                     1.98

This laptop is strictly for regular desktop use with Xfce. Occasionally, I take a ZFS snapshot, but that's pretty much everything I use from ZFS for now. Do you think it would be okay to reduce the ARC, considering I only have 8GB of RAM? I would appreciate any suggestions. Thank you.

Zoltan
 
Thank you. I tried setting the vfs.zfs.arc_max tunable in /boot/loader.conf, but it seems to have no effect; the value still reports as zero. I remember someone on this forum mentioning another line that sets arc_max in a much more forceful way, but I've lost the thread and can't find it anywhere.
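For reference, the line I tried in /boot/loader.conf was roughly this (the 5 GB figure is only an example value), and I checked the result after rebooting:

sh:
# /boot/loader.conf -- old-style tunable name, read at boot time
vfs.zfs.arc_max="5000000000"

# after reboot, query what the kernel actually uses:
# sysctl vfs.zfs.arc_max
# sysctl vfs.zfs.arc.max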
 
HDD, SSD, or solid state hybrid?
It's an SSD drive.

gpart show
sh:
=>       40  468862048  ada0  GPT  (224G)
         40     532480     1  efi  (260M)
     532520       1024     2  freebsd-boot  (512K)
     533544        984        - free -  (492K)
     534528    8388608     3  freebsd-swap  (4.0G)
    8923136  459937792     4  freebsd-zfs  (219G)
  468860928       1160        - free -  (580K)

Adding vfs.zfs.arc.max="5000000000" to /etc/sysctl.conf works. Now zfs-stats shows:
sh:
ARC Size:                               8.70%   414.89  MiB
        Target Size: (Adaptive)         9.61%   458.26  MiB
        Min Size (Hard Limit):          5.26%   250.64  MiB
        Max Size (High Water):          19:1    4.66    GiB
        Compressed Data Size:                   355.23  MiB
        Decompressed Data Size:                 730.11  MiB
        Compression Factor:                     2.06
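
To double-check the new cap at runtime, something like this should also work; as far as I can tell the knob can even be changed on the fly, without editing any file (same example value as above):

sh:
# query the current limit in bytes; 0 means "no manual limit"
sysctl vfs.zfs.arc.max
# or change it immediately for testing:
sysctl vfs.zfs.arc.max=5000000000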

I'll send you a private message with a link that you might find useful.
Thank you.
 
My opinions, others will agree, disagree, tell me I'm full of ****. All good.
ZFS ARC is interesting. Simplistically, it's a read cache: it grows to use almost all free memory, BUT under memory pressure the ARC will be freed so another process can use the memory.
The problem is that the freeing operation takes time, so system performance may appear to stall. Think about how Java garbage collection works and you get the idea.
On a typical "workstation" the usage pattern may not warrant a big read cache; memory use should instead be biased towards having enough free RAM to open new programs/tabs quickly. That is why folks may want to limit the amount of memory the ARC can use. My opinion is that on a workstation or a laptop, limiting the ARC may be beneficial, but it may not actually make a noticeable difference.
Install the package zfs-stats and actually look at the data: how much ARC you actually use, how much of it is actually serving requests, and so on. That helps you understand what is going on and lets you make informed decisions around tuning.
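A minimal way to get going, assuming the package name matches the port (sysutils/zfs-stats) and if I remember the flags right:

Code:
# install the reporting tool and look at the ARC numbers
pkg install zfs-stats
zfs-stats -A    # ARC summary: size, target, min/max
zfs-stats -E    # ARC efficiency: hit/miss ratios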
 

Attachments

  • 1713714650841.png (1.2 MB)
Thanks for all the replies. I really appreciate it. I'm still figuring out how to read these stats, but I've noticed that when the disk is used, the ARC value goes up.
Without any limit set via vfs.zfs.arc_max, it can go up to 4.32GB, as shown below. Over time the value decreases again, especially once disk activity goes back to idle, but it takes a while.

sh:
ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                210.84  k
        Mutex Misses:                           36
        Evict Skips:                            0

ARC Size:                               63.09%  4.32    GiB  <---- this one is the size in use
        Target Size: (Adaptive)         63.97%  4.38    GiB
        Min Size (Hard Limit):          3.58%   251.01  MiB
        Max Size (High Water):          27:1    6.84    GiB
        Compressed Data Size:                   3.98    GiB
        Decompressed Data Size:                 4.68    GiB
        Compression Factor:                     1.18

ARC Size Breakdown:
        Recently Used Cache Size:       84.16%  3.68    GiB
        Frequently Used Cache Size:     15.84%  710.03  MiB

I'll try to lower the value to 512MB and see how it works out.
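If I go with 512MB, the line would presumably look like this (512MB = 536870912 bytes); it can also be applied straight away for testing:

sh:
# /etc/sysctl.conf
vfs.zfs.arc.max=536870912

# apply without rebooting:
# sysctl vfs.zfs.arc.max=536870912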
 
Does the use of memory for caching create any problems for you? OOM kills? Otherwise the size of the ARC can be ignored, since it is freed when needed. I simply doubt that there is a problem at all.
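If you want to check, something along these lines should show whether the kernel has ever killed a process for lack of memory, or whether ZFS has ever throttled because of memory pressure (log path is the default one):

Code:
# any processes killed by the kernel due to memory exhaustion?
grep -i "was killed" /var/log/messages
# has the ARC ever throttled allocations under memory pressure?
sysctl kstat.zfs.misc.arcstats.memory_throttle_count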
 
… it can go up …

Generally: the higher, the better.

Forcing reduction of the ARC can worsen performance.

In the second screenshot below: this morning, free memory fell to 944 M (measured by GKrellM) when I intentionally opened some relatively memory-hungry sites in Firefox, after observing an issue affecting Thunderbird.

I have seen far less free memory in the past, not a problem. Generally: the more memory used, the better.

No problem with ARC, which is not manually tuned:

Code:
% grep -i zfs /etc/sysctl.conf | grep -i arc | grep -v \#
vfs.zfs.l2arc.noprefetch=0
vfs.zfs.l2arc.write_boost=335544320
% uname -aKU
FreeBSD mowa219-gjp4-zbook-freebsd 15.0-CURRENT FreeBSD 15.0-CURRENT main-n269594-26f6c148bce2 GENERIC amd64 1500018 1500018
%

– the tuning of L2ARC in my case is to increase its use, because the two USB flash drives given to L2ARC are much faster than the HDD (ada1) that FreeBSD runs from. (A rough sketch of how such cache devices get added follows after the geom output below.)

Code:
% lsblk
DEVICE         MAJ:MIN SIZE TYPE                                    LABEL MOUNT
ada0             0:147 112G GPT                                         - -
  <FREE>         -:-   1.0M -                                           - -
  ada0p1         0:155 112G freebsd-zfs                           gpt/112 <ZFS>
  <FREE>         -:-   456K -                                           - -
ada1             0:138 932G GPT                                         - -
  ada1p1         0:140 260M efi                              gpt/efiboot0 /boot/efi
  <FREE>         -:-   1.0M -                                           - -
  ada1p2         0:142  16G freebsd-swap                        gpt/swap0 SWAP
  ada1p2.eli     1:24   16G freebsd-swap                                - SWAP
  ada1p3         0:144 915G freebsd-zfs                          gpt/zfs0 <ZFS>
  ada1p3.eli     0:153 915G -                                           - -
  <FREE>         -:-   708K -                                           - -
da0              0:209  29G GPT                                         - -
  <FREE>         -:-   1.0M -                                           - -
  da0p1          0:210  29G freebsd-zfs                 gpt/cache1-august <ZFS>
  <FREE>         -:-   490K -                                           - -
da1              0:229  14G GPT                                         - -
  <FREE>         -:-   1.0M -                                           - -
  da1p1          0:230  14G freebsd-zfs                 gpt/cache2-august <ZFS>
  <FREE>         -:-   1.0M -                                           - -
% geom disk list
Geom name: ada1
Providers:
1. Name: ada1
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r3w3e6
   descr: HGST HTS721010A9E630
   lunid: 5000cca8c8f669d2
   ident: JR1000D33VPSBE
   rotationrate: 7200
   fwsectors: 63
   fwheads: 16

Geom name: cd0
Providers:
1. Name: cd0
   Mediasize: 0 (0B)
   Sectorsize: 2048
   Mode: r0w0e0
   descr: hp DVDRW  GUB0N
   ident: (null)
   rotationrate: unknown
   fwsectors: 0
   fwheads: 0

Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 120034123776 (112G)
   Sectorsize: 512
   Mode: r1w1e3
   descr: KINGSTON SV300S37A120G
   lunid: 50026b774c044e64
   ident: 50026B774C044E64
   rotationrate: 0
   fwsectors: 63
   fwheads: 16

Geom name: da0
Providers:
1. Name: da0
   Mediasize: 30943995904 (29G)
   Sectorsize: 512
   Mode: r1w1e3
   descr: Kingston DataTraveler 3.0
   ident: E0D55EA1C84FF390A9500FDA
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

Geom name: da1
Providers:
1. Name: da1
   Mediasize: 15502147584 (14G)
   Sectorsize: 512
   Mode: r1w1e3
   descr: Kingston DataTraveler 3.0
   lunname: PHISON  USB3
   lunid: 2000acde48234567
   ident: 08606E6B6446BFB138159554
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

%
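
For completeness: attaching the two flash drives as cache devices is nothing special, roughly the following (the pool name here is only a placeholder, mine differs):

Code:
# add the two USB sticks as L2ARC cache vdevs ("zroot" is an example pool name)
zpool add zroot cache gpt/cache1-august gpt/cache2-august
# confirm they appear and are being used:
zpool iostat -v zroot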

Re: use of memory, I have a more interesting set of screenshots from a month ago (24th March); they'll be a better fit for a future discussion elsewhere.
 

Attachments

  • 2024-04-23 06-58-58.png (602.5 KB)
  • 2024-04-23 07-07-43.png (760.8 KB)
I have a question. Is this ARC feature just like a regular disk cache implemented in the kernel? Why would it be separate from the kernel's caching? Or does the FreeBSD kernel not do any caching at all?
 
Oh it does.
The virtual file system object cache can be seen as "inactive" memory, that is, memory that is known to contain data from files; it can be re-activated the fastest. The ZFS ARC came a bit later, and it keeps the data compressed in memory. It also has more advanced algorithms for replacing data, which the "inactive" memory does not have: the latter is plain LRU, while the ARC is more complex, but that pays off.

The ARC also only caches ZFS data; the kernel caches everything else, no matter the filesystem, in its inactive memory management.
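You can watch the two side by side; something like this should show the ARC size and the inactive page count (top's header also lists ARC and Inact together on a ZFS system):

Code:
# current ARC size in bytes
sysctl kstat.zfs.misc.arcstats.size
# inactive pages (multiply by the page size, usually 4096 bytes)
sysctl vm.stats.vm.v_inactive_count
# one batch-mode snapshot of top; the header shows both
top -b | head -n 8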
 
It also has more advanced algorithms for replacing data, which the "inactive" memory does not have: the latter is plain LRU, while the ARC is more complex, but that pays off.
My opinion/understanding is that this is the most important aspect of the ARC. The algorithms around how and why data falls out of the cache are better, for specific reasons.
 