Intensive swap usage


Aspiring Daemon


Alain De Vos
Garbage Collection. It's not always "give it enough run time", one often has to tune it for the usage patterns. I've run into cases where the bursty nature of the inputs caused problems. Steady state was fine, but bursty? Not a chance. Needed to tune GC to deal with it.




top -w only shows swap for swapped-out processes (the ones displayed like <cron>).
Regarding sysctl vm.overcommit=1: in the default mode you can allocate whatever you want, and you only get killed when you try to use it.
The test was done on a VM with 1 GB RAM + 2 GB swap. I had no problem calloc()ing 8 GB (the process just got killed when it tried to access the memory); even the RSS in top showed 8 GB.
[titus@luxe ~]$ sudo sysctl vm.overcommit=0
vm.overcommit: 0 -> 0
[titus@luxe ~]$ ./b
Allocated 80 * 100M chunks
reached chunk 0
reached chunk 1
reached chunk 2
reached chunk 3
reached chunk 4
reached chunk 5
reached chunk 6
reached chunk 7
[titus@luxe ~]$ sudo sysctl vm.overcommit=1
vm.overcommit: 0 -> 1
[titus@luxe ~]$ ./b
Allocated 6 * 100M chunks
reached chunk 0
reached chunk 1
reached chunk 2
reached chunk 3
reached chunk 4
reached chunk 5
The test program (b.c):
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	char *b[80];
	int i, j;

	/* allocate up to 80 chunks of 100 MB each */
	for (i = 0; i < 80; i++) {
		b[i] = calloc(100, 1000 * 1000);
		if (!b[i])
			break;
	}
	printf("Allocated %d * 100M chunks\n", i);

	/* now actually touch the memory, chunk by chunk */
	for (j = 0; j < i; j++) {
		memset(b[j], 44, 100 * 1000 * 1000);
		printf("reached chunk %d\n", j);
	}
	return 0;
}
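If anyone wants to reproduce it, the listing builds with the base system compiler (assuming it's saved as b.c; on a machine with enough memory it will simply allocate and touch all 80 chunks without being killed):

```shell
cc -o b b.c
./b
```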


Aspiring Daemon


I've left the system running for some time, and I think the Java GC was the cause of the swap usage. With -Xmx set, the Java daemons consume only a fraction of the memory observed before. I've also upgraded Java to version 11.
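For reference, the kind of invocation involved might look like this (a sketch only: the 512 MB cap, the pause goal, and app.jar are illustrative placeholders, not values from this thread; G1 is already the default collector on Java 11):

```shell
# Cap the heap so the GC collects within a fixed budget instead of letting
# the heap grow toward the JVM default (typically 1/4 of physical RAM),
# and give the collector a pause-time goal to aim for.
java -Xmx512m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -jar app.jar
```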

last pid: 89085;  load averages:  9,76,  8,18,  7,70  up 4+16:47:16    09:14:51
349 processes: 1 running, 348 sleeping
CPU: 32,6% user,  0,0% nice, 12,7% system,  1,0% interrupt, 53,7% idle
Mem: 7247M Active, 55G Inact, 13M Laundry, 22G Wired, 6140M Free
ARC: 4190M Total, 1761M MFU, 729M MRU, 17M Anon, 150M Header, 1532M Other
     951M Compressed, 2853M Uncompressed, 3,00:1 Ratio
Swap: 32G Total, 32G Free

The ARC doesn't fill up though.

ARC Misc:
    Deleted:                96386896
    Recycle Misses:                0
    Mutex Misses:                904670
    Evict Skips:                904670

ARC Size:
    Current Size (arcsize):        41,12%    4210,95M
    Target Size (Adaptive, c):    41,93%    4294,09M
    Min Size (Hard Limit, c_min):    29,98%    3070,49M
    Max Size (High Water, c_max):    ~3:1    10240,00M

ARC Size Breakdown:
    Recently Used Cache Size (p):    24,27%    1042,28M
    Freq. Used Cache Size (c-p):    75,72%    3251,81M

ARC Hash Breakdown:
    Elements Max:                5855744
    Elements Current:        20,90%    1224302
    Collisions:                25077261
    Chain Max:                0
    Chains:                    42520

ARC Eviction Statistics:
    Evicts Total:                985430173696
    Evicts Eligible for L2:        91,29%    899645513728
    Evicts Ineligible for L2:    8,70%    85784659968
    Evicts Cached to L2:            1752593637376

ARC Efficiency
    Cache Access Total:            4023014453
    Cache Hit Ratio:        97,43%    3919829152
    Cache Miss Ratio:        2,56%    103185301
    Actual Hit Ratio:        97,28%    3913976248

    Data Demand Efficiency:        92,53%
    Data Prefetch Efficiency:    9,67%

    CACHE HITS BY CACHE LIST:
      Most Recently Used (mru):    3,87%    151721138
      Most Frequently Used (mfu):    95,98%    3762255110
      MRU Ghost (mru_ghost):    0,08%    3160694
      MFU Ghost (mfu_ghost):    0,17%    6999636

    CACHE HITS BY DATA TYPE:
      Demand Data:            5,86%    229969904
      Prefetch Data:        0,06%    2648781
      Demand Metadata:        90,99%    3567022591
      Prefetch Metadata:        3,06%    120187876

    CACHE MISSES BY DATA TYPE:
      Demand Data:            17,98%    18555527
      Prefetch Data:        23,97%    24735746
      Demand Metadata:        16,31%    16833167
      Prefetch Metadata:        41,73%    43060861

The L2 looks completely useless, though.
L2 ARC Summary:
    Low Memory Aborts:            1
    R/W Clashes:                0
    Free on Write:                75439

L2 ARC Size:
    Current Size: (Adaptive)        75595,54M
    Header Size:            0,11%    90,30M

L2 ARC Evicts:
    Lock Retries:                407
    Upon Reading:                278

L2 ARC Read/Write Activity:
    Bytes Written:                376200,72M
    Bytes Read:                96269,17M

L2 ARC Breakdown:
    Access Total:                102563566
    Hit Ratio:            17,51%    17963579
    Miss Ratio:            82,48%    84599987
    Feeds:                    429657

      Sent Total:            100,00%    364929


Aspiring Daemon


The cache serves reads, so a lot of its usefulness depends on the exact usage pattern.
The ZFS cache works, roughly, as "data enters the MRU, is promoted to the MFU on re-access, and spills to the L2ARC on eviction". If you look at the efficiency section, you have a hit ratio of 97%, with most of it coming from the MFU. It doesn't matter whether the ARC is full or not; the usage pattern tells me most of your read data is reread, though not immediately, so it ends up in the MFU and gets reread from there.

And since your ARC is not close to full and most of the reads are satisfied from the ARC/MFU, the L2ARC is not really going to be used. So it's not "useless"; it's "not used because it's not needed".
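The headline percentages can be rechecked from the raw counters in the zfs-stats paste above, e.g. with awk:

```shell
# ARC hit ratio = cache hits / total cache accesses   -> ~97.4%
awk 'BEGIN { printf "ARC hit ratio: %.1f%%\n", 100 * 3919829152 / 4023014453 }'
# L2ARC hit ratio = L2 hits / total L2 accesses       -> ~17.5%
awk 'BEGIN { printf "L2 hit ratio:  %.1f%%\n", 100 * 17963579 / 102563566 }'
```

Only ~2.6% of accesses miss the ARC at all, which is why the L2ARC sees so little useful traffic.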

Your findings on tweaking the GC and upgrading mirror what I've seen in the past.