[Solved] L2ARC is disabled. Why?

I am stumped. I've been running a FreeBSD 9 server with ZFS for 5 years and have now done a completely fresh install of FreeBSD 11. Everything (meaning the pool layout) is configured as it was on the old system (as far as that was possible), but created from scratch.

The problem I face: I noticed that neither the log nor the cache device is being touched, according to zpool iostat -v:

Code:
logs                -      -      -      -      -      -
  gpt/log           0  1008M      0      0      0      0
cache               -      -      -      -      -      -
  gpt/cache         0   191G      0      0      0      0
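
(For anyone checking this themselves: an interval argument makes zpool iostat sample the counters continuously, e.g.:)

Code:
zpool iostat -v zstore 5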

Installing zfs-stats (sysutils/zfs-stats) and running zfs-stats -a confirmed that something was off; the L2ARC is reported as disabled:

Code:
------------------------------------------------------------------------
ZFS Subsystem Report                            Thu May 11 20:12:49 2017
------------------------------------------------------------------------

System Information:

        Kernel Version:                         1100122 (osreldate)
        Hardware Platform:                      i386
        Processor Architecture:                 i386

        ZFS Storage pool Version:               5000
        ZFS Filesystem Version:                 5

FreeBSD 11.0-RELEASE-p1 #0 r306420: Thu Sep 29 03:40:55 UTC 2016 root
 8:12PM  up 22:52, 3 users, load averages: 0.33, 0.37, 0.33

------------------------------------------------------------------------

System Memory:

        0.75%   25.19   MiB Active,     88.63%  2.93    GiB Inact
        8.30%   280.48  MiB Wired,      0.00%   0 Cache
        2.33%   78.63   MiB Free,       0.00%   0 Gap

        Real Installed:                         3.50    GiB
        Real Available:                 95.96%  3.36    GiB
        Real Managed:                   98.30%  3.30    GiB

        Logical Total:                          3.50    GiB
        Logical Used:                   14.20%  509.00  MiB
        Logical Free:                   85.80%  3.00    GiB

Kernel Memory:                                  112.07  MiB
        Data:                           74.31%  83.28   MiB
        Text:                           25.69%  28.79   MiB

Kernel Memory Map:                              412.00  MiB
        Size:                           39.77%  163.85  MiB
        Free:                           60.23%  248.15  MiB

------------------------------------------------------------------------

ARC Summary: (THROTTLED)
        Memory Throttle Count:                  7

ARC Misc:
        Deleted:                                35.20m
        Recycle Misses:                         0
        Mutex Misses:                           216.28k
        Evict Skips:                            186.37m

ARC Size:                               23.24%  59.84   MiB
        Target Size: (Adaptive)         12.50%  32.19   MiB
        Min Size (Hard Limit):          12.50%  32.19   MiB
        Max Size (High Water):          8:1     257.50  MiB

ARC Size Breakdown:
        Recently Used Cache Size:       42.19%  25.24   MiB
        Frequently Used Cache Size:     57.81%  34.59   MiB

ARC Hash Breakdown:
        Elements Max:                           32.44k
        Elements Current:               13.45%  4.36k
        Collisions:                             437.98k
        Chain Max:                              3
        Chains:                                 17

------------------------------------------------------------------------

ARC Efficiency:                                 13.56m
        Cache Hit Ratio:                49.01%  6.65m
        Cache Miss Ratio:               50.99%  6.91m
        Actual Hit Ratio:               48.61%  6.59m

        Data Demand Efficiency:         62.19%  3.07m

        CACHE HITS BY CACHE LIST:
          Most Recently Used:           46.64%  3.10m
          Most Frequently Used:         52.54%  3.49m
          Most Recently Used Ghost:     21.17%  1.41m
          Most Frequently Used Ghost:   39.48%  2.62m

        CACHE HITS BY DATA TYPE:
          Demand Data:                  28.72%  1.91m
          Prefetch Data:                0.00%   0
          Demand Metadata:              70.37%  4.68m
          Prefetch Metadata:            0.92%   60.84k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  16.78%  1.16m
          Prefetch Data:                0.00%   0
          Demand Metadata:              82.37%  5.70m
          Prefetch Metadata:            0.85%   58.86k

------------------------------------------------------------------------

L2ARC is disabled

------------------------------------------------------------------------


------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------

ZFS Tunables (sysctl):
        kern.maxusers                           384
        vm.kmem_size                            432013312
        vm.kmem_size_scale                      3
        vm.kmem_size_min                        12582912
        vm.kmem_size_max                        432013312
        vfs.zfs.trim.max_interval               1
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.txg_delay                  32
        vfs.zfs.trim.enabled                    1
        vfs.zfs.vol.unmap_enabled               1
        vfs.zfs.vol.recursive                   0
        vfs.zfs.vol.mode                        1
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.acl                     1
        vfs.zfs.version.ioctl                   6
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.zio.exclude_metadata            0
        vfs.zfs.zio.use_uma                     0
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.min_auto_ashift                 9
        vfs.zfs.max_auto_ashift                 13
        vfs.zfs.vdev.trim_max_pending           10000
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.trim_max_active            64
        vfs.zfs.vdev.trim_min_active            1
        vfs.zfs.vdev.scrub_max_active           2
        vfs.zfs.vdev.scrub_min_active           1
        vfs.zfs.vdev.async_write_max_active     10
        vfs.zfs.vdev.async_write_min_active     1
        vfs.zfs.vdev.async_read_max_active      3
        vfs.zfs.vdev.async_read_min_active      1
        vfs.zfs.vdev.sync_write_max_active      10
        vfs.zfs.vdev.sync_write_min_active      10
        vfs.zfs.vdev.sync_read_max_active       10
        vfs.zfs.vdev.sync_read_min_active       10
        vfs.zfs.vdev.max_active                 1000
        vfs.zfs.vdev.async_write_active_max_dirty_percent 60
        vfs.zfs.vdev.async_write_active_min_dirty_percent 30
        vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
        vfs.zfs.vdev.mirror.non_rotating_inc    0
        vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
        vfs.zfs.vdev.mirror.rotating_seek_inc   5
        vfs.zfs.vdev.mirror.rotating_inc        0
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.metaslabs_per_vdev         200
        vfs.zfs.txg.timeout                     5
        vfs.zfs.space_map_blksz                 4096
        vfs.zfs.spa_slop_shift                  5
        vfs.zfs.spa_asize_inflation             24
        vfs.zfs.deadman_enabled                 1
        vfs.zfs.deadman_checktime_ms            5000
        vfs.zfs.deadman_synctime_ms             1000000
        vfs.zfs.debug_flags                     0
        vfs.zfs.recover                         0
        vfs.zfs.spa_load_verify_data            1
        vfs.zfs.spa_load_verify_metadata        1
        vfs.zfs.spa_load_verify_maxinflight     10000
        vfs.zfs.ccw_retry_interval              300
        vfs.zfs.check_hostid                    1
        vfs.zfs.mg_fragmentation_threshold      85
        vfs.zfs.mg_noalloc_threshold            0
        vfs.zfs.condense_pct                    200
        vfs.zfs.metaslab.bias_enabled           1
        vfs.zfs.metaslab.lba_weighting_enabled  1
        vfs.zfs.metaslab.fragmentation_factor_enabled 1
        vfs.zfs.metaslab.preload_enabled        1
        vfs.zfs.metaslab.preload_limit          3
        vfs.zfs.metaslab.unload_delay           8
        vfs.zfs.metaslab.load_pct               50
        vfs.zfs.metaslab.min_alloc_size         33554432
        vfs.zfs.metaslab.df_free_pct            4
        vfs.zfs.metaslab.df_alloc_threshold     131072
        vfs.zfs.metaslab.debug_unload           0
        vfs.zfs.metaslab.debug_load             0
        vfs.zfs.metaslab.fragmentation_threshold 70
        vfs.zfs.metaslab.gang_bang              16777217
        vfs.zfs.free_bpobj_enabled              1
        vfs.zfs.free_max_blocks                 -1
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.scan_idle                       50
        vfs.zfs.scrub_delay                     4
        vfs.zfs.resilver_delay                  2
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.max_distance             8388608
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                1
        vfs.zfs.delay_scale                     500000
        vfs.zfs.delay_min_dirty_percent         60
        vfs.zfs.dirty_data_sync                 67108864
        vfs.zfs.dirty_data_max_percent          10
        vfs.zfs.dirty_data_max_max              4294967296
        vfs.zfs.dirty_data_max                  360617574
        vfs.zfs.max_recordsize                  1048576
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.l2c_only_size                   0
        vfs.zfs.mfu_ghost_data_lsize            12904960
        vfs.zfs.mfu_ghost_metadata_lsize        11940864
        vfs.zfs.mfu_ghost_size                  24845824
        vfs.zfs.mfu_data_lsize                  0
        vfs.zfs.mfu_metadata_lsize              65536
        vfs.zfs.mfu_size                        350208
        vfs.zfs.mru_ghost_data_lsize            65024
        vfs.zfs.mru_ghost_metadata_lsize        8585728
        vfs.zfs.mru_ghost_size                  8650752
        vfs.zfs.mru_data_lsize                  0
        vfs.zfs.mru_metadata_lsize              32768
        vfs.zfs.mru_size                        24934400
        vfs.zfs.anon_data_lsize                 0
        vfs.zfs.anon_metadata_lsize             0
        vfs.zfs.anon_size                       9734144
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_limit                  67502080
        vfs.zfs.arc_free_target                 6050
        vfs.zfs.arc_shrink_shift                7
        vfs.zfs.arc_average_blocksize           8192
        vfs.zfs.arc_min                         33751040
        vfs.zfs.arc_max                         270008320

------------------------------------------------------------------------

Output from zfs get secondarycache zstore shows:

Code:
NAME    PROPERTY        VALUE           SOURCE
zstore  secondarycache  all             default
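
(A per-dataset override could also be ruled out with a recursive check:)

Code:
zfs get -r secondarycache zstore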

And for completeness, the output from zpool status:

Code:
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Wed May 10 21:36:03 2017
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            gpt/main0  ONLINE       0     0     0
            gpt/main1  ONLINE       0     0     0

errors: No known data errors


  pool: zstore
 state: ONLINE
  scan: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        zstore          ONLINE       0     0     0
          raidz1-0      ONLINE       0     0     0
            label/v1d1  ONLINE       0     0     0
            label/v1d2  ONLINE       0     0     0
            label/v1d3  ONLINE       0     0     0
            label/v1d4  ONLINE       0     0     0
          raidz1-1      ONLINE       0     0     0
            label/v2d1  ONLINE       0     0     0
            label/v2d2  ONLINE       0     0     0
            label/v2d3  ONLINE       0     0     0
            label/v2d4  ONLINE       0     0     0
        logs
          gpt/log       ONLINE       0     0     0
        cache
          gpt/cache     ONLINE       0     0     0

And from gpart show -l ada6:

Code:
=>       40  468862048  ada6  GPT  (224G)
         40       1024     1  boot  (512K)
       1064   33554432     2  main0  (16G)
   33555496   33554432     3  main1  (16G)
   67109928    2097152     4  log  (1.0G)
   69207080  399655000     5  cache  (191G)
  468862080          8        - free -  (4.0K)

I have tried removing the cache from the pool and re-adding it.
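
Roughly like this, for reference (same GPT labels as above):

Code:
zpool remove zstore gpt/cache
zpool add zstore cache gpt/cache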

What could be the reason for the L2ARC being disabled, and for the ZIL not being written to?
 
Your system seems to be under heavy memory pressure:
System Memory:

0.75% 25.19 MiB Active, 88.63% 2.93 GiB Inact
8.30% 280.48 MiB Wired, 0.00% 0 Cache
2.33% 78.63 MiB Free, 0.00% 0 Gap

[...]

ARC Summary: (THROTTLED)
Memory Throttle Count: 7

The ARC is currently throttled due to recent memory exhaustion. Inactive memory is memory that has just been released by a process but is still kept around by the kernel for possible reuse. So some process that has just stopped used up all the available RAM, leaving none for ZFS to work properly.

The L2ARC only holds data that has fallen out of the ARC. To reference the data within the L2ARC, additional memory is needed (roughly 25 MB per GB of L2ARC). So for your 191 GB cache you'd need at least ~4.8 GB of additional RAM to actually use the L2ARC - on top of the RAM you need for an ARC large enough to hold data that can be pushed out to the L2ARC in the first place...
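
Back-of-the-envelope, using that rule of thumb:

Code:
# ~25 MB of ARC header space per GB of L2ARC (rule of thumb)
echo "191 * 25" | bc    # => 4775, i.e. ~4.8 GB of RAM for headers alone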

The high-water mark of your ARC is only about 257 MB, which is extremely low. I suspect the L2ARC got disabled because there just isn't enough memory available, and what little is available is better spent caching actual data than holding header metadata for the slow L2ARC.
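
(Both limits can be read straight from sysctl, e.g.:)

Code:
sysctl vfs.zfs.arc_min vfs.zfs.arc_max
sysctl kstat.zfs.misc.arcstats.size    # current ARC size in bytes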


There is only one piece of advice to give: add more RAM! A lot of RAM! And when you're done, add some more.
If your L2ARC is reasonably sized, I'd guess your pool is up in the two-digit TB range. For a pool of that size to work properly and with decent performance, the system should have an absolute minimum of ~32 GB, better >64 GB of RAM. The more, the better.
 
Thank you for the in-depth reply. It looks like you nailed it, though I have yet to find out why so little RAM is available. The machine has 16 GB (the maximum supported) of RAM installed, just as it did under the FreeBSD 9 installation.

However, grep memory /var/run/dmesg.boot gives the following output:

Code:
real memory  = 17179869184 (16384 MB)
avail memory = 3540398080 (3376 MB)

Here are the two memory-related lines from top:

Code:
Mem: 26M Active, 4948K Inact, 119M Wired, 3231M Free
ARC: 54M Total, 18M MFU, 34M MRU, 32K Anon, 300K Header, 1516K Other

I don't remember the reason, but on the old FreeBSD 9 installation I had the following lines in loader.conf:

Code:
vm.kmem_size="10240M"
vm.kmem_size_max="13824M"

After I tried to add them to the new installation, I got a kernel panic and had to boot from a live memory stick to recover.
 
The system somehow only sees 3.5GB RAM:


Code:
       Real Installed:                         3.50    GiB


Code:
Mem: 26M Active, 4948K Inact, 119M Wired, 3231M Free
ARC: 54M Total, 18M MFU, 34M MRU, 32K Anon, 300K Header, 1516K Other

Is this some ancient 32-bit machine without PAE support? I'm not really familiar with 32-bit builds of FreeBSD, since I haven't used a 32-bit platform in _many_ years, but I think you have to rebuild the kernel to support PAE.
ZFS also has some limitations on 32-bit machines, so you might want to reconsider running your storage pool on such a dinosaur...
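
A quick way to check which build is actually running:

Code:
uname -mp       # machine / processor architecture: i386 vs. amd64
sysctl hw.machine hw.machine_arch hw.physmem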
 
Well, it's not a dinosaur - it's a 2012 machine with an ASUS motherboard and a UEFI BIOS. FreeBSD 9 on the same machine used all of the available memory, but that was a custom build (I had to make some changes to the storage controller code), and I may have inadvertently selected 64-bit support at the same time.

I was not aware that the build I installed was 32-bit. I installed from this image: FreeBSD-11.0-RELEASE-i386-dvd1.iso

Should I post the memory question as a separate thread in the System Hardware subforum?

EDIT: CPU information from dmesg:

Code:
FreeBSD 11.0-RELEASE-p1 #0 r306420: Thu Sep 29 03:40:55 UTC 2016
    root@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC i386
FreeBSD clang version 3.8.0 (tags/RELEASE_380/final 262564) (based on LLVM 3.8.0)
VT(vga): resolution 640x480
CPU: Intel(R) Celeron(R) CPU G530 @ 2.40GHz (2400.05-MHz 686-class CPU)
  Origin="GenuineIntel"  Id=0x206a7  Family=0x6  Model=0x2a  Stepping=7
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0xd9ae3bf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,POPCNT,TSCDLT,XSAVE,OSXSAVE>
  AMD Features=0x28100000<NX,RDTSCP,LM>
  AMD Features2=0x1<LAHF>
  XSAVE Features=0x1<XSAVEOPT>
  VT-x: (disabled in BIOS) PAT,HLT,MTF,PAUSE,EPT,UG,VPID
  TSC: P-state invariant, performance statistics
real memory  = 17179869184 (16384 MB)
avail memory = 3540398080 (3376 MB)

PAE is listed in the features list!

EDIT 2:

Booted into FreeBSD 9 on the same machine, and here is what dmesg says on it:

Code:
FreeBSD 9.0-RELEASE #0: Thu Jun  7 20:35:46 UTC 2012
    root@sun:/usr/obj/usr/src/sys/GENERIC amd64
CPU: Intel(R) Celeron(R) CPU G530 @ 2.40GHz (2400.06-MHz K8-class CPU)
  Origin = "GenuineIntel"  Id = 0x206a7  Family = 6  Model = 2a  Stepping = 7
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x59ae3bf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,POPCNT,TSCDLT,XSAVE>
  AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
  AMD Features2=0x1<LAHF>
  TSC: P-state invariant, performance statistics
real memory  = 17179869184 (16384 MB)
avail memory = 16408309760 (15648 MB)

Looks like I botched it - I should have used the amd64 version! I will try to copy the /boot/* data from the amd64 memstick to the installation and see if it helps. (The "amd" in amd64 threw me off, as I am using an Intel CPU.)
 
Reinstalled with the amd64 build, and all 16 GB of memory is now available. Here is the updated zfs-stats -a report (the system has just started, so the cache is not warmed up yet):

Code:
------------------------------------------------------------------------
ZFS Subsystem Report                            Sun May 14 14:58:00 2017
------------------------------------------------------------------------

System Information:

        Kernel Version:                         1100122 (osreldate)
        Hardware Platform:                      amd64
        Processor Architecture:                 amd64

        ZFS Storage pool Version:               5000
        ZFS Filesystem Version:                 5

FreeBSD 11.0-RELEASE-p1 #0 r306420: Thu Sep 29 01:43:23 UTC 2016 root
 2:58PM  up  3:04, 5 users, load averages: 0.55, 0.85, 0.58

------------------------------------------------------------------------

System Memory:

        1.10%   174.03  MiB Active,     1.93%   304.04  MiB Inact
        30.55%  4.71    GiB Wired,      0.00%   0 Cache
        66.42%  10.24   GiB Free,       0.00%   4.00    KiB Gap

        Real Installed:                         16.00   GiB
        Real Available:                 98.99%  15.84   GiB
        Real Managed:                   97.35%  15.42   GiB

        Logical Total:                          16.00   GiB
        Logical Used:                   34.14%  5.46    GiB
        Logical Free:                   65.86%  10.54   GiB

Kernel Memory:                                  321.94  MiB
        Data:                           89.20%  287.19  MiB
        Text:                           10.80%  34.75   MiB

Kernel Memory Map:                              15.42   GiB
        Size:                           19.23%  2.97    GiB
        Free:                           80.77%  12.45   GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                37
        Recycle Misses:                         0
        Mutex Misses:                           0
        Evict Skips:                            1.33k

ARC Size:                               23.34%  3.37    GiB
        Target Size: (Adaptive)         100.00% 14.42   GiB
        Min Size (Hard Limit):          12.50%  1.80    GiB
        Max Size (High Water):          8:1     14.42   GiB

ARC Size Breakdown:
        Recently Used Cache Size:       50.00%  7.21    GiB
        Frequently Used Cache Size:     50.00%  7.21    GiB

ARC Hash Breakdown:
        Elements Max:                           281.94k
        Elements Current:               99.97%  281.87k
        Collisions:                             20.28k
        Chain Max:                              3
        Chains:                                 10.95k

------------------------------------------------------------------------

ARC Efficiency:                                 3.30m
        Cache Hit Ratio:                90.61%  2.99m
        Cache Miss Ratio:               9.39%   309.73k
        Actual Hit Ratio:               89.92%  2.97m

        Data Demand Efficiency:         99.70%  2.39m
        Data Prefetch Efficiency:       2.90%   138

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             0.77%   22.88k
          Most Recently Used:           8.64%   258.39k
          Most Frequently Used:         90.59%  2.71m
          Most Recently Used Ghost:     0.00%   0
          Most Frequently Used Ghost:   0.00%   0

        CACHE HITS BY DATA TYPE:
          Demand Data:                  79.86%  2.39m
          Prefetch Data:                0.00%   4
          Demand Metadata:              19.36%  578.93k
          Prefetch Metadata:            0.77%   23.04k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  2.29%   7.08k
          Prefetch Data:                0.04%   134
          Demand Metadata:              96.03%  297.44k
          Prefetch Metadata:            1.64%   5.08k

------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
        Passed Headroom:                        19.72k
        Tried Lock Failures:                    829
        IO In Progress:                         0
        Low Memory Aborts:                      0
        Free on Write:                          0
        Writes While Full:                      0
        R/W Clashes:                            0
        Bad Checksums:                          0
        IO Errors:                              0
        SPA Mismatch:                           106.90m

L2 ARC Size: (Adaptive)                         951.00  KiB
        Header Size:                    0.00%   0

L2 ARC Breakdown:                               95.78k
        Hit Ratio:                      0.00%   0
        Miss Ratio:                     100.00% 95.78k
        Feeds:                                  9.86k

L2 ARC Buffer:
        Bytes Scanned:                          812.30  GiB
        Buffer Iterations:                      9.86k
        List Iterations:                        39.45k
        NULL List Iterations:                   0

L2 ARC Writes:
        Writes Sent:                    100.00% 43

------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)

DMU Efficiency:                                 11.68m
        Hit Ratio:                      0.41%   48.03k
        Miss Ratio:                     99.59%  11.64m

        Colinear:                               0
          Hit Ratio:                    100.00% 0
          Miss Ratio:                   100.00% 0

        Stride:                                 0
          Hit Ratio:                    100.00% 0
          Miss Ratio:                   100.00% 0

DMU Misc:
        Reclaim:                                0
          Successes:                    100.00% 0
          Failures:                     100.00% 0

        Streams:                                0
          +Resets:                      100.00% 0
          -Resets:                      100.00% 0
          Bogus:                                0

------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------

ZFS Tunables (sysctl):
        kern.maxusers                           1349
        vm.kmem_size                            16554647552
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        1319413950874
        vfs.zfs.trim.max_interval               1
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.txg_delay                  32
        vfs.zfs.trim.enabled                    1
        vfs.zfs.vol.unmap_enabled               1
        vfs.zfs.vol.recursive                   0
        vfs.zfs.vol.mode                        1
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.acl                     1
        vfs.zfs.version.ioctl                   6
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.zio.exclude_metadata            0
        vfs.zfs.zio.use_uma                     1
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.min_auto_ashift                 9
        vfs.zfs.max_auto_ashift                 13
        vfs.zfs.vdev.trim_max_pending           10000
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.trim_max_active            64
        vfs.zfs.vdev.trim_min_active            1
        vfs.zfs.vdev.scrub_max_active           2
        vfs.zfs.vdev.scrub_min_active           1
        vfs.zfs.vdev.async_write_max_active     10
        vfs.zfs.vdev.async_write_min_active     1
        vfs.zfs.vdev.async_read_max_active      3
        vfs.zfs.vdev.async_read_min_active      1
        vfs.zfs.vdev.sync_write_max_active      10
        vfs.zfs.vdev.sync_write_min_active      10
        vfs.zfs.vdev.sync_read_max_active       10
        vfs.zfs.vdev.sync_read_min_active       10
        vfs.zfs.vdev.max_active                 1000
        vfs.zfs.vdev.async_write_active_max_dirty_percent 60
        vfs.zfs.vdev.async_write_active_min_dirty_percent 30
        vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
        vfs.zfs.vdev.mirror.non_rotating_inc    0
        vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
        vfs.zfs.vdev.mirror.rotating_seek_inc   5
        vfs.zfs.vdev.mirror.rotating_inc        0
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.metaslabs_per_vdev         200
        vfs.zfs.txg.timeout                     5
        vfs.zfs.space_map_blksz                 4096
        vfs.zfs.spa_slop_shift                  5
        vfs.zfs.spa_asize_inflation             24
        vfs.zfs.deadman_enabled                 1
        vfs.zfs.deadman_checktime_ms            5000
        vfs.zfs.deadman_synctime_ms             1000000
        vfs.zfs.debug_flags                     0
        vfs.zfs.recover                         0
        vfs.zfs.spa_load_verify_data            1
        vfs.zfs.spa_load_verify_metadata        1
        vfs.zfs.spa_load_verify_maxinflight     10000
        vfs.zfs.ccw_retry_interval              300
        vfs.zfs.check_hostid                    1
        vfs.zfs.mg_fragmentation_threshold      85
        vfs.zfs.mg_noalloc_threshold            0
        vfs.zfs.condense_pct                    200
        vfs.zfs.metaslab.bias_enabled           1
        vfs.zfs.metaslab.lba_weighting_enabled  1
        vfs.zfs.metaslab.fragmentation_factor_enabled 1
        vfs.zfs.metaslab.preload_enabled        1
        vfs.zfs.metaslab.preload_limit          3
        vfs.zfs.metaslab.unload_delay           8
        vfs.zfs.metaslab.load_pct               50
        vfs.zfs.metaslab.min_alloc_size         33554432
        vfs.zfs.metaslab.df_free_pct            4
        vfs.zfs.metaslab.df_alloc_threshold     131072
        vfs.zfs.metaslab.debug_unload           0
        vfs.zfs.metaslab.debug_load             0
        vfs.zfs.metaslab.fragmentation_threshold 70
        vfs.zfs.metaslab.gang_bang              16777217
        vfs.zfs.free_bpobj_enabled              1
        vfs.zfs.free_max_blocks                 -1
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.scan_idle                       50
        vfs.zfs.scrub_delay                     4
        vfs.zfs.resilver_delay                  2
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.max_distance             8388608
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                0
        vfs.zfs.delay_scale                     500000
        vfs.zfs.delay_min_dirty_percent         60
        vfs.zfs.dirty_data_sync                 67108864
        vfs.zfs.dirty_data_max_percent          10
        vfs.zfs.dirty_data_max_max              4294967296
        vfs.zfs.dirty_data_max                  1700598579
        vfs.zfs.max_recordsize                  1048576
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.l2c_only_size                   0
        vfs.zfs.mfu_ghost_data_lsize            0
        vfs.zfs.mfu_ghost_metadata_lsize        0
        vfs.zfs.mfu_ghost_size                  0
        vfs.zfs.mfu_data_lsize                  1138573312
        vfs.zfs.mfu_metadata_lsize              14345728
        vfs.zfs.mfu_size                        1153248768
        vfs.zfs.mru_ghost_data_lsize            0
        vfs.zfs.mru_ghost_metadata_lsize        0
        vfs.zfs.mru_ghost_size                  0
        vfs.zfs.mru_data_lsize                  1649679360
        vfs.zfs.mru_metadata_lsize              31120896
        vfs.zfs.mru_size                        1904804352
        vfs.zfs.anon_data_lsize                 0
        vfs.zfs.anon_metadata_lsize             0
        vfs.zfs.anon_size                       32768
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_limit                  3870226432
        vfs.zfs.arc_free_target                 28061
        vfs.zfs.arc_shrink_shift                7
        vfs.zfs.arc_average_blocksize           8192
        vfs.zfs.arc_min                         1935113216
        vfs.zfs.arc_max                         15480905728

------------------------------------------------------------------------

I reduced the L2ARC size to 100 GB, so that the in-memory cache holds more data and less L2ARC metadata.
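
For reference, the shrink can be done roughly like this (a sketch; partition index 5 is the cache partition from the gpart listing above):

Code:
zpool remove zstore gpt/cache       # detach the cache vdev first
gpart resize -i 5 -s 100G ada6      # shrink the cache partition
zpool add zstore cache gpt/cache    # re-attach it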

If there are any hints for tuning the system, I am open to suggestions.
 