ZFS: Help with tuning ZFS settings for large media files?

Hey, I'm looking for help making good use of the hardware in my media server, which stores lots of large files: primarily 4K video, but also plenty of VM backups and OS images. It's a read-heavy system, so read throughput to the network is my main goal.

Any hints you can give me on tuning based on the details below would be very welcome!

Quick system overview:
Code:
--- CPU Info ----------------------------------------------------------
CPU: AMD Ryzen 5 5600G with Radeon Graphics          (amd64) 12 cores
CPU Load: 1.74u, 1.80n, 1.63s

--- Mem Info ----------------------------------------------------------
Physical  : [###................................] 11%   (3519/31297)
Swap      : [###################################] 100%  (32768/32768)

--- ZFS Info ----------------------------------------------------------
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
media  54.6T  23.3T  31.3T        -         -     7%    42%  1.00x    ONLINE  -
stage   696G  41.9G   654G        -         -     0%     6%  1.00x    ONLINE  -
zroot   448G  54.0G   394G        -         -    11%    12%  1.00x    ONLINE  -

Here's the layout:
Code:
~# zpool status -v
  pool: media
 state: ONLINE
config:
        NAME                       STATE     READ WRITE CKSUM
        media                      ONLINE       0     0     0
          raidz2-0                 ONLINE       0     0     0
            ada0                   ONLINE       0     0     0
            ada1                   ONLINE       0     0     0
            ada2                   ONLINE       0     0     0
            ada3                   ONLINE       0     0     0
            ada6                   ONLINE       0     0     0
            ada5                   ONLINE       0     0     0
            ada4                   ONLINE       0     0     0
            ada7                   ONLINE       0     0     0
            ada8                   ONLINE       0     0     0
            ada9                   ONLINE       0     0     0
            ada10                  ONLINE       0     0     0
            ada11                  ONLINE       0     0     0

The disks in the media pool are all Seagate BarraCuda ST5000LM000 5 TB drives. From geom:
Code:
   Mediasize: 5000981078016 (4.5T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   descr: ST5000LM000-2AN170
   rotationrate: 5400
   fwsectors: 63
   fwheads: 16

ZFS config for media:
Code:
~# zfs get all media
NAME   PROPERTY              VALUE                  SOURCE
media  type                  filesystem             -
media  creation              Wed May 25 20:54 2022  -
media  used                  17.8T                  -
media  available             23.7T                  -
media  referenced            17.7T                  -
media  compressratio         1.00x                  -
media  mounted               yes                    -
media  quota                 none                   default
media  reservation           none                   default
media  recordsize            1M                     local
media  mountpoint            /media                 default
media  sharenfs              off                    default
media  checksum              on                     default
media  compression           lz4                    local
media  atime                 off                    local
media  devices               on                     default
media  exec                  on                     default
media  setuid                on                     default
media  readonly              off                    default
media  jailed                off                    default
media  snapdir               hidden                 default
media  aclmode               discard                default
media  aclinherit            restricted             default
media  createtxg             1                      -
media  canmount              on                     default
media  xattr                 sa                     local
media  copies                1                      default
media  version               5                      -
media  utf8only              off                    -
media  normalization         none                   -
media  casesensitivity       sensitive              -
media  vscan                 off                    default
media  nbmand                off                    default
media  sharesmb              off                    default
media  refquota              none                   default
media  refreservation        none                   default
media  guid                  11565327117367880975   -
media  primarycache          all                    default
media  secondarycache        all                    default
media  usedbysnapshots       0B                     -
media  usedbydataset         17.7T                  -
media  usedbychildren        23.3G                  -
media  usedbyrefreservation  0B                     -
media  logbias               throughput             local
media  objsetid              54                     -
media  dedup                 off                    local
media  mlslabel              none                   default
media  sync                  standard               default
media  dnodesize             legacy                 default
media  refcompressratio      1.00x                  -
media  written               17.7T                  -
media  logicalused           19.3T                  -
media  logicalreferenced     19.3T                  -
media  volmode               default                default
media  filesystem_limit      none                   default
media  snapshot_limit        none                   default
media  filesystem_count      none                   default
media  snapshot_count        none                   default
media  snapdev               hidden                 default
media  acltype               nfsv4                  default
media  context               none                   default
media  fscontext             none                   default
media  defcontext            none                   default
media  rootcontext           none                   default
media  relatime              off                    default
media  redundant_metadata    all                    default
media  overlay               on                     default
media  encryption            off                    default
media  keylocation           none                   default
media  keyformat             none                   default
media  pbkdf2iters           0                      default
media  special_small_blocks  0                      default

And finally from sysctl vfs.zfs:
Code:
vfs.zfs.abd_scatter_enabled: 1
vfs.zfs.abd_scatter_min_size: 4097
vfs.zfs.allow_redacted_dataset_mount: 0
vfs.zfs.anon_data_esize: 0
vfs.zfs.anon_metadata_esize: 0
vfs.zfs.anon_size: 9631232
vfs.zfs.arc.average_blocksize: 8192
vfs.zfs.arc.dnode_limit: 0
vfs.zfs.arc.dnode_limit_percent: 10
vfs.zfs.arc.dnode_reduce_percent: 10
vfs.zfs.arc.evict_batch_limit: 10
vfs.zfs.arc.eviction_pct: 200
vfs.zfs.arc.grow_retry: 0
vfs.zfs.arc.lotsfree_percent: 10
vfs.zfs.arc.max: 0
vfs.zfs.arc.meta_adjust_restarts: 4096
vfs.zfs.arc.meta_limit: 0
vfs.zfs.arc.meta_limit_percent: 75
vfs.zfs.arc.meta_min: 0
vfs.zfs.arc.meta_prune: 10000
vfs.zfs.arc.meta_strategy: 1
vfs.zfs.arc.min: 0
vfs.zfs.arc.min_prefetch_ms: 0
vfs.zfs.arc.min_prescient_prefetch_ms: 0
vfs.zfs.arc.p_dampener_disable: 1
vfs.zfs.arc.p_min_shift: 0
vfs.zfs.arc.pc_percent: 0
vfs.zfs.arc.prune_task_threads: 1
vfs.zfs.arc.shrink_shift: 0
vfs.zfs.arc.sys_free: 0
vfs.zfs.arc_free_target: 170742
vfs.zfs.arc_max: 0
vfs.zfs.arc_min: 0
vfs.zfs.arc_no_grow_shift: 5
vfs.zfs.async_block_max_blocks: 18446744073709551615
vfs.zfs.autoimport_disable: 1
vfs.zfs.ccw_retry_interval: 300
vfs.zfs.checksum_events_per_second: 20
vfs.zfs.commit_timeout_pct: 5
vfs.zfs.compressed_arc_enabled: 1
vfs.zfs.condense.indirect_commit_entry_delay_ms: 0
vfs.zfs.condense.indirect_obsolete_pct: 25
vfs.zfs.condense.indirect_vdevs_enable: 1
vfs.zfs.condense.max_obsolete_bytes: 1073741824
vfs.zfs.condense.min_mapping_bytes: 131072
vfs.zfs.condense_pct: 200
vfs.zfs.crypt_sessions: 0
vfs.zfs.dbgmsg_enable: 1
vfs.zfs.dbgmsg_maxsize: 4194304
vfs.zfs.dbuf.cache_shift: 5
vfs.zfs.dbuf.metadata_cache_max_bytes: 18446744073709551615
vfs.zfs.dbuf.metadata_cache_shift: 6
vfs.zfs.dbuf_cache.hiwater_pct: 10
vfs.zfs.dbuf_cache.lowater_pct: 10
vfs.zfs.dbuf_cache.max_bytes: 18446744073709551615
vfs.zfs.dbuf_state_index: 0
vfs.zfs.ddt_data_is_special: 1
vfs.zfs.deadman.checktime_ms: 60000
vfs.zfs.deadman.enabled: 1
vfs.zfs.deadman.failmode: wait
vfs.zfs.deadman.synctime_ms: 600000
vfs.zfs.deadman.ziotime_ms: 300000
vfs.zfs.debug: 0
vfs.zfs.debugflags: 0
vfs.zfs.dedup.prefetch: 0
vfs.zfs.default_bs: 9
vfs.zfs.default_ibs: 17
vfs.zfs.delay_min_dirty_percent: 60
vfs.zfs.delay_scale: 500000
vfs.zfs.dirty_data_max: 8421990400
vfs.zfs.dirty_data_max_max: 17179869184
vfs.zfs.dirty_data_max_max_percent: 25
vfs.zfs.dirty_data_max_percent: 25
vfs.zfs.dirty_data_sync_percent: 20
vfs.zfs.disable_ivset_guid_check: 0
vfs.zfs.dmu_object_alloc_chunk_shift: 7
vfs.zfs.dmu_offset_next_sync: 0
vfs.zfs.dmu_prefetch_max: 134217728
vfs.zfs.dtl_sm_blksz: 4096
vfs.zfs.embedded_slog_min_ms: 64
vfs.zfs.flags: 0
vfs.zfs.fletcher_4_impl: [fastest] scalar superscalar superscalar4 sse2 ssse3 avx2
vfs.zfs.free_bpobj_enabled: 1
vfs.zfs.free_leak_on_eio: 0
vfs.zfs.free_min_time_ms: 1000
vfs.zfs.history_output_max: 1048576
vfs.zfs.immediate_write_sz: 32768
vfs.zfs.initialize_chunk_size: 1048576
vfs.zfs.initialize_value: 16045690984833335022
vfs.zfs.keep_log_spacemaps_at_export: 0
vfs.zfs.l2arc.feed_again: 1
vfs.zfs.l2arc.feed_min_ms: 100
vfs.zfs.l2arc.feed_secs: 1
vfs.zfs.l2arc.headroom: 2
vfs.zfs.l2arc.headroom_boost: 200
vfs.zfs.l2arc.meta_percent: 33
vfs.zfs.l2arc.mfuonly: 0
vfs.zfs.l2arc.noprefetch: 0
vfs.zfs.l2arc.norw: 0
vfs.zfs.l2arc.rebuild_blocks_min_l2size: 1073741824
vfs.zfs.l2arc.rebuild_enabled: 1
vfs.zfs.l2arc.trim_ahead: 0
vfs.zfs.l2arc.write_boost: 16777216
vfs.zfs.l2arc.write_max: 8388608
vfs.zfs.l2arc_feed_again: 1
vfs.zfs.l2arc_feed_min_ms: 100
vfs.zfs.l2arc_feed_secs: 1
vfs.zfs.l2arc_headroom: 2
vfs.zfs.l2arc_noprefetch: 0
vfs.zfs.l2arc_norw: 0
vfs.zfs.l2arc_write_boost: 16777216
vfs.zfs.l2arc_write_max: 8388608
vfs.zfs.l2c_only_size: 0
vfs.zfs.livelist.condense.new_alloc: 0
vfs.zfs.livelist.condense.sync_cancel: 0
vfs.zfs.livelist.condense.sync_pause: 0
vfs.zfs.livelist.condense.zthr_cancel: 0
vfs.zfs.livelist.condense.zthr_pause: 0
vfs.zfs.livelist.max_entries: 500000
vfs.zfs.livelist.min_percent_shared: 75
vfs.zfs.lua.max_instrlimit: 100000000
vfs.zfs.lua.max_memlimit: 104857600
vfs.zfs.max_async_dedup_frees: 100000
vfs.zfs.max_auto_ashift: 16
vfs.zfs.max_dataset_nesting: 50
vfs.zfs.max_log_walking: 5
vfs.zfs.max_logsm_summary_length: 10
vfs.zfs.max_missing_tvds: 0
vfs.zfs.max_missing_tvds_cachefile: 2
vfs.zfs.max_missing_tvds_scan: 0
vfs.zfs.max_nvlist_src_size: 0
vfs.zfs.max_recordsize: 1048576
vfs.zfs.metaslab.aliquot: 524288
vfs.zfs.metaslab.bias_enabled: 1
vfs.zfs.metaslab.debug_load: 0
vfs.zfs.metaslab.debug_unload: 0
vfs.zfs.metaslab.df_alloc_threshold: 131072
vfs.zfs.metaslab.df_free_pct: 4
vfs.zfs.metaslab.df_max_search: 16777216
vfs.zfs.metaslab.df_use_largest_segment: 0
vfs.zfs.metaslab.find_max_tries: 100
vfs.zfs.metaslab.force_ganging: 16777217
vfs.zfs.metaslab.fragmentation_factor_enabled: 1
vfs.zfs.metaslab.fragmentation_threshold: 70
vfs.zfs.metaslab.lba_weighting_enabled: 1
vfs.zfs.metaslab.load_pct: 50
vfs.zfs.metaslab.max_size_cache_sec: 3600
vfs.zfs.metaslab.mem_limit: 25
vfs.zfs.metaslab.preload_enabled: 1
vfs.zfs.metaslab.preload_limit: 10
vfs.zfs.metaslab.segment_weight_enabled: 1
vfs.zfs.metaslab.sm_blksz_no_log: 16384
vfs.zfs.metaslab.sm_blksz_with_log: 131072
vfs.zfs.metaslab.switch_threshold: 2
vfs.zfs.metaslab.try_hard_before_gang: 0
vfs.zfs.metaslab.unload_delay: 32
vfs.zfs.metaslab.unload_delay_ms: 600000
vfs.zfs.mfu_data_esize: 5370967552
vfs.zfs.mfu_ghost_data_esize: 9675108352
vfs.zfs.mfu_ghost_metadata_esize: 5561164800
vfs.zfs.mfu_ghost_size: 15236273152
vfs.zfs.mfu_metadata_esize: 5232128
vfs.zfs.mfu_size: 7397669376
vfs.zfs.mg.fragmentation_threshold: 95
vfs.zfs.mg.noalloc_threshold: 0
vfs.zfs.min_auto_ashift: 12
vfs.zfs.min_metaslabs_to_flush: 1
vfs.zfs.mru_data_esize: 14734262784
vfs.zfs.mru_ghost_data_esize: 5476918272
vfs.zfs.mru_ghost_metadata_esize: 2524627456
vfs.zfs.mru_ghost_size: 8001545728
vfs.zfs.mru_metadata_esize: 43209728
vfs.zfs.mru_size: 14908360704
vfs.zfs.multihost.fail_intervals: 10
vfs.zfs.multihost.history: 0
vfs.zfs.multihost.import_intervals: 20
vfs.zfs.multihost.interval: 1000
vfs.zfs.multilist_num_sublists: 0
vfs.zfs.no_scrub_io: 0
vfs.zfs.no_scrub_prefetch: 0
vfs.zfs.nocacheflush: 0
vfs.zfs.nopwrite_enabled: 1
vfs.zfs.obsolete_min_time_ms: 500
vfs.zfs.pd_bytes_max: 52428800
vfs.zfs.per_txg_dirty_frees_percent: 5
vfs.zfs.prefetch.array_rd_sz: 1048576
vfs.zfs.prefetch.disable: 0
vfs.zfs.prefetch.max_distance: 8388608
vfs.zfs.prefetch.max_idistance: 67108864
vfs.zfs.prefetch.max_streams: 8
vfs.zfs.prefetch.min_sec_reap: 2
vfs.zfs.read_history: 0
vfs.zfs.read_history_hits: 0
vfs.zfs.rebuild_max_segment: 1048576
vfs.zfs.rebuild_scrub_enabled: 1
vfs.zfs.rebuild_vdev_limit: 33554432
vfs.zfs.reconstruct.indirect_combinations_max: 4096
vfs.zfs.recover: 0
vfs.zfs.recv.queue_ff: 20
vfs.zfs.recv.queue_length: 16777216
vfs.zfs.recv.write_batch_size: 1048576
vfs.zfs.removal_suspend_progress: 0
vfs.zfs.remove_max_segment: 16777216
vfs.zfs.resilver_disable_defer: 0
vfs.zfs.resilver_min_time_ms: 1000
vfs.zfs.scan_checkpoint_intval: 7200
vfs.zfs.scan_fill_weight: 3
vfs.zfs.scan_ignore_errors: 0
vfs.zfs.scan_issue_strategy: 0
vfs.zfs.scan_legacy: 0
vfs.zfs.scan_max_ext_gap: 2097152
vfs.zfs.scan_mem_lim_fact: 20
vfs.zfs.scan_mem_lim_soft_fact: 20
vfs.zfs.scan_strict_mem_lim: 0
vfs.zfs.scan_suspend_progress: 0
vfs.zfs.scan_vdev_limit: 4194304
vfs.zfs.scrub_min_time_ms: 1000
vfs.zfs.send.corrupt_data: 0
vfs.zfs.send.no_prefetch_queue_ff: 20
vfs.zfs.send.no_prefetch_queue_length: 1048576
vfs.zfs.send.override_estimate_recordsize: 0
vfs.zfs.send.queue_ff: 20
vfs.zfs.send.queue_length: 16777216
vfs.zfs.send.unmodified_spill_blocks: 1
vfs.zfs.send_holes_without_birth_time: 1
vfs.zfs.slow_io_events_per_second: 20
vfs.zfs.spa.asize_inflation: 24
vfs.zfs.spa.discard_memory_limit: 16777216
vfs.zfs.spa.load_print_vdev_tree: 0
vfs.zfs.spa.load_verify_data: 1
vfs.zfs.spa.load_verify_metadata: 1
vfs.zfs.spa.load_verify_shift: 4
vfs.zfs.spa.slop_shift: 5
vfs.zfs.space_map_ibs: 14
vfs.zfs.special_class_metadata_reserve_pct: 25
vfs.zfs.standard_sm_blksz: 131072
vfs.zfs.super_owner: 0
vfs.zfs.sync_pass_deferred_free: 2
vfs.zfs.sync_pass_dont_compress: 8
vfs.zfs.sync_pass_rewrite: 2
vfs.zfs.sync_taskq_batch_pct: 75
vfs.zfs.top_maxinflight: 2048
vfs.zfs.traverse_indirect_prefetch_limit: 32
vfs.zfs.trim.extent_bytes_max: 134217728
vfs.zfs.trim.extent_bytes_min: 32768
vfs.zfs.trim.metaslab_skip: 0
vfs.zfs.trim.queue_limit: 10
vfs.zfs.trim.txg_batch: 32
vfs.zfs.txg.history: 100
vfs.zfs.txg.timeout: 5
vfs.zfs.unflushed_log_block_max: 262144
vfs.zfs.unflushed_log_block_min: 1000
vfs.zfs.unflushed_log_block_pct: 400
vfs.zfs.unflushed_max_mem_amt: 1073741824
vfs.zfs.unflushed_max_mem_ppm: 1000
vfs.zfs.user_indirect_is_special: 1
vfs.zfs.validate_skip: 0
vfs.zfs.vdev.aggregate_trim: 0
vfs.zfs.vdev.aggregation_limit: 1048576
vfs.zfs.vdev.aggregation_limit_non_rotating: 131072
vfs.zfs.vdev.async_read_max_active: 3
vfs.zfs.vdev.async_read_min_active: 1
vfs.zfs.vdev.async_write_active_max_dirty_percent: 60
vfs.zfs.vdev.async_write_active_min_dirty_percent: 30
vfs.zfs.vdev.async_write_max_active: 10
vfs.zfs.vdev.async_write_min_active: 2
vfs.zfs.vdev.bio_delete_disable: 0
vfs.zfs.vdev.bio_flush_disable: 0
vfs.zfs.vdev.cache_bshift: 16
vfs.zfs.vdev.cache_max: 16384
vfs.zfs.vdev.cache_size: 0
vfs.zfs.vdev.def_queue_depth: 32
vfs.zfs.vdev.default_ms_count: 200
vfs.zfs.vdev.default_ms_shift: 29
vfs.zfs.vdev.file.logical_ashift: 9
vfs.zfs.vdev.file.physical_ashift: 9
vfs.zfs.vdev.initializing_max_active: 1
vfs.zfs.vdev.initializing_min_active: 1
vfs.zfs.vdev.max_active: 2048
vfs.zfs.vdev.max_auto_ashift: 16
vfs.zfs.vdev.min_auto_ashift: 12
vfs.zfs.vdev.min_ms_count: 16
vfs.zfs.vdev.mirror.non_rotating_inc: 0
vfs.zfs.vdev.mirror.non_rotating_seek_inc: 1
vfs.zfs.vdev.mirror.rotating_inc: 0
vfs.zfs.vdev.mirror.rotating_seek_inc: 5
vfs.zfs.vdev.mirror.rotating_seek_offset: 1048576
vfs.zfs.vdev.ms_count_limit: 131072
vfs.zfs.vdev.nia_credit: 5
vfs.zfs.vdev.nia_delay: 5
vfs.zfs.vdev.queue_depth_pct: 1000
vfs.zfs.vdev.read_gap_limit: 32768
vfs.zfs.vdev.rebuild_max_active: 3
vfs.zfs.vdev.rebuild_min_active: 1
vfs.zfs.vdev.removal_ignore_errors: 0
vfs.zfs.vdev.removal_max_active: 2
vfs.zfs.vdev.removal_max_span: 32768
vfs.zfs.vdev.removal_min_active: 1
vfs.zfs.vdev.removal_suspend_progress: 0
vfs.zfs.vdev.remove_max_segment: 16777216
vfs.zfs.vdev.scrub_max_active: 3
vfs.zfs.vdev.scrub_min_active: 1
vfs.zfs.vdev.sync_read_max_active: 10
vfs.zfs.vdev.sync_read_min_active: 10
vfs.zfs.vdev.sync_write_max_active: 10
vfs.zfs.vdev.sync_write_min_active: 10
vfs.zfs.vdev.trim_max_active: 2
vfs.zfs.vdev.trim_min_active: 1
vfs.zfs.vdev.validate_skip: 0
vfs.zfs.vdev.write_gap_limit: 4096
vfs.zfs.version.acl: 1
vfs.zfs.version.ioctl: 15
vfs.zfs.version.module: 2.1.4-FreeBSD_g52bad4f23
vfs.zfs.version.spa: 5000
vfs.zfs.version.zpl: 5
vfs.zfs.vnops.read_chunk_size: 1048576
vfs.zfs.vol.mode: 1
vfs.zfs.vol.recursive: 0
vfs.zfs.vol.unmap_enabled: 1
vfs.zfs.zap_iterate_prefetch: 1
vfs.zfs.zevent.len_max: 512
vfs.zfs.zevent.retain_expire_secs: 900
vfs.zfs.zevent.retain_max: 2000
vfs.zfs.zfetch.max_distance: 8388608
vfs.zfs.zfetch.max_idistance: 67108864
vfs.zfs.zil.clean_taskq_maxalloc: 1048576
vfs.zfs.zil.clean_taskq_minalloc: 1024
vfs.zfs.zil.clean_taskq_nthr_pct: 100
vfs.zfs.zil.maxblocksize: 131072
vfs.zfs.zil.nocacheflush: 0
vfs.zfs.zil.replay_disable: 0
vfs.zfs.zil.slog_bulk: 786432
vfs.zfs.zio.deadman_log_all: 0
vfs.zfs.zio.dva_throttle_enabled: 1
vfs.zfs.zio.exclude_metadata: 0
vfs.zfs.zio.requeue_io_start_cut_in_line: 1
vfs.zfs.zio.slow_io_ms: 30000
vfs.zfs.zio.taskq_batch_pct: 80
vfs.zfs.zio.taskq_batch_tpq: 0
vfs.zfs.zio.use_uma: 1

Let me know if there's anything else I need to add!
 
Interesting problem to have :) I don't have much experience with this specific workload, but in general the ARC and L2ARC help with read speeds, especially with the L2ARC on a separate fast device.
If the files are big, I think setting ashift greater than 12 to get a larger block size could make a difference, but that would require recreating the pool.
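If you're curious what the pool was built with before you think about recreating it, something along these lines should show the ashift each vdev is using, and adding an L2ARC cache device later is a one-liner (the device name here is just a placeholder, not something from your output):
Code:
# Show the ashift the existing vdevs were created with (12 would match the
# 4K stripesize your geom output shows):
zdb -C media | grep ashift

# Adding a fast SSD as L2ARC later would look like this; "nvd0" is only a
# placeholder for whatever the real cache device turns out to be:
zpool add media cache nvd0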
 
In case it helps, I also have the following from zfs-stats:

Code:
------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Sep 19 17:22:33 2022
------------------------------------------------------------------------

System Information:

        Kernel Version:                         1301000 (osreldate)
        Hardware Platform:                      amd64
        Processor Architecture:                 amd64

        ZFS Storage pool Version:               5000
        ZFS Filesystem Version:                 5

FreeBSD 13.1-RELEASE releng/13.1-n250148-fc952ac2212 GENERIC 5:22PM  up 2 days, 21:01, 1 user, load averages: 0.40, 0.54, 0.49

------------------------------------------------------------------------

System Memory:

        0.96%   301.37  MiB Active,     9.32%   2.85    GiB Inact
        86.96%  26.58   GiB Wired,      0.00%   0       Bytes Cache
        2.64%   825.71  MiB Free,       0.13%   39.16   MiB Gap

        Real Installed:                         32.00   GiB
        Real Available:                 98.04%  31.37   GiB
        Real Managed:                   97.42%  30.56   GiB

        Logical Total:                          32.00   GiB
        Logical Used:                   88.58%  28.35   GiB
        Logical Free:                   11.42%  3.65    GiB

Kernel Memory:                                  816.79  MiB
        Data:                           95.19%  777.49  MiB
        Text:                           4.81%   39.29   MiB

Kernel Memory Map:                              30.56   GiB
        Size:                           82.91%  25.34   GiB
        Free:                           17.09%  5.22    GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                1.13    m
        Mutex Misses:                           188
        Evict Skips:                            160

ARC Size:                               70.67%  21.47   GiB
        Target Size: (Adaptive)         70.78%  21.50   GiB
        Min Size (Hard Limit):          3.23%   1003.98 MiB
        Max Size (High Water):          30:1    30.37   GiB
        Compressed Data Size:                   18.91   GiB
        Decompressed Data Size:                 25.58   GiB
        Compression Factor:                     1.35

ARC Size Breakdown:
        Recently Used Cache Size:       76.01%  16.34   GiB
        Frequently Used Cache Size:     23.99%  5.16    GiB

ARC Hash Breakdown:
        Elements Max:                           681.91  k
        Elements Current:               99.43%  678.01  k
        Collisions:                             498.24  k
        Chain Max:                              4
        Chains:                                 49.49   k

------------------------------------------------------------------------

ARC Efficiency:                                 235.07  m
        Cache Hit Ratio:                99.24%  233.28  m
        Cache Miss Ratio:               0.76%   1.79    m
        Actual Hit Ratio:               99.24%  233.27  m

        Data Demand Efficiency:         99.76%  113.59  m
        Data Prefetch Efficiency:       0.22%   637.14  k

        CACHE HITS BY CACHE LIST:
          Most Recently Used:           43.20%  100.77  m
          Most Frequently Used:         56.80%  132.50  m
          Most Recently Used Ghost:     0.06%   147.16  k
          Most Frequently Used Ghost:   0.09%   205.71  k

        CACHE HITS BY DATA TYPE:
          Demand Data:                  48.58%  113.33  m
          Prefetch Data:                0.00%   1.40    k
          Demand Metadata:              51.39%  119.88  m
          Prefetch Metadata:            0.03%   75.51   k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  14.94%  267.98  k
          Prefetch Data:                35.45%  635.74  k
          Demand Metadata:              42.77%  767.08  k
          Prefetch Metadata:            6.84%   122.66  k

------------------------------------------------------------------------

L2ARC is disabled

------------------------------------------------------------------------

Dataset statistics for: media

        Reads:          99.99%  102.01  m
        Writes:         0.01%   10.10   k
        Unlinks:        0.00%   4.41    k

        Bytes read:     98.59%  646.98  b
        Bytes written:  1.41%   9.26    b

------------------------------------------------------------------------

File-Level Prefetch:

DMU Efficiency:                                 6.92    m
        Hit Ratio:                      28.91%  2.00    m
        Miss Ratio:                     71.09%  4.92    m

------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------
 
Large files (and 4K video files in particular tend to be quite large) aren't cached. I'm not sure where the cut-off point is, but above a certain size files just aren't cached. Caching them would usually be pointless anyway: a single 100 GB file would fill the entire cache and constantly evict everything else.
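If you want to see whether those streaming reads are actually being served from the ARC, sampling the demand-data counters before and after playing a big file is a quick check (kstat names as I remember them on 13.x):
Code:
# Snapshot the demand-data hit/miss counters, play a large file over the
# network, then run the same command again and compare the deltas:
sysctl kstat.zfs.misc.arcstats.demand_data_hits \
       kstat.zfs.misc.arcstats.demand_data_misses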

My experience with FreeBSD tells me not to tweak or tune anything unless I'm actually having problems. Don't tune for the sake of tuning; most of the time the system is quite good at tuning itself.
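And if you want to rule out the pool itself before touching the network side, a plain local sequential read is usually test enough; the path below is just an example, pick any big file that isn't already sitting in the ARC:
Code:
# Read a large file straight off the pool and let dd report throughput.
# bs=1m keeps the reads sequential in big chunks; status=progress shows a
# running rate on FreeBSD 12 and later.
dd if=/media/some-large-file.mkv of=/dev/null bs=1m status=progress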
 