ZFS Extend swap space

Hi mates!
How can I grow the swap partition on a ZFS-on-root (zroot) system?
/etc/fstab
Code:
root@tank:/home # cat /etc/fstab
# Device        Mountpoint    FStype    Options        Dump    Pass#
/dev/ada0p1        /boot/efi    msdosfs    rw        2    2
/dev/ada0p3        none        swap    sw        0    0
df -h
Code:
root@tank:/home # df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default    894G    7.8G    886G     1%    /
devfs                 1.0K    1.0K      0B   100%    /dev
/dev/ada0p1           260M    1.8M    258M     1%    /boot/efi
storage               7.1T    3.1T    4.0T    44%    /storage
zroot/var/mail        886G    616K    886G     0%    /var/mail
zroot/var/tmp         886G    776K    886G     0%    /var/tmp
zroot/var/log         886G    7.5M    886G     0%    /var/log
zroot/var/crash       886G     96K    886G     0%    /var/crash
zroot/var/audit       886G     96K    886G     0%    /var/audit
zroot/usr/src         887G    1.0G    886G     0%    /usr/src
zroot/tmp             886G    236K    886G     0%    /tmp
zroot                 886G     96K    886G     0%    /zroot
zroot/usr/ports       890G    4.3G    886G     0%    /usr/ports
zroot/usr/home        886G     37M    886G     0%    /usr/home
I need to shrink the zroot pool and grow the swap partition /dev/ada0p3.
 
You can enable several swap devices, even at the same time.
So if your primary goal is to increase swap size, just enable a second swap device anywhere.
It is even possible to put swap in a file located on any partition with enough free space.
Handbook article: 13.12.2. Creating a Swap File
or
12.15.3. Swapfiles

Related manpages: swapinfo(8), swapon(8), swapoff(8)
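
If you go the swap-file route, here is a minimal sketch along the lines of the Handbook (the 4 GB size and the path /usr/swap0 are only example choices):
Code:
# dd if=/dev/zero of=/usr/swap0 bs=1m count=4096
# chmod 0600 /usr/swap0
# echo 'md none swap sw,file=/usr/swap0,late 0 0' >> /etc/fstab
# swapon -aL
Keep in mind that on this machine such a file would live on ZFS, so the swap-on-ZFS caveats discussed further down apply to it as well.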
 
No need to reduce space on the pool. You can create a volume (zvol) on your current pool and use it in your /etc/fstab.
zfs create -V 4G zroot/swap
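
A fuller sketch, with the property settings often recommended for swap zvols (the 4G size is arbitrary; org.freebsd:swap=on should let rc.d/zvol activate the device at boot):
Code:
# zfs create -V 4G -o org.freebsd:swap=on -o checksum=off \
    -o compression=off -o primarycache=none zroot/swap
# swapon /dev/zvol/zroot/swap
Alternatively, list /dev/zvol/zroot/swap as a sw entry in /etc/fstab. Note the caveat about swapping into ZFS in the next reply.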
 
Conventional wisdom has been that you should not swap into a ZFS file system. Because of the way ZFS demands memory, the system can get into a deadly embrace when memory is exhausted: paging out to ZFS itself needs memory, exactly when there is none to spare.

I know that there's been work done on the problem, and perhaps somebody with better insight into the current state of play will comment. It might have been fixed...

Showing us the output of sudo gpart show would help us enumerate the options.
 
Showing us the output of sudo gpart show would help us enumerate the options.
Here
Code:
root@tank:/ # gpart show
=>        40  1953525088  ada0  GPT  (932G)
          40      532480     1  efi  (260M)
      532520        1024     2  freebsd-boot  (512K)
      533544         984        - free -  (492K)
      534528     4194304     3  freebsd-swap  (2.0G)
     4728832  1948794880     4  freebsd-zfs  (929G)
  1953523712        1416        - free -  (708K)

=>        40  7814037088  ada1  GPT  (3.6T)
          40        2008        - free -  (1.0M)
        2048  7814035080     1  freebsd-zfs  (3.6T)

=>        40  7814037088  ada2  GPT  (3.6T)
          40        2008        - free -  (1.0M)
        2048  7814035080     1  freebsd-zfs  (3.6T)

=>        40  7814037088  ada3  GPT  (3.6T)
          40        2008        - free -  (1.0M)
        2048  7814035080     1  freebsd-zfs  (3.6T)
 
Conventional wisdom has been that you should not swap into a ZFS file system

Swap should always be considered a last resort, for rare occurrences only. If you find yourself in need of "more swap" because it is used (or even exhausted) on a regular basis, what you REALLY have to increase is RAM (or change your workload).

So as a temporary band-aid, swap on ZFS is fine. Swap will always be many orders of magnitude slower than RAM anyway and can't keep an overwhelmed system in a reasonably usable state, so it hardly matters that ZFS is not the 'best' option for swap; it will still work well enough to keep the system somewhat alive until the OOM killer kicks in.
Under memory pressure ZFS already reduces its RAM usage to a bare minimum, i.e. it shrinks the ARC and tries to keep only metadata, such as allocation tables, in RAM. If you are THAT low on RAM that you can't even keep this minimal set of data alongside your regular application data in memory, the system is simply undersized.

Re the original question: you can't shrink a vdev.
As you didn't disclose your pool layout, I'm assuming from the gpart output that this pool resides on a single vdev; so without another disk there are no options other than moving the pool to another (bigger) drive, or backing it up (zfs send | zfs recv) and restoring to a smaller pool on the same drive.
My suggestion, if physical space allows it (and the system is already maxed out on RAM!): add a second, bigger disk for the root pool; create efi, boot, swap and zfs partitions (the zfs partition the same size as or bigger than the current vdev) and zpool attach it to the vdev in your root pool. After resilvering you can safely zpool detach the original vdev and either replace that disk with a second new drive, or keep the system running on that single vdev.
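
Roughly, assuming the new disk shows up as ada4 (a hypothetical name) and mirroring the ada0 layout with a bigger swap partition:
Code:
# gpart create -s gpt ada4
# gpart add -t efi -s 260m ada4
# gpart add -t freebsd-boot -s 512k ada4
# gpart add -t freebsd-swap -s 16g -a 1m ada4
# gpart add -t freebsd-zfs -a 1m ada4
# zpool attach zroot ada0p4 ada4p4
(wait for the resilver to finish; watch zpool status zroot)
# zpool detach zroot ada0p4
Also make the new disk bootable before detaching the old one: copy the ESP contents to ada4p1 and, for legacy boot, run gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada4.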
 
Here
Code:
root@tank:/ # gpart show
=>        40  1953525088  ada0  GPT  (932G)
          40      532480     1  efi  (260M)
      532520        1024     2  freebsd-boot  (512K)
      533544         984        - free -  (492K)
      534528     4194304     3  freebsd-swap  (2.0G)
     4728832  1948794880     4  freebsd-zfs  (929G)
  1953523712        1416        - free -  (708K)

=>        40  7814037088  ada1  GPT  (3.6T)
          40        2008        - free -  (1.0M)
        2048  7814035080     1  freebsd-zfs  (3.6T)

=>        40  7814037088  ada2  GPT  (3.6T)
          40        2008        - free -  (1.0M)
        2048  7814035080     1  freebsd-zfs  (3.6T)

=>        40  7814037088  ada3  GPT  (3.6T)
          40        2008        - free -  (1.0M)
        2048  7814035080     1  freebsd-zfs  (3.6T)
You cannot resize the partitions on ada0 to get more swap space: the swap partition sits directly in front of the freebsd-zfs partition, and that partition cannot be shrunk.

It looks like you did a standard installation on ada0 with ZFS and afterwards added a new pool, storage (mounted at /storage), using ada1, ada2 and ada3.

If that is the case, one option would be to create a backup dataset in storage and copy all user data, like /usr/local/* and /home/*, from zroot to that backup location.
Then export storage,
reinstall FreeBSD on ada0 with whatever size swap partition you want, and import the pool storage again.

Just make sure you know which disk is ada0 ;-)
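
A rough sketch of that procedure (the snapshot and dataset names are just examples):
Code:
# zfs snapshot -r zroot@migrate
# zfs send -R zroot@migrate | zfs receive -u storage/zroot-backup
# zpool export storage
(reinstall FreeBSD on ada0 with the bigger swap partition)
# zpool import storage
After the reinstall, copy the user data back from storage/zroot-backup with another zfs send | zfs receive, or with plain cp/rsync.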
 
reinstall FreeBSD
I need to check other solutions first :)
Then maybe I should tune the RAM settings for my ZFS pools? At the moment I have 32 GB of RAM, an 8 TB storage pool and a 1 TB zroot. I haven't changed any of the RAM-related ZFS settings.
zfs-stats
Code:
------------------------------------------------------------------------
ZFS Subsystem Report                Tue Mar 14 16:46:29 2023
------------------------------------------------------------------------

System Information:

    Kernel Version:                1301000 (osreldate)
    Hardware Platform:            amd64
    Processor Architecture:            amd64

    ZFS Storage pool Version:        5000
    ZFS Filesystem Version:            5

FreeBSD 13.1-RELEASE-p7 releng/13.1-00935d2e5 tank_kernell 4:46PM  up 15 days, 23:59, 1 user, load averages: 0.91, 0.74, 0.66

------------------------------------------------------------------------

System Memory:

    0.33%    104.77    MiB Active,    4.97%    1.54    GiB Inact
    91.12%    28.21    GiB Wired,    0.00%    0    Bytes Cache
    2.73%    864.33    MiB Free,    0.85%    268.58    MiB Gap

    Real Installed:                32.00    GiB
    Real Available:            99.47%    31.83    GiB
    Real Managed:            97.28%    30.96    GiB

    Logical Total:                32.00    GiB
    Logical Used:            92.55%    29.62    GiB
    Logical Free:            7.45%    2.38    GiB

Kernel Memory:                    701.66    MiB
    Data:                96.81%    679.25    MiB
    Text:                3.19%    22.41    MiB

Kernel Memory Map:                30.96    GiB
    Size:                90.67%    28.07    GiB
    Free:                9.33%    2.89    GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
    Memory Throttle Count:            0

ARC Misc:
    Deleted:                128.86    m
    Mutex Misses:                15.16    k
    Evict Skips:                7.46    k

ARC Size:                81.89%    25.25    GiB
    Target Size: (Adaptive)        81.91%    25.25    GiB
    Min Size (Hard Limit):        3.23%    1018.54    MiB
    Max Size (High Water):        30:1    30.83    GiB
    Compressed Data Size:            22.85    GiB
    Decompressed Data Size:            23.47    GiB
    Compression Factor:            1.03

ARC Size Breakdown:
    Recently Used Cache Size:    57.83%    14.60    GiB
    Frequently Used Cache Size:    42.17%    10.65    GiB

ARC Hash Breakdown:
    Elements Max:                397.04    k
    Elements Current:        48.23%    191.50    k
    Collisions:                9.20    m
    Chain Max:                4
    Chains:                    4.28    k

------------------------------------------------------------------------

ARC Efficiency:                    1.38    b
    Cache Hit Ratio:        90.43%    1.24    b
    Cache Miss Ratio:        9.57%    131.76    m
    Actual Hit Ratio:        90.30%    1.24    b

    Data Demand Efficiency:        95.81%    847.60    m
    Data Prefetch Efficiency:    3.15%    94.81    m

    CACHE HITS BY CACHE LIST:
      Most Recently Used:        41.05%    510.97    m
      Most Frequently Used:        58.81%    732.16    m
      Most Recently Used Ghost:    0.35%    4.36    m
      Most Frequently Used Ghost:    0.43%    5.35    m

    CACHE HITS BY DATA TYPE:
      Demand Data:            65.24%    812.11    m
      Prefetch Data:        0.24%    2.99    m
      Demand Metadata:        34.50%    429.52    m
      Prefetch Metadata:        0.02%    257.57    k

    CACHE MISSES BY DATA TYPE:
      Demand Data:            26.93%    35.49    m
      Prefetch Data:        69.69%    91.83    m
      Demand Metadata:        3.01%    3.97    m
      Prefetch Metadata:        0.36%    477.58    k

------------------------------------------------------------------------

L2ARC is disabled

------------------------------------------------------------------------

Dataset statistics for:    storage

    Reads:        99.45%    785.67    m
    Writes:        0.54%    4.25    m
    Unlinks:    0.02%    126.31    k

    Bytes read:    94.03%    13.79    t
    Bytes written:    5.97%    875.51    b

Dataset statistics for:    zroot/ROOT/default

    Reads:        31.13%    23.67    m
    Writes:        64.48%    49.04    m
    Unlinks:    4.39%    3.34    m

    Bytes read:    22.84%    128.52    b
    Bytes written:    77.16%    434.25    b

Dataset statistics for:    zroot/tmp

    Reads:        32.13%    9.31    k
    Writes:        49.80%    14.43    k
    Unlinks:    18.07%    5.24    k

    Bytes read:    44.30%    47.75    m
    Bytes written:    55.70%    60.02    m

Dataset statistics for:    zroot/usr/home

    Reads:        10.77%    3.93    k
    Writes:        88.62%    32.29    k
    Unlinks:    0.60%    220

    Bytes read:    43.45%    508.47    m
    Bytes written:    56.55%    661.66    m

Dataset statistics for:    zroot/usr/ports

    Reads:        66.09%    2.10    m
    Writes:        26.16%    829.46    k
    Unlinks:    7.75%    245.66    k

    Bytes read:    54.48%    9.52    b
    Bytes written:    45.52%    7.96    b

Dataset statistics for:    zroot/usr/src

    Reads:        98.05%    755
    Writes:        0.65%    5
    Unlinks:    1.30%    10

    Bytes read:    99.97%    1.51    m
    Bytes written:    0.03%    470

Dataset statistics for:    zroot/var/log

    Reads:        0.93%    6.28    k
    Writes:        99.03%    669.20    k
    Unlinks:    0.04%    266

    Bytes read:    55.15%    134.69    m
    Bytes written:    44.85%    109.54    m

Dataset statistics for:    zroot/var/mail

    Reads:        6.86%    7
    Writes:        63.73%    65
    Unlinks:    29.41%    30

    Bytes read:    35.85%    27.92    k
    Bytes written:    64.15%    49.96    k

Dataset statistics for:    zroot/var/tmp

    Reads:        56.01%    28.19    k
    Writes:        43.79%    22.04    k
    Unlinks:    0.20%    100

    Bytes read:    53.43%    33.57    m
    Bytes written:    46.57%    29.26    m


------------------------------------------------------------------------

File-Level Prefetch:

DMU Efficiency:                    143.40    m
    Hit Ratio:            30.77%    44.13    m
    Miss Ratio:            69.23%    99.28    m

------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------

ZFS Tunables (sysctl):
    kern.maxusers                           2373
    vm.kmem_size                            33245634560
    vm.kmem_size_scale                      1
    vm.kmem_size_min                        0
    vm.kmem_size_max                        1319413950874
    vfs.zfs.sync_pass_rewrite               2
    vfs.zfs.sync_pass_dont_compress         8
    vfs.zfs.sync_pass_deferred_free         2
    vfs.zfs.commit_timeout_pct              5
    vfs.zfs.history_output_max              1048576
    vfs.zfs.max_nvlist_src_size             0
    vfs.zfs.dbgmsg_maxsize                  4194304
    vfs.zfs.dbgmsg_enable                   1
    vfs.zfs.zap_iterate_prefetch            1
    vfs.zfs.rebuild_scrub_enabled           1
    vfs.zfs.rebuild_vdev_limit              33554432
    vfs.zfs.rebuild_max_segment             1048576
    vfs.zfs.initialize_chunk_size           1048576
    vfs.zfs.initialize_value                -2401053088876216594
    vfs.zfs.embedded_slog_min_ms            64
    vfs.zfs.nocacheflush                    0
    vfs.zfs.scan_ignore_errors              0
    vfs.zfs.checksum_events_per_second      20
    vfs.zfs.slow_io_events_per_second       20
    vfs.zfs.read_history_hits               0
    vfs.zfs.read_history                    0
    vfs.zfs.special_class_metadata_reserve_pct 25
    vfs.zfs.user_indirect_is_special        1
    vfs.zfs.ddt_data_is_special             1
    vfs.zfs.free_leak_on_eio                0
    vfs.zfs.recover                         0
    vfs.zfs.flags                           0
    vfs.zfs.keep_log_spacemaps_at_export    0
    vfs.zfs.min_metaslabs_to_flush          1
    vfs.zfs.max_logsm_summary_length        10
    vfs.zfs.max_log_walking                 5
    vfs.zfs.unflushed_log_block_pct         400
    vfs.zfs.unflushed_log_block_min         1000
    vfs.zfs.unflushed_log_block_max         262144
    vfs.zfs.unflushed_max_mem_ppm           1000
    vfs.zfs.unflushed_max_mem_amt           1073741824
    vfs.zfs.autoimport_disable              1
    vfs.zfs.max_missing_tvds                0
    vfs.zfs.multilist_num_sublists          0
    vfs.zfs.resilver_disable_defer          0
    vfs.zfs.scan_fill_weight                3
    vfs.zfs.scan_strict_mem_lim             0
    vfs.zfs.scan_mem_lim_soft_fact          20
    vfs.zfs.scan_max_ext_gap                2097152
    vfs.zfs.scan_checkpoint_intval          7200
    vfs.zfs.scan_legacy                     0
    vfs.zfs.scan_issue_strategy             0
    vfs.zfs.scan_mem_lim_fact               20
    vfs.zfs.free_bpobj_enabled              1
    vfs.zfs.max_async_dedup_frees           100000
    vfs.zfs.async_block_max_blocks          -1
    vfs.zfs.no_scrub_prefetch               0
    vfs.zfs.no_scrub_io                     0
    vfs.zfs.scan_suspend_progress           0
    vfs.zfs.resilver_min_time_ms            3000
    vfs.zfs.free_min_time_ms                1000
    vfs.zfs.obsolete_min_time_ms            500
    vfs.zfs.scrub_min_time_ms               1000
    vfs.zfs.scan_vdev_limit                 4194304
    vfs.zfs.sync_taskq_batch_pct            75
    vfs.zfs.delay_scale                     500000
    vfs.zfs.dirty_data_sync_percent         20
    vfs.zfs.dirty_data_max_max              4294967296
    vfs.zfs.dirty_data_max                  3417651609
    vfs.zfs.delay_min_dirty_percent         60
    vfs.zfs.dirty_data_max_max_percent      25
    vfs.zfs.dirty_data_max_percent          10
    vfs.zfs.disable_ivset_guid_check        0
    vfs.zfs.allow_redacted_dataset_mount    0
    vfs.zfs.max_recordsize                  1048576
    vfs.zfs.send_holes_without_birth_time   1
    vfs.zfs.traverse_indirect_prefetch_limit 32
    vfs.zfs.pd_bytes_max                    52428800
    vfs.zfs.dmu_object_alloc_chunk_shift    7
    vfs.zfs.dmu_prefetch_max                134217728
    vfs.zfs.dmu_offset_next_sync            0
    vfs.zfs.per_txg_dirty_frees_percent     5
    vfs.zfs.nopwrite_enabled                1
    vfs.zfs.dbuf_state_index                0
    vfs.zfs.arc_free_target                 172966
    vfs.zfs.compressed_arc_enabled          1
    vfs.zfs.max_dataset_nesting             50
    vfs.zfs.fletcher_4_impl                 [fastest] scalar superscalar superscalar4 sse2 ssse3 avx2
    vfs.zfs.vol.unmap_enabled               1
    vfs.zfs.vol.recursive                   0
    vfs.zfs.vol.mode                        1
    vfs.zfs.version.zpl                     5
    vfs.zfs.version.spa                     5000
    vfs.zfs.version.acl                     1
    vfs.zfs.version.module                  2.1.4-FreeBSD_g52bad4f23
    vfs.zfs.version.ioctl                   15
    vfs.zfs.debug                           0
    vfs.zfs.super_owner                     0
    vfs.zfs.immediate_write_sz              32768
    vfs.zfs.top_maxinflight                 1000
    vfs.zfs.validate_skip                   0
    vfs.zfs.standard_sm_blksz               131072
    vfs.zfs.dtl_sm_blksz                    4096
    vfs.zfs.max_auto_ashift                 16
    vfs.zfs.min_auto_ashift                 12
    vfs.zfs.space_map_ibs                   14
    vfs.zfs.debugflags                      0
    vfs.zfs.max_missing_tvds_scan           0
    vfs.zfs.max_missing_tvds_cachefile      2
    vfs.zfs.ccw_retry_interval              300
    vfs.zfs.removal_suspend_progress        0
    vfs.zfs.remove_max_segment              16777216
    vfs.zfs.condense_pct                    200
    vfs.zfs.default_ibs                     17
    vfs.zfs.default_bs                      9
    vfs.zfs.zfetch.max_idistance            67108864
    vfs.zfs.zfetch.max_distance             8388608
    vfs.zfs.arc_max                         0
    vfs.zfs.arc_min                         0
    vfs.zfs.arc_no_grow_shift               5
    vfs.zfs.l2c_only_size                   0
    vfs.zfs.mfu_ghost_data_esize            10203162112
    vfs.zfs.mfu_ghost_metadata_esize        3488372224
    vfs.zfs.mfu_ghost_size                  13691534336
    vfs.zfs.mfu_data_esize                  11065179648
    vfs.zfs.mfu_metadata_esize              4865536
    vfs.zfs.mfu_size                        12278049792
    vfs.zfs.mru_ghost_data_esize            12991088640
    vfs.zfs.mru_ghost_metadata_esize        427904000
    vfs.zfs.mru_ghost_size                  13418992640
    vfs.zfs.mru_data_esize                  12532664832
    vfs.zfs.mru_metadata_esize              3477504
    vfs.zfs.mru_size                        13696397312
    vfs.zfs.anon_data_esize                 0
    vfs.zfs.anon_metadata_esize             0
    vfs.zfs.anon_size                       5551104
    vfs.zfs.l2arc_norw                      0
    vfs.zfs.l2arc_feed_again                1
    vfs.zfs.l2arc_noprefetch                1
    vfs.zfs.l2arc_feed_min_ms               200
    vfs.zfs.l2arc_feed_secs                 1
    vfs.zfs.l2arc_headroom                  2
    vfs.zfs.l2arc_write_boost               8388608
    vfs.zfs.l2arc_write_max                 8388608
    vfs.zfs.zio.deadman_log_all             0
    vfs.zfs.zio.dva_throttle_enabled        1
    vfs.zfs.zio.requeue_io_start_cut_in_line 1
    vfs.zfs.zio.slow_io_ms                  30000
    vfs.zfs.zio.taskq_batch_tpq             0
    vfs.zfs.zio.taskq_batch_pct             80
    vfs.zfs.zio.exclude_metadata            0
    vfs.zfs.zio.use_uma                     1
    vfs.zfs.zil.maxblocksize                131072
    vfs.zfs.zil.slog_bulk                   786432
    vfs.zfs.zil.nocacheflush                0
    vfs.zfs.zil.replay_disable              0
    vfs.zfs.zil.clean_taskq_maxalloc        1048576
    vfs.zfs.zil.clean_taskq_minalloc        1024
    vfs.zfs.zil.clean_taskq_nthr_pct        100
    vfs.zfs.zevent.retain_expire_secs       900
    vfs.zfs.zevent.retain_max               2000
    vfs.zfs.zevent.len_max                  512
    vfs.zfs.vnops.read_chunk_size           1048576
    vfs.zfs.vdev.removal_suspend_progress   0
    vfs.zfs.vdev.removal_max_span           32768
    vfs.zfs.vdev.remove_max_segment         16777216
    vfs.zfs.vdev.removal_ignore_errors      0
    vfs.zfs.vdev.queue_depth_pct            1000
    vfs.zfs.vdev.nia_delay                  5
    vfs.zfs.vdev.nia_credit                 5
    vfs.zfs.vdev.rebuild_min_active         1
    vfs.zfs.vdev.rebuild_max_active         3
    vfs.zfs.vdev.trim_min_active            1
    vfs.zfs.vdev.trim_max_active            2
    vfs.zfs.vdev.sync_write_min_active      10
    vfs.zfs.vdev.sync_write_max_active      10
    vfs.zfs.vdev.sync_read_min_active       10
    vfs.zfs.vdev.sync_read_max_active       10
    vfs.zfs.vdev.scrub_min_active           1
    vfs.zfs.vdev.scrub_max_active           3
    vfs.zfs.vdev.removal_min_active         1
    vfs.zfs.vdev.removal_max_active         2
    vfs.zfs.vdev.initializing_min_active    1
    vfs.zfs.vdev.initializing_max_active    1
    vfs.zfs.vdev.async_write_min_active     2
    vfs.zfs.vdev.async_write_max_active     10
    vfs.zfs.vdev.async_read_min_active      1
    vfs.zfs.vdev.async_read_max_active      3
    vfs.zfs.vdev.async_write_active_min_dirty_percent 30
    vfs.zfs.vdev.async_write_active_max_dirty_percent 60
    vfs.zfs.vdev.max_active                 1000
    vfs.zfs.vdev.write_gap_limit            4096
    vfs.zfs.vdev.read_gap_limit             32768
    vfs.zfs.vdev.aggregate_trim             0
    vfs.zfs.vdev.aggregation_limit_non_rotating 131072
    vfs.zfs.vdev.aggregation_limit          1048576
    vfs.zfs.vdev.cache_bshift               16
    vfs.zfs.vdev.cache_size                 0
    vfs.zfs.vdev.cache_max                  16384
    vfs.zfs.vdev.max_auto_ashift            16
    vfs.zfs.vdev.min_auto_ashift            12
    vfs.zfs.vdev.validate_skip              0
    vfs.zfs.vdev.ms_count_limit             131072
    vfs.zfs.vdev.min_ms_count               16
    vfs.zfs.vdev.default_ms_shift           29
    vfs.zfs.vdev.default_ms_count           200
    vfs.zfs.vdev.bio_delete_disable         0
    vfs.zfs.vdev.bio_flush_disable          0
    vfs.zfs.vdev.def_queue_depth            32
    vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
    vfs.zfs.vdev.mirror.non_rotating_inc    0
    vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
    vfs.zfs.vdev.mirror.rotating_seek_inc   5
    vfs.zfs.vdev.mirror.rotating_inc        0
    vfs.zfs.vdev.file.physical_ashift       9
    vfs.zfs.vdev.file.logical_ashift        9
    vfs.zfs.txg.timeout                     5
    vfs.zfs.txg.history                     100
    vfs.zfs.trim.queue_limit                10
    vfs.zfs.trim.txg_batch                  32
    vfs.zfs.trim.metaslab_skip              0
    vfs.zfs.trim.extent_bytes_min           32768
    vfs.zfs.trim.extent_bytes_max           134217728
    vfs.zfs.spa.slop_shift                  5
    vfs.zfs.spa.asize_inflation             24
    vfs.zfs.spa.discard_memory_limit        16777216
    vfs.zfs.spa.load_print_vdev_tree        0
    vfs.zfs.spa.load_verify_data            1
    vfs.zfs.spa.load_verify_metadata        1
    vfs.zfs.spa.load_verify_shift           4
    vfs.zfs.send.override_estimate_recordsize 0
    vfs.zfs.send.no_prefetch_queue_ff       20
    vfs.zfs.send.queue_ff                   20
    vfs.zfs.send.no_prefetch_queue_length   1048576
    vfs.zfs.send.unmodified_spill_blocks    1
    vfs.zfs.send.queue_length               16777216
    vfs.zfs.send.corrupt_data               0
    vfs.zfs.recv.write_batch_size           1048576
    vfs.zfs.recv.queue_ff                   20
    vfs.zfs.recv.queue_length               16777216
    vfs.zfs.reconstruct.indirect_combinations_max 4096
    vfs.zfs.prefetch.array_rd_sz            1048576
    vfs.zfs.prefetch.max_idistance          67108864
    vfs.zfs.prefetch.max_distance           8388608
    vfs.zfs.prefetch.min_sec_reap           2
    vfs.zfs.prefetch.max_streams            8
    vfs.zfs.prefetch.disable                0
    vfs.zfs.multihost.history               0
    vfs.zfs.multihost.import_intervals      20
    vfs.zfs.multihost.fail_intervals        10
    vfs.zfs.multihost.interval              1000
    vfs.zfs.mg.fragmentation_threshold      95
    vfs.zfs.mg.noalloc_threshold            0
    vfs.zfs.metaslab.find_max_tries         100
    vfs.zfs.metaslab.try_hard_before_gang   0
    vfs.zfs.metaslab.mem_limit              25
    vfs.zfs.metaslab.max_size_cache_sec     3600
    vfs.zfs.metaslab.df_use_largest_segment 0
    vfs.zfs.metaslab.df_max_search          16777216
    vfs.zfs.metaslab.force_ganging          16777217
    vfs.zfs.metaslab.switch_threshold       2
    vfs.zfs.metaslab.segment_weight_enabled 1
    vfs.zfs.metaslab.bias_enabled           1
    vfs.zfs.metaslab.lba_weighting_enabled  1
    vfs.zfs.metaslab.fragmentation_factor_enabled 1
    vfs.zfs.metaslab.fragmentation_threshold 70
    vfs.zfs.metaslab.unload_delay_ms        600000
    vfs.zfs.metaslab.unload_delay           32
    vfs.zfs.metaslab.preload_enabled        1
    vfs.zfs.metaslab.debug_unload           0
    vfs.zfs.metaslab.debug_load             0
    vfs.zfs.metaslab.aliquot                524288
    vfs.zfs.metaslab.preload_limit          10
    vfs.zfs.metaslab.load_pct               50
    vfs.zfs.metaslab.df_free_pct            4
    vfs.zfs.metaslab.df_alloc_threshold     131072
    vfs.zfs.metaslab.sm_blksz_with_log      131072
    vfs.zfs.metaslab.sm_blksz_no_log        16384
    vfs.zfs.lua.max_memlimit                104857600
    vfs.zfs.lua.max_instrlimit              100000000
    vfs.zfs.livelist.min_percent_shared     75
    vfs.zfs.livelist.max_entries            500000
    vfs.zfs.livelist.condense.new_alloc     0
    vfs.zfs.livelist.condense.zthr_cancel   0
    vfs.zfs.livelist.condense.sync_cancel   0
    vfs.zfs.livelist.condense.sync_pause    0
    vfs.zfs.livelist.condense.zthr_pause    0
    vfs.zfs.l2arc.mfuonly                   0
    vfs.zfs.l2arc.rebuild_blocks_min_l2size 1073741824
    vfs.zfs.l2arc.rebuild_enabled           1
    vfs.zfs.l2arc.meta_percent              33
    vfs.zfs.l2arc.norw                      0
    vfs.zfs.l2arc.feed_again                1
    vfs.zfs.l2arc.noprefetch                1
    vfs.zfs.l2arc.feed_min_ms               200
    vfs.zfs.l2arc.feed_secs                 1
    vfs.zfs.l2arc.trim_ahead                0
    vfs.zfs.l2arc.headroom_boost            200
    vfs.zfs.l2arc.headroom                  2
    vfs.zfs.l2arc.write_boost               8388608
    vfs.zfs.l2arc.write_max                 8388608
    vfs.zfs.dedup.prefetch                  0
    vfs.zfs.deadman.ziotime_ms              300000
    vfs.zfs.deadman.synctime_ms             600000
    vfs.zfs.deadman.failmode                wait
    vfs.zfs.deadman.enabled                 1
    vfs.zfs.deadman.checktime_ms            60000
    vfs.zfs.dbuf_cache.lowater_pct          10
    vfs.zfs.dbuf_cache.hiwater_pct          10
    vfs.zfs.dbuf_cache.max_bytes            -1
    vfs.zfs.dbuf.metadata_cache_shift       6
    vfs.zfs.dbuf.cache_shift                5
    vfs.zfs.dbuf.metadata_cache_max_bytes   -1
    vfs.zfs.condense.indirect_commit_entry_delay_ms 0
    vfs.zfs.condense.max_obsolete_bytes     1073741824
    vfs.zfs.condense.min_mapping_bytes      131072
    vfs.zfs.condense.indirect_obsolete_pct  25
    vfs.zfs.condense.indirect_vdevs_enable  1
    vfs.zfs.arc.prune_task_threads          1
    vfs.zfs.arc.evict_batch_limit           10
    vfs.zfs.arc.eviction_pct                200
    vfs.zfs.arc.dnode_reduce_percent        10
    vfs.zfs.arc.dnode_limit_percent         10
    vfs.zfs.arc.dnode_limit                 0
    vfs.zfs.arc.sys_free                    0
    vfs.zfs.arc.lotsfree_percent            10
    vfs.zfs.arc.min_prescient_prefetch_ms   0
    vfs.zfs.arc.min_prefetch_ms             0
    vfs.zfs.arc.average_blocksize           8192
    vfs.zfs.arc.p_min_shift                 0
    vfs.zfs.arc.pc_percent                  0
    vfs.zfs.arc.shrink_shift                0
    vfs.zfs.arc.p_dampener_disable          1
    vfs.zfs.arc.grow_retry                  0
    vfs.zfs.arc.meta_strategy               1
    vfs.zfs.arc.meta_adjust_restarts        4096
    vfs.zfs.arc.meta_prune                  10000
    vfs.zfs.arc.meta_min                    0
    vfs.zfs.arc.meta_limit_percent          75
    vfs.zfs.arc.meta_limit                  0
    vfs.zfs.arc.max                         0
    vfs.zfs.arc.min                         0
    vfs.zfs.crypt_sessions                  0
    vfs.zfs.abd_scatter_min_size            4097
    vfs.zfs.abd_scatter_enabled             1

------------------------------------------------------------------------
 
What makes you think your system needs more SWAP space?

Memory Throttle Count is 0, so at no time (since the last reboot) was there an event where memory was scarce and ZFS had to reduce the ARC.

As said: if you actually have regular OOM events, install more RAM, not swap.
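
If you want to check that counter directly, it should be visible via sysctl (path as in OpenZFS on FreeBSD 13; verify the exact name on your system):
Code:
# sysctl kstat.zfs.misc.arcstats.memory_throttle_count
kstat.zfs.misc.arcstats.memory_throttle_count: 0
A value of 0 matches the zfs-stats summary above.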
 
Here it is:
Code:
$ swapinfo -h
Device              Size     Used    Avail Capacity
/dev/ada0p3         2.0G     1.3G     735M    64%
Okay. 2 GB is not enough on a 32 GB machine when it runs tasks with big and random memory usage patterns, like bhyve. (Mine happily eats up to 6, 8, even 12 GB, and still runs decently.)
Maybe adding a mirror for the root disk wouldn't be a bad idea anyway? While doing that, one could nicely redesign the layout.
 