… which is it? …
The manual page appears to be more recent.
{link removed}
… which is it? …
… OpenZFS now uses a connecting "dot" instead of a connecting underscore: …
. for consistency
_ for legacy.
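For example (purely illustrative, and assuming an OpenZFS 2.x system), the ARC size limit answers to both spellings, the dot-connected name being the current one and the underscore-connected name the legacy alias:

% sysctl -N vfs.zfs.arc.max
vfs.zfs.arc.max
% sysctl -N vfs.zfs.arc_max
vfs.zfs.arc_max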
No problem. Documentation could indeed do with some updates, thanks for reporting that.
Sorry, I could have been clearer.
… We have lost vfs.zfs.arc_free_target as a tunable. It seems to have gone underground: …
% zfs version
zfs-2.1.99-FreeBSD_g17b2ae0b2
zfs-kmod-2.1.99-FreeBSD_g17b2ae0b2
% sysctl vfs.zfs.arc.sys_free
vfs.zfs.arc.sys_free: 0
% sysctl vfs.zfs.arc_free_target
vfs.zfs.arc_free_target: 86267
% sudo sysctl vfs.zfs.arc.sys_free=100000
grahamperrin's password:
vfs.zfs.arc.sys_free: 0 -> 100000
% sudo sysctl vfs.zfs.arc.sys_free=0
vfs.zfs.arc.sys_free: 100000 -> 0
% sudo sysctl vfs.zfs.arc_free_target=256000
vfs.zfs.arc_free_target: 86267 -> 256000
% sudo sysctl vfs.zfs.arc_free_target=86267
vfs.zfs.arc_free_target: 256000 -> 86267
% uname -aKU
FreeBSD mowa219-gjp4-8570p-freebsd 14.0-CURRENT FreeBSD 14.0-CURRENT #5 main-n253627-25375b1415f-dirty: Sat Mar 5 14:21:40 GMT 2022 root@mowa219-gjp4-8570p-freebsd:/usr/obj/usr/src/amd64.amd64/sys/GENERIC-NODEBUG amd64 1400053 1400053
%
<{link removed}> (2020-04-15) began:
/*
 * We don't have a tunable for arc_free_target due to the dependency on
 * pagedaemon initialisation.
 */
Allan or anyone: please, is that comment redundant?
At <https://forums.freebsd.org/posts/558971>, if I'm not mistaken, there's tuning.
tunable is a noun: "… basically a special type of sysctl that gets its initial value from the kernel environment (set by loader)".
… cannot be set by the loader (that is, it cannot be set in /boot/loader.conf).
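To make the distinction concrete (a sketch, not a recommendation; the 4 GB and 256000 figures are simply the ones that appear elsewhere in this thread): a real tunable can be seeded from /boot/loader.conf, while vfs.zfs.arc_free_target, which has no tunable behind it, can only be set once the system is up, for example from /etc/sysctl.conf:

# /boot/loader.conf: read by loader(8) before the kernel starts
vfs.zfs.arc.max="4294967296"

# /etc/sysctl.conf: applied by rc(8) at boot, after the pagedaemon is initialised
vfs.zfs.arc_free_target=256000

Putting arc_free_target in loader.conf would achieve nothing, per the source comment quoted above.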
Is there any way to confirm whether they are kernel tunables or sysctl variables?
Yes, it is a question of flags; you could install sysutils/nsysctl (>= 1.1) [1]:
% nsysctl -aNG | grep elantech
You can read the comments in sys/sysctl.h for a description of the flags (if you prefer a GUI, deskutils/sysctlview [2] has a window for the flags and Help->Flags for a description).
[1] nsysctl tutorial
[2] sysctlview screenshots
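If you would rather not install anything, base sysctl(8) can answer the same question with its -T flag, which restricts output to loader-settable (TUN) variables; a name that is listed normally but absent under -T is runtime-only. A sketch, reusing the OID from above:

% sysctl -aN | grep arc_free_target
vfs.zfs.arc_free_target
% sysctl -aNT | grep arc_free_target
%

The empty second result is the point: no TUN flag, so no loader tunable.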
vfs.zfs.arc_free_target near the head of this list:

% nsysctl -NG vfs.zfs | grep -v \ TUN | sort
vfs.zfs.anon_data_esize: RD MPSAFE
vfs.zfs.anon_metadata_esize: RD MPSAFE
vfs.zfs.anon_size: RD MPSAFE
vfs.zfs.arc_free_target: RD WR RW MPSAFE
vfs.zfs.crypt_sessions: RD MPSAFE
vfs.zfs.l2arc_feed_again: RD WR RW MPSAFE
vfs.zfs.l2arc_feed_min_ms: RD WR RW MPSAFE
vfs.zfs.l2arc_feed_secs: RD WR RW MPSAFE
vfs.zfs.l2arc_headroom: RD WR RW MPSAFE
vfs.zfs.l2arc_noprefetch: RD WR RW MPSAFE
vfs.zfs.l2arc_norw: RD WR RW MPSAFE
vfs.zfs.l2arc_write_boost: RD WR RW MPSAFE
vfs.zfs.l2arc_write_max: RD WR RW MPSAFE
vfs.zfs.l2c_only_size: RD MPSAFE
vfs.zfs.mfu_data_esize: RD MPSAFE
vfs.zfs.mfu_ghost_data_esize: RD MPSAFE
vfs.zfs.mfu_ghost_metadata_esize: RD MPSAFE
vfs.zfs.mfu_ghost_size: RD MPSAFE
vfs.zfs.mfu_metadata_esize: RD MPSAFE
vfs.zfs.mfu_size: RD MPSAFE
vfs.zfs.mru_data_esize: RD MPSAFE
vfs.zfs.mru_ghost_data_esize: RD MPSAFE
vfs.zfs.mru_ghost_metadata_esize: RD MPSAFE
vfs.zfs.mru_ghost_size: RD MPSAFE
vfs.zfs.mru_metadata_esize: RD MPSAFE
vfs.zfs.mru_size: RD MPSAFE
vfs.zfs.super_owner: RD WR RW MPSAFE
vfs.zfs.vdev.cache: RD WR RW
vfs.zfs.version.acl: RD MPSAFE
vfs.zfs.version.ioctl: RD MPSAFE
vfs.zfs.version.module: RD MPSAFE
vfs.zfs.version.spa: RD MPSAFE
vfs.zfs.version.zpl: RD MPSAFE
%
… The scrub time came down to 5 hours. It was, to the best of my recollection, up around 12 hours.
So fragmentation probably matters…
… A scrub is split into two parts: metadata scanning and block scrubbing. The metadata scanning sorts blocks into large sequential ranges which can then be read much more efficiently from disk when issuing the scrub I/O. …
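You can watch the two phases from the command line while a scrub runs: under OpenZFS 2.x the scan: line of zpool status reports both a "scanned" rate (the metadata scan) and an "issued" rate (the sorted block I/O actually sent to the vdevs). A sketch, using the pool name mentioned later in the thread:

% zpool scrub tank
% zpool status tank | grep -A 2 'scan:'

Once scanning runs well ahead of issuing, the remaining time is dominated by how quickly the sorted I/O can be issued.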
No, I had similar problems on the new pool. This is why I dug into other configuration options and came up with restricting the default ARC size.
Interesting that it's been reported that performance recovers after recreating the pool; a likely explanation for that is better fragmentation …
Nor would I. However, the original tank was in continuous service for 8 years. It had 6 spindles in RAID-Z1 configuration. This is sub-optimal, and 7 spindles (which is what I went to) is technically better. But the general advice is that when you turn on compression (and I did), the spindle count advantage is diminished.
I should not expect fragmentation of files, alone, to have so extreme an effect on a scrub (of pool metadata and blocks).
# Max. 4 GB ARC size:
vfs.zfs.arc_max="4294967296"
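If I'm not mistaken, the same limit can also be expressed with the dot-connected spelling on OpenZFS 2.x (the underscore form above keeps working as a legacy alias), and the value in force can be checked at run time:

# /boot/loader.conf, dot-connected spelling
vfs.zfs.arc.max="4294967296"

% sysctl vfs.zfs.arc.max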