Lovely, isn't it? =)
The cause is /etc/periodic/security/100.chksetuid
I think it is stressing ZFS too hard, pushing it over its memory limits until the system crashes. Though, I don't know for sure.
I have 256 GB of RAM, but since I have an SSD subsystem, I want to use most of it for my applications, not for ZFS (which I'm using on a partition, /x/, that is 2 TB in size).
So I have this /x/ partition - a 2 TB RAID 10 - containing non-system files like databases, PHP scripts, etc. It will also hold about 100 million small files. Disabling the periodic checks on that partition would most probably fix the problem, but I see that only as a workaround, since crashes could still happen. How can I set my system's ZFS limits to get a stable system while using as little RAM as possible?
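(For the workaround route: the periodic security scripts can be switched off in /etc/periodic.conf. If I recall the variable name right for the 9.x branch, it would be the following; check /etc/defaults/periodic.conf on your system to be sure.)

```shell
# /etc/periodic.conf -- skip the nightly setuid scan, which walks
# every file on every local filesystem (including the ~100M files on /x/).
# Variable name as I recall it for FreeBSD 9.x; verify against
# /etc/defaults/periodic.conf before relying on it.
daily_status_security_chksetuid_enable="NO"
```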
Also, this just happened (after crashing the server on purpose to confirm that this script really does cause the reboot).
Some info:
Code:
cat /boot/loader.conf
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot"
vm.kmem_size="2000M"
vm.kmem_size_max="2000M"
vfs.zfs.arc_max="1000M"
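(As a quick sanity check on my own config: the "M" suffixes in loader.conf are MiB, so the values above should match the raw byte counts that sysctl reports further down in the zfs-stats output.)

```python
# Check that the loader.conf values (MiB suffixes) line up with the
# byte counts sysctl reports for the same tunables.
MiB = 1024 * 1024

vm_kmem_size = 2000 * MiB   # vm.kmem_size="2000M"
arc_max = 1000 * MiB        # vfs.zfs.arc_max="1000M"

print(vm_kmem_size)  # 2097152000 -- matches sysctl vm.kmem_size
print(arc_max)       # 1048576000 -- matches sysctl vfs.zfs.arc_max
```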
Code:
zfs-stats -a
------------------------------------------------------------------------
ZFS Subsystem Report Fri Mar 16 14:58:22 2012
------------------------------------------------------------------------
System Information:
Kernel Version: 900044 (osreldate)
Hardware Platform: amd64
Processor Architecture: amd64
ZFS Storage pool Version: 28
ZFS Filesystem Version: 5
FreeBSD 9.0-RELEASE #0: Tue Jan 3 07:46:30 UTC 2012 root
2:58PM up 7 mins, 1 user, load averages: 20.66, 12.90, 6.12
------------------------------------------------------------------------
System Memory:
7.03% 17.47 GiB Active, 0.51% 1.27 GiB Inact
1.31% 3.24 GiB Wired, 0.00% 268.00 KiB Cache
91.15% 226.44 GiB Free, 0.00% 652.00 KiB Gap
Real Installed: 256.00 GiB
Real Available: 99.99% 255.96 GiB
Real Managed: 97.06% 248.43 GiB
Logical Total: 256.00 GiB
Logical Used: 11.05% 28.29 GiB
Logical Free: 88.95% 227.71 GiB
Kernel Memory: 1.63 GiB
Data: 98.80% 1.61 GiB
Text: 1.20% 20.01 MiB
Kernel Memory Map: 1.58 GiB
Size: 87.93% 1.39 GiB
Free: 12.07% 195.74 MiB
------------------------------------------------------------------------
ARC Summary: (HEALTHY)
Memory Throttle Count: 0
ARC Misc:
Deleted: 397.90k
Recycle Misses: 133.44k
Mutex Misses: 3.60k
Evict Skips: 3.60k
ARC Size: 96.87% 968.74 MiB
Target Size: (Adaptive) 96.88% 968.75 MiB
Min Size (Hard Limit): 12.50% 125.00 MiB
Max Size (High Water): 8:1 1000.00 MiB
ARC Size Breakdown:
Recently Used Cache Size: 93.74% 908.13 MiB
Frequently Used Cache Size: 6.26% 60.62 MiB
ARC Hash Breakdown:
Elements Max: 113.13k
Elements Current: 85.53% 96.76k
Collisions: 13.22k
Chain Max: 3
Chains: 1.14k
------------------------------------------------------------------------
ARC Efficiency: 2.11m
Cache Hit Ratio: 77.04% 1.63m
Cache Miss Ratio: 22.96% 484.98k
Actual Hit Ratio: 52.04% 1.10m
Data Demand Efficiency: 86.08% 609.07k
Data Prefetch Efficiency: 32.07% 43.82k
CACHE HITS BY CACHE LIST:
Anonymously Used: 28.84% 469.29k
Most Recently Used: 29.93% 486.97k
Most Frequently Used: 37.62% 612.09k
Most Recently Used Ghost: 0.40% 6.47k
Most Frequently Used Ghost: 3.21% 52.25k
CACHE HITS BY DATA TYPE:
Demand Data: 32.22% 524.29k
Prefetch Data: 0.86% 14.05k
Demand Metadata: 35.27% 573.89k
Prefetch Metadata: 31.64% 514.85k
CACHE MISSES BY DATA TYPE:
Demand Data: 17.48% 84.78k
Prefetch Data: 6.14% 29.77k
Demand Metadata: 18.56% 90.03k
Prefetch Metadata: 57.82% 280.39k
------------------------------------------------------------------------
L2ARC is disabled
------------------------------------------------------------------------
File-Level Prefetch: (HEALTHY)
DMU Efficiency: 5.36m
Hit Ratio: 58.31% 3.13m
Miss Ratio: 41.69% 2.23m
Colinear: 2.23m
Hit Ratio: 0.01% 151
Miss Ratio: 99.99% 2.23m
Stride: 3.00m
Hit Ratio: 99.57% 2.99m
Miss Ratio: 0.43% 12.77k
DMU Misc:
Reclaim: 2.23m
Successes: 0.21% 4.68k
Failures: 99.79% 2.23m
Streams: 99.41k
+Resets: 0.15% 152
-Resets: 99.85% 99.26k
Bogus: 0
------------------------------------------------------------------------
VDEV cache is disabled
------------------------------------------------------------------------
ZFS Tunables (sysctl):
kern.maxusers 384
vm.kmem_size 2097152000
vm.kmem_size_scale 1
vm.kmem_size_min 0
vm.kmem_size_max 2097152000
vfs.zfs.l2c_only_size 0
vfs.zfs.mfu_ghost_data_lsize 30994432
vfs.zfs.mfu_ghost_metadata_lsize 854389248
vfs.zfs.mfu_ghost_size 885383680
vfs.zfs.mfu_data_lsize 144384
vfs.zfs.mfu_metadata_lsize 20480
vfs.zfs.mfu_size 15068160
vfs.zfs.mru_ghost_data_lsize 52230144
vfs.zfs.mru_ghost_metadata_lsize 6546432
vfs.zfs.mru_ghost_size 58776576
vfs.zfs.mru_data_lsize 460544000
vfs.zfs.mru_metadata_lsize 32663552
vfs.zfs.mru_size 860461056
vfs.zfs.anon_data_lsize 0
vfs.zfs.anon_metadata_lsize 0
vfs.zfs.anon_size 23183360
vfs.zfs.l2arc_norw 1
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_noprefetch 1
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_headroom 2
vfs.zfs.l2arc_write_boost 8388608
vfs.zfs.l2arc_write_max 8388608
vfs.zfs.arc_meta_limit 262144000
vfs.zfs.arc_meta_used 527385776
vfs.zfs.arc_min 131072000
vfs.zfs.arc_max 1048576000
vfs.zfs.dedup.prefetch 1
vfs.zfs.mdcomp_disable 0
vfs.zfs.write_limit_override 0
vfs.zfs.write_limit_inflated 824514318336
vfs.zfs.write_limit_max 34354763264
vfs.zfs.write_limit_min 33554432
vfs.zfs.write_limit_shift 3
vfs.zfs.no_write_throttle 0
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.zfetch.block_cap 256
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.max_streams 8
vfs.zfs.prefetch_disable 0
vfs.zfs.mg_alloc_failures 48
vfs.zfs.check_hostid 1
vfs.zfs.recover 0
vfs.zfs.txg.synctime_ms 1000
vfs.zfs.txg.timeout 5
vfs.zfs.scrub_limit 10
vfs.zfs.vdev.cache.bshift 16
vfs.zfs.vdev.cache.size 0
vfs.zfs.vdev.cache.max 16384
vfs.zfs.vdev.write_gap_limit 4096
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.aggregation_limit 131072
vfs.zfs.vdev.ramp_rate 2
vfs.zfs.vdev.time_shift 6
vfs.zfs.vdev.min_pending 4
vfs.zfs.vdev.max_pending 10
vfs.zfs.vdev.bio_flush_disable 0
vfs.zfs.cache_flush_disable 0
vfs.zfs.zil_replay_disable 0
vfs.zfs.zio.use_uma 0
vfs.zfs.version.zpl 5
vfs.zfs.version.spa 28
vfs.zfs.version.acl 1
vfs.zfs.debug 0
vfs.zfs.super_owner 0
------------------------------------------------------------------------