Server Crashes [3:00AM]

Lovely, isn't it? =)

The cause is /etc/periodic/security/100.chksetuid

I think it stresses ZFS too much, which then goes over its memory limits and crashes. Though I don't know for sure.

I have 256 GB of RAM, but since I have an SSD subsystem, I want to use most of it for my apps, not for ZFS (which I'm using on a partition, /x/, that is 2 TB in size).

So I have this /x/ partition - 2 TB, RAID 10 - containing non-system files like databases, PHP scripts, etc. It will also hold about 100 million little files. Disabling the periodic checks on the partition would most probably fix the problem, but I see that only as a workaround (see the sketch below), since crashes would still be possible. How can I set my system's ZFS limits to get a stable system while using as little RAM as possible?
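(For reference, the workaround itself would just be a one-line knob in /etc/periodic.conf - though it disables the check entirely, not only on /x/. The knob name below is the one from 9.0's /etc/defaults/periodic.conf, so double-check it against your release.)

Code:
# /etc/periodic.conf - skip the nightly setuid scan that triggers the crash
daily_status_security_chksetuid_enable="NO"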

Also, this just happened (after I crashed the server again to confirm that the script does in fact cause the reboot):

[attached screenshot: A1QNU.png]


Some info:

Code:
cat /boot/loader.conf
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot"

vm.kmem_size="2000M"
vm.kmem_size_max="2000M"
vfs.zfs.arc_max="1000M"

Code:
zfs-stats -a

------------------------------------------------------------------------
ZFS Subsystem Report				Fri Mar 16 14:58:22 2012
------------------------------------------------------------------------

System Information:

	Kernel Version:				900044 (osreldate)
	Hardware Platform:			amd64
	Processor Architecture:			amd64

	ZFS Storage pool Version:		28
	ZFS Filesystem Version:			5

FreeBSD 9.0-RELEASE #0: Tue Jan 3 07:46:30 UTC 2012 root
 2:58PM  up 7 mins, 1 user, load averages: 20.66, 12.90, 6.12

------------------------------------------------------------------------

System Memory:

	7.03%	17.47	GiB Active,	0.51%	1.27	GiB Inact
	1.31%	3.24	GiB Wired,	0.00%	268.00	KiB Cache
	91.15%	226.44	GiB Free,	0.00%	652.00	KiB Gap

	Real Installed:				256.00	GiB
	Real Available:			99.99%	255.96	GiB
	Real Managed:			97.06%	248.43	GiB

	Logical Total:				256.00	GiB
	Logical Used:			11.05%	28.29	GiB
	Logical Free:			88.95%	227.71	GiB

Kernel Memory:					1.63	GiB
	Data:				98.80%	1.61	GiB
	Text:				1.20%	20.01	MiB

Kernel Memory Map:				1.58	GiB
	Size:				87.93%	1.39	GiB
	Free:				12.07%	195.74	MiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
	Memory Throttle Count:			0

ARC Misc:
	Deleted:				397.90k
	Recycle Misses:				133.44k
	Mutex Misses:				3.60k
	Evict Skips:				3.60k

ARC Size:				96.87%	968.74	MiB
	Target Size: (Adaptive)		96.88%	968.75	MiB
	Min Size (Hard Limit):		12.50%	125.00	MiB
	Max Size (High Water):		8:1	1000.00	MiB

ARC Size Breakdown:
	Recently Used Cache Size:	93.74%	908.13	MiB
	Frequently Used Cache Size:	6.26%	60.62	MiB

ARC Hash Breakdown:
	Elements Max:				113.13k
	Elements Current:		85.53%	96.76k
	Collisions:				13.22k
	Chain Max:				3
	Chains:					1.14k

------------------------------------------------------------------------

ARC Efficiency:					2.11m
	Cache Hit Ratio:		77.04%	1.63m
	Cache Miss Ratio:		22.96%	484.98k
	Actual Hit Ratio:		52.04%	1.10m

	Data Demand Efficiency:		86.08%	609.07k
	Data Prefetch Efficiency:	32.07%	43.82k

	CACHE HITS BY CACHE LIST:
	  Anonymously Used:		28.84%	469.29k
	  Most Recently Used:		29.93%	486.97k
	  Most Frequently Used:		37.62%	612.09k
	  Most Recently Used Ghost:	0.40%	6.47k
	  Most Frequently Used Ghost:	3.21%	52.25k

	CACHE HITS BY DATA TYPE:
	  Demand Data:			32.22%	524.29k
	  Prefetch Data:		0.86%	14.05k
	  Demand Metadata:		35.27%	573.89k
	  Prefetch Metadata:		31.64%	514.85k

	CACHE MISSES BY DATA TYPE:
	  Demand Data:			17.48%	84.78k
	  Prefetch Data:		6.14%	29.77k
	  Demand Metadata:		18.56%	90.03k
	  Prefetch Metadata:		57.82%	280.39k

------------------------------------------------------------------------

L2ARC is disabled

------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)

DMU Efficiency:					5.36m
	Hit Ratio:			58.31%	3.13m
	Miss Ratio:			41.69%	2.23m

	Colinear:				2.23m
	  Hit Ratio:			0.01%	151
	  Miss Ratio:			99.99%	2.23m

	Stride:					3.00m
	  Hit Ratio:			99.57%	2.99m
	  Miss Ratio:			0.43%	12.77k

DMU Misc:
	Reclaim:				2.23m
	  Successes:			0.21%	4.68k
	  Failures:			99.79%	2.23m

	Streams:				99.41k
	  +Resets:			0.15%	152
	  -Resets:			99.85%	99.26k
	  Bogus:				0

------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------

ZFS Tunables (sysctl):
	kern.maxusers                           384
	vm.kmem_size                            2097152000
	vm.kmem_size_scale                      1
	vm.kmem_size_min                        0
	vm.kmem_size_max                        2097152000
	vfs.zfs.l2c_only_size                   0
	vfs.zfs.mfu_ghost_data_lsize            30994432
	vfs.zfs.mfu_ghost_metadata_lsize        854389248
	vfs.zfs.mfu_ghost_size                  885383680
	vfs.zfs.mfu_data_lsize                  144384
	vfs.zfs.mfu_metadata_lsize              20480
	vfs.zfs.mfu_size                        15068160
	vfs.zfs.mru_ghost_data_lsize            52230144
	vfs.zfs.mru_ghost_metadata_lsize        6546432
	vfs.zfs.mru_ghost_size                  58776576
	vfs.zfs.mru_data_lsize                  460544000
	vfs.zfs.mru_metadata_lsize              32663552
	vfs.zfs.mru_size                        860461056
	vfs.zfs.anon_data_lsize                 0
	vfs.zfs.anon_metadata_lsize             0
	vfs.zfs.anon_size                       23183360
	vfs.zfs.l2arc_norw                      1
	vfs.zfs.l2arc_feed_again                1
	vfs.zfs.l2arc_noprefetch                1
	vfs.zfs.l2arc_feed_min_ms               200
	vfs.zfs.l2arc_feed_secs                 1
	vfs.zfs.l2arc_headroom                  2
	vfs.zfs.l2arc_write_boost               8388608
	vfs.zfs.l2arc_write_max                 8388608
	vfs.zfs.arc_meta_limit                  262144000
	vfs.zfs.arc_meta_used                   527385776
	vfs.zfs.arc_min                         131072000
	vfs.zfs.arc_max                         1048576000
	vfs.zfs.dedup.prefetch                  1
	vfs.zfs.mdcomp_disable                  0
	vfs.zfs.write_limit_override            0
	vfs.zfs.write_limit_inflated            824514318336
	vfs.zfs.write_limit_max                 34354763264
	vfs.zfs.write_limit_min                 33554432
	vfs.zfs.write_limit_shift               3
	vfs.zfs.no_write_throttle               0
	vfs.zfs.zfetch.array_rd_sz              1048576
	vfs.zfs.zfetch.block_cap                256
	vfs.zfs.zfetch.min_sec_reap             2
	vfs.zfs.zfetch.max_streams              8
	vfs.zfs.prefetch_disable                0
	vfs.zfs.mg_alloc_failures               48
	vfs.zfs.check_hostid                    1
	vfs.zfs.recover                         0
	vfs.zfs.txg.synctime_ms                 1000
	vfs.zfs.txg.timeout                     5
	vfs.zfs.scrub_limit                     10
	vfs.zfs.vdev.cache.bshift               16
	vfs.zfs.vdev.cache.size                 0
	vfs.zfs.vdev.cache.max                  16384
	vfs.zfs.vdev.write_gap_limit            4096
	vfs.zfs.vdev.read_gap_limit             32768
	vfs.zfs.vdev.aggregation_limit          131072
	vfs.zfs.vdev.ramp_rate                  2
	vfs.zfs.vdev.time_shift                 6
	vfs.zfs.vdev.min_pending                4
	vfs.zfs.vdev.max_pending                10
	vfs.zfs.vdev.bio_flush_disable          0
	vfs.zfs.cache_flush_disable             0
	vfs.zfs.zil_replay_disable              0
	vfs.zfs.zio.use_uma                     0
	vfs.zfs.version.zpl                     5
	vfs.zfs.version.spa                     28
	vfs.zfs.version.acl                     1
	vfs.zfs.debug                           0
	vfs.zfs.super_owner                     0

------------------------------------------------------------------------
 
Why, oh why, oh why, are people messing with kmem_size on systems with tonnes of RAM? You're just shooting yourself in the foot! You have 256 GB of RAM, meaning the kernel has a minimum of 256 GB of kmem. Why in the world would you limit that to 2 GB? Of course you're going to panic the system, you are artificially limiting the kernel to under 1% of available memory space!

Remove all traces of kmem from /boot/loader.conf. And set your vfs.zfs.arc_max to something reasonable, like 128 GB. Limiting the cache to 1 GB out of 256 GB is ludicrous.
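That would boil down to a loader.conf along these lines (the 128G figure is just the example above, not a tuned value - adjust to taste):

Code:
# /boot/loader.conf - kmem lines removed, only the ZFS bits kept
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot"
# vm.kmem_size / vm.kmem_size_max deliberately left out (use the defaults)
vfs.zfs.arc_max="128G"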
 
As far as I know, the kmem tunables refer to virtual memory, because the kernel uses a virtual address space except for some very low-level operations where physical memory addresses have to be used. Therefore the size of the kernel address space can easily be set to, say, twice the amount of physical memory, and that's actually better than setting it too low; values that are too low cause problems with memory fragmentation (and eventually a crash when all the memory is left in unusably small pieces).
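As a rough sketch of that rule of thumb on this 256 GB box (purely illustrative figures, not something tested here):

Code:
# /boot/loader.conf - kernel virtual address space at ~2x physical RAM
vm.kmem_size="512G"
vm.kmem_size_max="512G"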
 
Phoenix, I need that RAM for other things (InnoDB & Sphinx caching). I don't want to use the ZFS ARC cache at all.

I need to find a lower value for that; 128 GB is too much. Also, I don't want kmem to eat up all the rest of the RAM - that will lead to crashes if I need more RAM than I have.
 
You seem to not understand what kmem is. It's not "memory used for the kernel". It's "virtual address space used by the kernel to keep track of various things, like allowing applications to use memory". It's extremely important to have a large kmem_size. It's not physical RAM, it's virtual address space.

By default, on a 64-bit system with even 1 GB of RAM, kmem_size is 64 GB. There's no real relationship between actual physical RAM in the system and the amount of kmem. DO NOT LIMIT THIS ARTIFICIALLY UNLESS YOU KNOW WHAT YOU ARE DOING!

Maybe 128 GB of ARC is too much for your system, but limiting it to 1 GB is ridiculous! Set it to 16 GB. You are cutting the hamstrings of ZFS by limiting the ARC. If you don't want a lot of data to be cached in the ARC, then set primarycache=metadata on the pool.
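For example (the dataset name is just a placeholder - substitute whatever zfs list shows for /x):

Code:
# cache only metadata, not file data, in the ARC for the dataset holding /x
zfs set primarycache=metadata tank/x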

If you aren't going to use the ARC, though, then why even bother with ZFS? ZFS performance depends on the ARC for so many things (ordering writes, caching reads, minimising fragmentation, etc) that to limit it so severely is just ... there are no words.
 