[Solved] Memory leak on FreeBSD 9.3 and FreeBSD 10?

OK, so compression should not be what's causing it.

Today the problem is getting worse: the system has started shrinking my ARC cache (previously 4 GB max).

Code:
Mem: 25M Active, 3684K Inact, 7474M Wired, 8668K Cache, 370M Free
ARC: 3775M Total, 2606M MFU, 102M MRU, 19M Anon, 1172M Header, 22M Other
Swap: 8192M Total, 18M Used, 8173M Free
 
NFS stopped working again today; that has happened three times within ten days. The ARC size dropped to 3.5 GB instead of the 4 GB maximum.

Code:
Mem: 12M Active, 900K Inact, 7504M Wired, 10M Cache, 353M Free
ARC: 3542M Total, 2124M MFU, 277M MRU, 12M Anon, 1270M Header, 18M Other
Swap: 8192M Total, 19M Used, 8173M Free
 
But it keeps shrinking and will eventually hit the minimum amount of ARC we have allocated. Where does the RAM go?

A similar issue never happened on FreeBSD 9.1, before I switched to FreeBSD 9.2 one to two weeks ago.
 
Now we have hit the same problem. The ARC size has been reduced to 32% of its 16 GB maximum, and the amount of wired memory is rising.

Code:
last pid: 93908;  load averages:  1.18,  1.03,  0.93   up 15+14:56:21  14:02:01
4039 processes:5 running, 4034 sleeping
CPU:  1.9% user,  0.0% nice,  5.9% system,  0.4% interrupt, 91.8% idle
Mem: 4437M Active, 3141M Inact, 21G Wired, 3310M Buf, 3023M Free
ARC: 4928M Total, 2025M MFU, 209M MRU, 1747K Anon, 1429M Header, 1368M Other
Swap: 8192M Total, 8192M Free
 
AFAIK, the ARC is included in the wired memory; if the latter is much bigger than the former, then something else is wiring memory.
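To put a number on that gap: on FreeBSD the live values can be read via sysctl (`kstat.zfs.misc.arcstats.size` for ARC bytes, `vm.stats.vm.v_wire_count` for wired pages). As a sketch, the arithmetic below just uses the figures from the top(1) output quoted above:

```shell
# Rough non-ARC wired estimate from the top(1) snapshot above.
# On a live box the values could be read with, e.g.:
#   sysctl -n kstat.zfs.misc.arcstats.size   (ARC size in bytes)
#   sysctl -n vm.stats.vm.v_wire_count       (wired pages; multiply by hw.pagesize)
wired_mb=$((21 * 1024))   # "21G Wired"
arc_mb=4928               # "ARC: 4928M Total"
echo "non-ARC wired: $((wired_mb - arc_mb)) MB"
# → non-ARC wired: 16576 MB
```

So roughly 16 GB of wired memory here is not accounted for by the ARC, which is what needs explaining.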
 
ARC is included in Wired bin. On my system, which is running fine (but I don't use NFS), I see:

Code:
last pid: 24077;  load averages:  0.01,  0.03,  0.03    up 1+19:56:16  15:11:16
51 processes:  1 running, 50 sleeping
CPU:  0.0% user,  0.0% nice,  0.2% system,  0.0% interrupt, 99.8% idle
Mem: 281M Active, 280M Inact, 3186M Wired, 19M Cache, 417M Buf, 154M Free
ARC: 2553M Total, 437M MFU, 1908M MRU, 402K Anon, 41M Header, 166M Other
Swap: 16G Total, 21M Used, 16G Free

You can further see the ARC usage with vmstat -m | sort -rnb +1 -2:
Code:
      Type InUse MemUse HighUse Requests  Size(s)
      solaris 422133 2559377K       - 124842355  16,32,64,128,256,512,1024,2048,4096
      devbuf 17860 34679K       -    21498  16,32,64,128,256,512,1024,2048,4096
      sysctloid  4766   235K       -     4875  16,32,64,128
      acpica  3503   363K       -   308620  16,32,64,128,256,512,1024,2048
      temp  1857   939K       -   530210  16,32,64,128,256,512,1024,2048,4096
      bus  1274   106K       -     6514  16,32,64,128,256,512,1024
      entropy  1024    64K       -     1024  64
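If you want a single total instead of eyeballing the columns, the MemUse column can be summed with awk. This is just a sketch, shown here against a few of the sample rows above (the trailing K is stripped before adding; on a live system you would pipe `vmstat -m` through the same awk program, skipping the header with `NR>1`):

```shell
# Sum the MemUse column (3rd field, in KB) of vmstat -m output.
vmstat_sample='solaris 422133 2559377K - 124842355 16
devbuf 17860 34679K - 21498 16
sysctloid 4766 235K - 4875 16'
printf '%s\n' "$vmstat_sample" |
    awk '{ gsub(/K$/, "", $3); total += $3 }
         END { printf "%d KB total\n", total }'
# → 2594291 KB total
```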

@User23: Where is your 21 GB of Wired memory going?

@belon_cfy: I'm still not convinced that you have a memory issue. Maybe you have a bad network card? (That's what happened to me when I thought I had ZFS memory issues.) When "NFS stops working" what else happens? Is the console active? Can you still ssh in? You're leaving out too many details.
 
I've seen a lot of boxes as well with insane amounts of Wired memory. This is becoming a real nuisance. There's no way of finding out what is using it, and limiting it seems impossible.
 
We rebooted the server last night because we can't afford to wait until it stops working. General memory usage and ZFS ARC/L2ARC usage will be logged from now on.

By the way, we can't be the only ones with this problem, right? :\
 
aupanner said:
ARC is included in Wired bin. On my system, which is running fine (but I don't use NFS), I see:

Code:
last pid: 24077;  load averages:  0.01,  0.03,  0.03    up 1+19:56:16  15:11:16
51 processes:  1 running, 50 sleeping
CPU:  0.0% user,  0.0% nice,  0.2% system,  0.0% interrupt, 99.8% idle
Mem: 281M Active, 280M Inact, 3186M Wired, 19M Cache, 417M Buf, 154M Free
ARC: 2553M Total, 437M MFU, 1908M MRU, 402K Anon, 41M Header, 166M Other
Swap: 16G Total, 21M Used, 16G Free

You can further see the ARC usage with vmstat -m | sort -rnb +1 -2:
Code:
      Type InUse MemUse HighUse Requests  Size(s)
      solaris 422133 2559377K       - 124842355  16,32,64,128,256,512,1024,2048,4096
      devbuf 17860 34679K       -    21498  16,32,64,128,256,512,1024,2048,4096
      sysctloid  4766   235K       -     4875  16,32,64,128
      acpica  3503   363K       -   308620  16,32,64,128,256,512,1024,2048
      temp  1857   939K       -   530210  16,32,64,128,256,512,1024,2048,4096
      bus  1274   106K       -     6514  16,32,64,128,256,512,1024
      entropy  1024    64K       -     1024  64

@User23: Where is your 21 GB of Wired memory going?

@belon_cfy: I'm still not convinced that you have a memory issue. Maybe you have a bad network card? (That's what happened to me when I thought I had ZFS memory issues.) When "NFS stops working" what else happens? Is the console active? Can you still ssh in? You're leaving out too many details.

I don't think it's a network card problem, because it ran perfectly on FreeBSD 9.1 64-bit.

The server is using an Intel(R) PRO/1000 network card.
 
User23 said:
Now we hit the same problem. The ARC size is reduced to 32% (from 16 GB) and the wired amount of memory is rising.

Code:
last pid: 93908;  load averages:  1.18,  1.03,  0.93   up 15+14:56:21  14:02:01
4039 processes:5 running, 4034 sleeping
CPU:  1.9% user,  0.0% nice,  5.9% system,  0.4% interrupt, 91.8% idle
Mem: 4437M Active, 3141M Inact, 21G Wired, 3310M Buf, 3023M Free
ARC: 4928M Total, 2025M MFU, 209M MRU, 1747K Anon, 1429M Header, 1368M Other
Swap: 8192M Total, 8192M Free

Your problem seems a little different, because your system still has more than 3 GB of free memory. Mine has less than 400 MB free.
 
belon_cfy said:
By the way, I have the following parameters in /boot/loader.conf on both servers.
Code:
vfs.zfs.arc_max="4G"
vfs.zfs.write_limit_override="2G"

Have you tried leaving ZFS alone, i.e. taking those lines out?
 
Re: Memory leak on FreeBSD 9.2-RC3?

Maybe the problem is not a memory leak; it could be that FreeBSD 9.2 needs more memory reserved for the system. Previously my ARC was set to 4 GB on an 8 GB server; on FreeBSD 9.2 I have had to reduce it to 3 GB.
 
Re: Memory leak on FreeBSD 9.2-RC3?

After 30 days of uptime we are 284 MB into swap.

Code:
FreeBSD 9.2-RELEASE #2 r255966: Wed Oct 23 15:15:02 CEST 2013
real memory  = 34368126976 (32776 MB)
avail memory = 33101123584 (31567 MB)

Code:
last pid:   835;  load averages:  1.03,  0.94,  0.89                              up 30+14:24:38  14:30:23
3674 processes:4 running, 3670 sleeping
CPU:     % user,     % nice,     % system,     % interrupt,     % idle
Mem: 3973M Active, 1867M Inact, 22G Wired, 73M Cache, 1678M Buf, 3314M Free
ARC: 3842M Total, 904M MFU, 204M MRU, 2988K Anon, 2389M Header, 621M Other
Swap: 8192M Total, 284M Used, 7908M Free, 3% Inuse

Code:
cat /boot/loader.conf 
geom_mirror_load="YES"

hint.p4tcc.0.disabled=1
hint.acpi_throttle.0.disabled=1

vfs.zfs.arc_max=16084379648

Code:
ARC Summary: (HEALTHY)
	Memory Throttle Count:			0

ARC Misc:
	Deleted:				386.89m
	Recycle Misses:				126.18m
	Mutex Misses:				616.97k
	Evict Skips:				11.74b

ARC Size:				26.67%	4.00	GiB
	Target Size: (Adaptive)		90.27%	13.52	GiB
	Min Size (Hard Limit):		12.50%	1.87	GiB
	Max Size (High Water):		8:1	14.98	GiB

ARC Size Breakdown:
	Recently Used Cache Size:	6.25%	865.42	MiB
	Frequently Used Cache Size:	93.75%	12.68	GiB

ARC Hash Breakdown:
	Elements Max:				10.22m
	Elements Current:		100.00%	10.22m
	Collisions:				250.00m
	Chain Max:				43
	Chains:					524.29k


------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
	Passed Headroom:			36.15m
	Tried Lock Failures:			211.10m
	IO In Progress:				23.77k
	Low Memory Aborts:			10
	Free on Write:				34.55k
	Writes While Full:			1.03k
	R/W Clashes:				22.95k
	Bad Checksums:				0
	IO Errors:				0
	SPA Mismatch:				0

L2 ARC Size: (Adaptive)				54.98	GiB
	Header Size:			3.87%	2.13	GiB

L2 ARC Breakdown:				1.15b
	Hit Ratio:			36.40%	417.88m
	Miss Ratio:			63.60%	730.25m
	Feeds:					2.65m

L2 ARC Buffer:
	Bytes Scanned:				1.61	PiB
	Buffer Iterations:			2.65m
	List Iterations:			169.30m
	NULL List Iterations:			65.90m

L2 ARC Writes:
	Writes Sent:			100.00%	1.53m

------------------------------------------------------------------------

[attached image: j12id4.jpg]


The legend in the graph is wrong: M = GB, k = MB, and unsuffixed values = KB.

--

[attached image: n1osid.jpg]


Code:
cache                    -      -      -      -      -      -
  label/zfs-l2arc-0   326G   140G    151      6  2.05M   129K

There are two ZFS filesystems on the pool; both use ARC and L2ARC for metadata only.

The L2ARC size (adaptive) is reported as 54.98 GiB, but 326 GB are actually in use.

Highest ARC size is 17.9 GB.
 
Re: Memory leak on FreeBSD 9.2-RC3?

Code:
vmstat -z
ITEM                   SIZE  LIMIT     USED     FREE      REQ FAIL SLEEP

UMA Kegs:               208,      0,     197,       7,     197,   0,   0
UMA Zones:             1920,      0,     197,       1,     197,   0,   0
UMA Slabs:              568,      0,  441357,  416920,2315056948,   0,   0
UMA RCntSlabs:          568,      0,   11356,     999,   21841,   0,   0
UMA Hash:               256,      0,      78,      12,      82,   0,   0
16 Bucket:              152,      0,      30,     170,     236,   0,   0
32 Bucket:              280,      0,      82,     142,     438,   9,   0
64 Bucket:              536,      0,      76,     134,     559,  63,   0
128 Bucket:            1048,      0,  200327,       1, 1325857,1533288,   0
VM OBJECT:              232,      0,  210460,  292036,2003250377,   0,   0
MAP:                    232,      0,       8,      24,       8,   0,   0
KMAP ENTRY:             120, 1063641,   12136,  107803,5700025192,   0,   0
MAP ENTRY:              120,      0,   94072,   22891,3566093800,   0,   0
fakepg:                 120,      0,       0,       0,       0,   0,   0
mt_zone:               4112,      0,     241,      30,     241,   0,   0
16:                      16,      0,   38800,  236720,3719369031,   0,   0
32:                      32,      0,20062421,   36882,845139691,   0,   0
64:                      64,      0,  197105,  613327,17608460107,   0,   0
128:                    128,      0, 3929928,  143615,5660712109,   0,   0
256:                    256,      0,   72280,   95210,4872208876,   0,   0
512:                    512,      0,  418623, 7236696,9770286950,   0,   0
1024:                  1024,      0,    3937,   17603,351724496,   0,   0
2048:                  2048,      0,   14128,    3990,5472111833,   0,   0
4096:                  4096,      0,   43407,  243255,775778352,   0,   0
Files:                   80,      0,   31908,    9087,1834062564,   0,   0
TURNSTILE:              136,      0,    6955,     325,    6955,   0,   0
rl_entry:                40,      0,    6191,     613,    6191,   0,   0
umtx pi:                 96,      0,       0,       0,       0,   0,   0
MAC labels:              40,      0,       0,       0,       0,   0,   0
PROC:                  1192,      0,    3500,    2995,53959992,   0,   0
THREAD:                1160,      0,    6774,     180,    6942,   0,   0
SLEEPQUEUE:              80,      0,    6955,     382,    6955,   0,   0
VMSPACE:                392,      0,    3478,    3152,53960041,   0,   0
cpuset:                  72,      0,     304,     396,     481,   0,   0
audit_record:           960,      0,       0,       0,       0,   0,   0
mbuf_packet:            256,      0,   10651,    3429,5109589283,   0,   0
mbuf:                   256,      0,    1193,    5577,17635390209,   0,   0
mbuf_cluster:          2048,  25600,   14139,    3993,1538345420,   0,   0
mbuf_jumbo_page:       4096,  12800,      56,    2234,468441063,   0,   0
mbuf_jumbo_9k:         9216,   6400,       0,       0,       0,   0,   0
mbuf_jumbo_16k:       16384,   3200,       0,       0,       0,   0,   0
mbuf_ext_refcnt:          4,      0,       0,       0,       0,   0,   0
g_bio:                  248,      0,      15,   11040,8993022264,   0,   0
ttyinq:                 160,      0,     195,     741,    7140,   0,   0
ttyoutq:                256,      0,     103,     572,    3702,   0,   0
cryptop:                 88,      0,       0,       0,       0,   0,   0
cryptodesc:              72,      0,       0,       0,       0,   0,   0
FPU_save_area:          832,      0,       0,       0,       0,   0,   0
VNODE:                  504,      0,  259171,  234269,3093667716,   0,   0
VNODEPOLL:              112,      0,    2378,    3166, 3521191,   0,   0
NAMEI:                 1024,      0,       0,    1640,6335195493,   0,   0
S VFS Cache:            108,      0,  141198,  317403,363833693,   0,   0
STS VFS Cache:          148,      0,       0,       0,       0,   0,   0
L VFS Cache:            328,      0,  106789,  175919,2640194372,   0,   0
LTS VFS Cache:          368,      0,       0,       0,       0,   0,   0
NFSMOUNT:               632,      0,       0,       0,       0,   0,   0
NFSNODE:                656,      0,       0,       0,       0,   0,   0
DIRHASH:               1024,      0,    1562,     858,   43297,   0,   0
Mountpoints:            824,      0,       7,      29,       7,   0,   0
pipe:                   728,      0,    3545,    2350,52359357,   0,   0
ksiginfo:               112,      0,    6485,    2524,  597417,   0,   0
itimer:                 344,      0,       1,      21,       1,   0,   0
pfsrctrpl:              152,  10000,       0,       0,       0,   0,   0
pfrulepl:               936,      0,      61,       7,      61,   0,   0
pfstatepl:              288,  10010,    3895,    5153,76419586, 103,   0
pfstatekeypl:           288,      0,    3895,    5452,76419586,   0,   0
pfstateitempl:          288,      0,    3895,    5140,76419586,   0,   0
pfaltqpl:               240,      0,       0,       0,       0,   0,   0
pfpooladdrpl:            88,      0,       0,       0,       0,   0,   0
pfrktable:             1296,   1002,       0,       0,       0,   0,   0
pfrkentry:              160, 200016,       0,       0,       0,   0,   0
pfrkcounters:            64,      0,       0,       0,       0,   0,   0
pffrent:                 32,   5050,       0,       0,       0,   0,   0
pffrag:                  80,      0,       0,       0,       0,   0,   0
pffrcache:               80,  10035,       0,       0,       0,   0,   0
pffrcent:                24,  50022,       0,       0,       0,   0,   0
pfstatescrub:            40,      0,       0,       0,       0,   0,   0
pfiaddrpl:              120,      0,       0,       0,       0,   0,   0
pfospfen:               112,      0,     710,      16,     710,   0,   0
pfosfp:                  40,      0,     420,      84,     420,   0,   0
KNOTE:                  128,      0,   23162,    9405,4939299207,   0,   0
socket:                 680,  25602,   11603,    4309,397310260,   0,   0
unpcb:                  240,  25600,    9628,    3940,314508191,   0,   0
ipq:                     56,    819,       0,     819,50742228, 798,   0
udp_inpcb:              392,  25600,      33,    2577,56270143,   0,   0
udpcb:                   16,  25704,      33,    2655,56270143,   0,   0
tcp_inpcb:              392,  25600,    2477,    2893,26531824,   0,   0
tcpcb:                  976,  25600,    1941,    1679,26531824,   0,   0
tcptw:                   72,   5150,     536,    2764,18854769,   0,   0
syncache:               152,  15375,       1,    2474,25897649,   0,   0
hostcache:              136,  15372,    1677,    1543,  331467,   0,   0
tcpreass:                40,   1680,       1,    1679, 3712750,   0,   0
sackhole:                32,      0,       0,    2020, 3076250,   0,   0
ripcb:                  392,  25600,       0,     200,      31,   0,   0
rtentry:                200,      0,      36,     154,      36,   0,   0
selfd:                   56,      0,   12925,    2384,299775314,   0,   0
SWAPMETA:               288, 4057911,    4364,     485, 4790032,   0,   0
FFS inode:              168,      0,   90122,  204238,38528604,   0,   0
FFS1 dinode:            128,      0,       0,       0,       0,   0,   0
FFS2 dinode:            256,      0,   90122,  184408,38154072,   0,   0
taskq_zone:              48,      0,       0,    2520, 3502406,   0,   0
space_seg_cache:         64,      0,  554621,   86971,230853231,   0,   0
zio_cache:              944,      0,      20,   50576,10934592656,   0,   0
zio_link_cache:          48,      0,      15,   54417,8258779147,   0,   0
zio_buf_512:            512,      0,       0,       0,       0,   0,   0
zio_data_buf_512:       512,      0,       0,       0,       0,   0,   0
zio_buf_1024:          1024,      0,       0,       0,       0,   0,   0
zio_data_buf_1024:     1024,      0,       0,       0,       0,   0,   0
zio_buf_1536:          1536,      0,       0,       0,       0,   0,   0
zio_data_buf_1536:     1536,      0,       0,       0,       0,   0,   0
zio_buf_2048:          2048,      0,       0,       0,       0,   0,   0
zio_data_buf_2048:     2048,      0,       0,       0,       0,   0,   0
zio_buf_2560:          2560,      0,       0,       0,       0,   0,   0
zio_data_buf_2560:     2560,      0,       0,       0,       0,   0,   0
zio_buf_3072:          3072,      0,       0,       0,       0,   0,   0
zio_data_buf_3072:     3072,      0,       0,       0,       0,   0,   0
zio_buf_3584:          3584,      0,       0,       0,       0,   0,   0
zio_data_buf_3584:     3584,      0,       0,       0,       0,   0,   0
zio_buf_4096:          4096,      0,       0,       0,       0,   0,   0
zio_data_buf_4096:     4096,      0,       0,       0,       0,   0,   0
zio_buf_5120:          5120,      0,       0,       0,       0,   0,   0
zio_data_buf_5120:     5120,      0,       0,       0,       0,   0,   0
zio_buf_6144:          6144,      0,       0,       0,       0,   0,   0
zio_data_buf_6144:     6144,      0,       0,       0,       0,   0,   0
zio_buf_7168:          7168,      0,       0,       0,       0,   0,   0
zio_data_buf_7168:     7168,      0,       0,       0,       0,   0,   0
zio_buf_8192:          8192,      0,       0,       0,       0,   0,   0
zio_data_buf_8192:     8192,      0,       0,       0,       0,   0,   0
zio_buf_10240:        10240,      0,       0,       0,       0,   0,   0
zio_data_buf_10240:   10240,      0,       0,       0,       0,   0,   0
zio_buf_12288:        12288,      0,       0,       0,       0,   0,   0
zio_data_buf_12288:   12288,      0,       0,       0,       0,   0,   0
zio_buf_14336:        14336,      0,       0,       0,       0,   0,   0
zio_data_buf_14336:   14336,      0,       0,       0,       0,   0,   0
zio_buf_16384:        16384,      0,       0,       0,       0,   0,   0
zio_data_buf_16384:   16384,      0,       0,       0,       0,   0,   0
zio_buf_20480:        20480,      0,       0,       0,       0,   0,   0
zio_data_buf_20480:   20480,      0,       0,       0,       0,   0,   0
zio_buf_24576:        24576,      0,       0,       0,       0,   0,   0
zio_data_buf_24576:   24576,      0,       0,       0,       0,   0,   0
zio_buf_28672:        28672,      0,       0,       0,       0,   0,   0
zio_data_buf_28672:   28672,      0,       0,       0,       0,   0,   0
zio_buf_32768:        32768,      0,       0,       0,       0,   0,   0
zio_data_buf_32768:   32768,      0,       0,       0,       0,   0,   0
zio_buf_36864:        36864,      0,       0,       0,       0,   0,   0
zio_data_buf_36864:   36864,      0,       0,       0,       0,   0,   0
zio_buf_40960:        40960,      0,       0,       0,       0,   0,   0
zio_data_buf_40960:   40960,      0,       0,       0,       0,   0,   0
zio_buf_45056:        45056,      0,       0,       0,       0,   0,   0
zio_data_buf_45056:   45056,      0,       0,       0,       0,   0,   0
zio_buf_49152:        49152,      0,       0,       0,       0,   0,   0
zio_data_buf_49152:   49152,      0,       0,       0,       0,   0,   0
zio_buf_53248:        53248,      0,       0,       0,       0,   0,   0
zio_data_buf_53248:   53248,      0,       0,       0,       0,   0,   0
zio_buf_57344:        57344,      0,       0,       0,       0,   0,   0
zio_data_buf_57344:   57344,      0,       0,       0,       0,   0,   0
zio_buf_61440:        61440,      0,       0,       0,       0,   0,   0
zio_data_buf_61440:   61440,      0,       0,       0,       0,   0,   0
zio_buf_65536:        65536,      0,       0,       0,       0,   0,   0
zio_data_buf_65536:   65536,      0,       0,       0,       0,   0,   0
zio_buf_69632:        69632,      0,       0,       0,       0,   0,   0
zio_data_buf_69632:   69632,      0,       0,       0,       0,   0,   0
zio_buf_73728:        73728,      0,       0,       0,       0,   0,   0
zio_data_buf_73728:   73728,      0,       0,       0,       0,   0,   0
zio_buf_77824:        77824,      0,       0,       0,       0,   0,   0
zio_data_buf_77824:   77824,      0,       0,       0,       0,   0,   0
zio_buf_81920:        81920,      0,       0,       0,       0,   0,   0
zio_data_buf_81920:   81920,      0,       0,       0,       0,   0,   0
zio_buf_86016:        86016,      0,       0,       0,       0,   0,   0
zio_data_buf_86016:   86016,      0,       0,       0,       0,   0,   0
zio_buf_90112:        90112,      0,       0,       0,       0,   0,   0
zio_data_buf_90112:   90112,      0,       0,       0,       0,   0,   0
zio_buf_94208:        94208,      0,       0,       0,       0,   0,   0
zio_data_buf_94208:   94208,      0,       0,       0,       0,   0,   0
zio_buf_98304:        98304,      0,       0,       0,       0,   0,   0
zio_data_buf_98304:   98304,      0,       0,       0,       0,   0,   0
zio_buf_102400:      102400,      0,       0,       0,       0,   0,   0
zio_data_buf_102400: 102400,      0,       0,       0,       0,   0,   0
zio_buf_106496:      106496,      0,       0,       0,       0,   0,   0
zio_data_buf_106496: 106496,      0,       0,       0,       0,   0,   0
zio_buf_110592:      110592,      0,       0,       0,       0,   0,   0
zio_data_buf_110592: 110592,      0,       0,       0,       0,   0,   0
zio_buf_114688:      114688,      0,       0,       0,       0,   0,   0
zio_data_buf_114688: 114688,      0,       0,       0,       0,   0,   0
zio_buf_118784:      118784,      0,       0,       0,       0,   0,   0
zio_data_buf_118784: 118784,      0,       0,       0,       0,   0,   0
zio_buf_122880:      122880,      0,       0,       0,       0,   0,   0
zio_data_buf_122880: 122880,      0,       0,       0,       0,   0,   0
zio_buf_126976:      126976,      0,       0,       0,       0,   0,   0
zio_data_buf_126976: 126976,      0,       0,       0,       0,   0,   0
zio_buf_131072:      131072,      0,       0,       0,       0,   0,   0
zio_data_buf_131072: 131072,      0,       0,       0,       0,   0,   0
sa_cache:                80,      0,  169005,   69990,3055121733,   0,   0
dnode_t:                856,      0,  421017, 7109495,2175170765,   0,   0
dmu_buf_impl_t:         224,      0,  472433, 7648756,3254849154,   0,   0
arc_buf_hdr_t:          216,      0,10225669,    1517,180970294,   0,   0
arc_buf_t:               72,      0,   83224,  558326,1190088679,   0,   0
zil_lwb_cache:          192,      0,       2,    3158, 6784799,   0,   0
zfs_znode_cache:        368,      0,  169006,   66414,3055121732,   0,   0
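The arc_buf_hdr_t line above lines up with the outsized Header figures in the ARC summaries: 10,225,669 headers of 216 bytes each is about 2.1 GB, in the same ballpark as the ~2.13 GiB L2ARC header size and the ~2.4 GB ARC Header reported earlier. A quick sanity check with plain shell arithmetic, using only the numbers from the vmstat -z output:

```shell
# arc_buf_hdr_t zone from vmstat -z above: SIZE=216 bytes, USED=10225669
hdr_bytes=$((216 * 10225669))
echo "$hdr_bytes bytes (~$((hdr_bytes / 1024 / 1024)) MB) held in ARC headers"
# → 2208744504 bytes (~2106 MB) held in ARC headers
```

In other words, a large share of the "missing" wired memory on this box appears to sit in ARC buffer headers, which on a metadata-heavy L2ARC setup can dwarf the cached data itself.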
 
Re: Memory leak on FreeBSD 9.2-RC3?

Reserving more memory didn't help; it eventually uses up all the available memory as before. Bad things happen when all the memory is in use, such as NFS stopping.
 
Re: Memory leak on FreeBSD 9.2-RC3?

I just did a quick read (well, a scroll through with my eyes in grep mode) of the thread and did not find anything about processes or system logs. Could it be that the one server has a lot of zombie processes, or processes waiting for something? Are there more network connections than on the other servers, or different loads? Is some client machine doing multicast?

The reason for these questions is that I recently heard some blokes debug a problem where it seemed that packets were coming through both the network and WiFi, leading to race conditions, duplicate packets and skewed real-time data. Maybe you have problems with multiple network cards.
 
Re: Memory leak on FreeBSD 9.2-RC3?

Crivens said:
I just did a quick read (well, a scroll through with my eyes in grep mode) of the thread and did not find anything about processes or system logs. Could it be that the one server has a lot of zombie processes, or processes waiting for something? Are there more network connections than on the other servers, or different loads? Is some client machine doing multicast?

The reason for these questions is that I recently heard some blokes debug a problem where it seemed that packets were coming through both the network and WiFi, leading to race conditions, duplicate packets and skewed real-time data. Maybe you have problems with multiple network cards.

It's just an NFS server, and it has worked fine on FreeBSD 9.1 up to now, after switching back from 9.2 with the same workload and content. I have reinstalled, switching between 9.1 and 9.2 a few times, to confirm that the problem is in FreeBSD 9.2. I can't find any zombie processes abusing memory or CPU at the moment.

I'm using the two onboard Intel NICs; no WiFi or additional network card is installed.

I'm trying FreeBSD 10 now; hopefully the problem won't happen again.
 
Re: Memory leak on FreeBSD 9.2-RC3?

It seems FreeBSD 10 also has the same issue, and I have no idea where the memory goes. Possibly it is due to ZFS pool version 5000, because these symptoms never happened on FreeBSD 9.0 and FreeBSD 9.1 with exactly the same configuration and workload.

Is there any command to check which process is occupying the memory? I know free memory is wasted memory; however, bad things happen when it runs out, such as NFS and the network stopping and no disk I/O at all.
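For per-process usage, a ps(1) pipeline like the sketch below lists the biggest resident sets. Note, though, that this only covers userland processes; it won't account for kernel wired memory (ARC, UMA zones), which is exactly the memory disappearing in this thread, so vmstat -m and vmstat -z remain the better tools for that part:

```shell
# Top 10 processes by resident set size (RSS column, in KB)
ps -axo pid,rss,comm | sort -k2 -rn | head -n 10
```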
 
Re: Memory leak on FreeBSD 9.2 and FreeBSD 10?

Matty said:
Did you try to set the vfs.zfs.zio.use_uma to 0?

The default is 0:

Code:
# sysctl vfs.zfs.zio.use_uma
vfs.zfs.zio.use_uma: 0
 
Re: Memory leak on FreeBSD 9.2 and FreeBSD 10?

I applied the following patch, saved as illumos-gate.patch:
Code:
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c        Fri Jan 17 11:49:44 2014
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c        Fri Jan 17 11:49:44 2014
@@ -22,7 +22,7 @@
  * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
  * Copyright (c) 2012, Joyent, Inc. All rights reserved.
  * Copyright (c) 2013 by Delphix. All rights reserved.
- * Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+ * Copyright (c) 2014 by Saso Kiselkov. All rights reserved.
  * Copyright 2013 Nexenta Systems, Inc.  All rights reserved.
  */

@@ -4236,7 +4236,14 @@
         */
        for (ab = list_prev(buflist, head); ab; ab = ab_prev) {
                ab_prev = list_prev(buflist, ab);
+               abl2 = ab->b_l2hdr;

+               /*
+                * Release the temporary compressed buffer as soon as possible.
+                */
+               if (abl2->b_compress != ZIO_COMPRESS_OFF)
+                       l2arc_release_cdata_buf(ab);
+
                hash_lock = HDR_LOCK(ab);
                if (!mutex_tryenter(hash_lock)) {
                        /*
@@ -4248,14 +4255,6 @@
                        continue;
                }

-               abl2 = ab->b_l2hdr;
-
-               /*
-                * Release the temporary compressed buffer as soon as possible.
-                */
-               if (abl2->b_compress != ZIO_COMPRESS_OFF)
-                       l2arc_release_cdata_buf(ab);
-
                if (zio->io_error != 0) {
                        /*
                         * Error - drop L2ARC entry.

Code:
cd /usr/src
patch < illumos-gate.patch
# the patch changes kernel ZFS code, so the kernel must be rebuilt:
make buildkernel
make installkernel
# then reboot into the patched kernel

According to the following thread, applying the patch didn't help:
http://lists.freebsd.org/pipermail/freebsd-current/2014-January/047706.html
 