Is wired memory usage supposed to be very high?

olav

Well-Known Member

Reaction score: 28
Messages: 375

top says this:

Code:
last pid: 59459;  load averages:  0.00,  0.00,  0.00              up 32+12:06:27  20:04:27
37 processes:  1 running, 36 sleeping
CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 28M Active, 480M Inact, 4978M Wired, 120M Cache, 623M Buf, 319M Free
Swap: 4096M Total, 9100K Used, 4087M Free

Wired is almost 5 GB; is this a memory leak? I'm running FreeBSD 8.1-RELEASE.

Writing to the ZFS pool is also very slow at the moment: I normally get around 70-90 MB/s, but now I only get about 20 MB/s.
 

phoenix

Administrator
Staff member
Administrator
Moderator

Reaction score: 1,289
Messages: 4,099

And, if you look down the screen, you'll see all the apps running and their memory usage. Are there any using lots of memory?
 

Matty

Active Member

Reaction score: 12
Messages: 183

ZFS ARC cache, maybe? What's your max ARC cache size in /boot/loader.conf?
 
olav (OP)

Well-Known Member

Reaction score: 28
Messages: 375

There are no applications using a lot of memory.

Sorted by size:
Code:
last pid: 66459;  load averages:  0.00,  0.00,  0.00                                                         up 33+01:02:57  09:00:57
37 processes:  1 running, 36 sleeping
CPU:  0.0% user,  0.0% nice,  0.4% system,  0.4% interrupt, 99.2% idle
Mem: 12M Active, 117M Inact, 5156M Wired, 1836K Cache, 623M Buf, 638M Free
Swap: 4096M Total, 13M Used, 4083M Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
 1051 pgsql         1  44    0 54092K  1140K select  1   0:38  0.00% postgres
 1047 pgsql         1  44    0 54092K  1056K select  1   0:22  0.00% postgres
 1049 pgsql         1  44    0 54092K   968K select  0   1:39  0.00% postgres
 1050 pgsql         1  44    0 54092K   928K select  1   1:04  0.00% postgres
60241 root          1  44    0 46264K  3228K select  0   5:36  0.00% smbd
60212 root          1  44    0 46024K  1928K select  0   0:00  0.00% smbd
60218 root          1  76    0 46024K  1924K select  1   0:00  0.00% smbd
66424 olav          1  44    0 38140K  4808K select  0   0:00  0.00% sshd
66422 root          1  45    0 38140K  4752K sbwait  0   0:00  0.00% sshd
60223 root          1  44    0 34932K  1292K select  0   0:00  0.00% winbindd
60219 root          1  44    0 34844K  1436K select  1   0:00  0.00% winbindd
60224 root          1  44    0 34844K  1228K select  0   0:00  0.00% winbindd
60220 root          1  44    0 34840K  1480K select  0   0:00  0.00% winbindd
60204 root          1  44    0 32772K  1652K select  1   0:01  0.00% nmbd
 1115 root          1  44    0 26172K   780K select  0   0:00  0.00% sshd
 1053 pgsql         1  44    0 22796K  1028K select  1   0:36  0.00% postgres
 1052 pgsql         1  44    0 22796K   936K select  0   0:18  0.00% postgres
 1198 root          1  44   -4 21676K   624K ttyin   0   0:00  0.00% login
  888 root          1  44    0 15592K   936K nanslp  1   0:01  0.00% smartd
  881 root          1  76    0 14084K   372K accept  0   0:00  0.00% vsftpd
 1123 root          1  44    0 12140K  1280K select  1   0:23  0.00% sendmail
 1127 smmsp         1  44    0 12140K   708K pause   0   0:00  0.00% sendmail
66425 olav          1  45    0 10256K  2568K wait    1   0:00  0.00% bash
66427 olav          1  44    0  9372K  2136K CPU1    1   0:00  0.00% top
  749 root          1  44    0  7992K   496K select  0   0:02  0.00% rpcbind
 1134 root          1  76    0  7988K   384K nanslp  1   0:05  0.00% cron
  727 root          1  44    0  7060K   600K select  0   0:04  0.00% syslogd
 1204 root          1  76    0  6928K   316K ttyin   0   0:00  0.00% getty
 1200 root          1  76    0  6928K   316K ttyin   1   0:00  0.00% getty
 1205 root          1  76    0  6928K   316K ttyin   1   0:00  0.00% getty
 1202 root          1  76    0  6928K   316K ttyin   0   0:00  0.00% getty
 1199 root          1  76    0  6928K   316K ttyin   1   0:00  0.00% getty
 1203 root          1  76    0  6928K   316K ttyin   0   0:00  0.00% getty
 1201 root          1  76    0  6928K   316K ttyin   1   0:00  0.00% getty
 1034 root          1  76    0  5968K   300K select  1   0:00  0.00% rsync

ZFS ARC cache, maybe? What's your max ARC cache size in /boot/loader.conf?
There is no max setting; I'm letting FreeBSD tune it automatically, since I'm running the amd64 version.

Oh, by the way: the ZFS pool seems to perform normally now. I get full write speed again, and I haven't changed anything.
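
For anyone wanting to check whether the wired memory really is the ARC: on FreeBSD the ARC's statistics are exposed through sysctl, so you can compare the ARC size directly against the Wired figure in top. A sketch of the commands (FreeBSD-specific; the exact OID names can vary between releases, so treat these as an example rather than gospel):

```shell
# Current ARC size in bytes (requires ZFS to be loaded)
sysctl kstat.zfs.misc.arcstats.size

# Target size and maximum the auto-tuner is working toward
sysctl kstat.zfs.misc.arcstats.c kstat.zfs.misc.arcstats.c_max

# Same size value, roughly converted to megabytes for easy
# comparison with top's "Wired" column
sysctl -n kstat.zfs.misc.arcstats.size | awk '{ printf "%.0f MB\n", $1 / 1048576 }'
```

If the arcstats size accounts for most of the wired memory, it's cache rather than a leak.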
 

User23

Well-Known Member

Reaction score: 68
Messages: 496

If you use ZFS, then it is normal. On an NFS file server with 32 GB RAM I saw up to 21 GB of wired memory. Without any ZFS tuning this seems to be the maximum, since 7-8 GB of memory remain free.
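
If you ever do want to cap the ARC (for example to leave guaranteed headroom for applications), the usual approach on FreeBSD is the vfs.zfs.arc_max loader tunable. A minimal sketch; the 4G value here is purely an example, not a recommendation for this machine:

```
# /boot/loader.conf -- cap the ZFS ARC at 4 GB (example value)
vfs.zfs.arc_max="4G"
```

It's a boot-time tunable, so it takes effect after a reboot. Left unset, ZFS sizes the ARC automatically and releases memory under pressure, which is why a large Wired figure on an otherwise idle box is usually harmless.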
 