[Solved] Why are there so many daemons working for the same service?

Last Sunday, I successfully migrated to a new NAS, as the disks were starting to fail.

The old NAS had 16 GB of RAM, while the new one had just 8 GB. I knew that would not be enough, as there are more than 25 active users, but the machine came with just 8 GB and I was told to wait for reinforcements.

Come Monday morning, I started receiving calls from users telling me the "network drives" on their Windows machines were too slow. "There is not enough memory on the machine" was my initial thought, and I reverted to the old NAS.

I received said upgrade during the week and completed another migration 10 minutes ago. All seems fine for now, but I will have to wait until Monday morning to be sure.

After the migration, I thought it would be a good idea to run htop, and realized there are many smbd daemons at work, each consuming around 177 MB of memory!


Code:
    0[                                                                           0.0%]   4[                                                                           0.0%]
    1[                                                                           0.0%]   5[                                                                           0.0%]
    2[                                                                           0.0%]   6[                                                                           0.0%]
    3[                                                                           0.0%]   7[                                                                           0.0%]
  Mem[||||||||||||||||||||||||||||||||||||||||||                          2.51G/15.9G] Tasks: 40, 0 thr, 21 kthr; 2 running
  Swp[                                                                       0K/8.00G] Load average: 0.14 0.10 0.09
                                                                                       Uptime: 03:40:04

  [Main]
  PID USER       PRI  NI  VIRT   RES S  CPU% MEM%▽  TIME+  Command
 1932 mysql       20   0  839M  417M S   0.0  2.6  0:08.17 /usr/local/libexec/mysqld --defaults-extra-file=/usr/local/etc/mysql/my.cnf --basedir=/usr/local --datadir=/var/db
26840 root        20   0  178M  270M S   0.0  1.7  0:01.45 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
 2346 root        20   0  177M  269M S   0.0  1.6  0:00.17 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
13162 root        20   0  177M  267M S   0.0  1.6  0:00.03 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
62036 root        20   0  176M  266M S   0.0  1.6  0:00.02 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
19811 root        20   0  176M  266M S   0.0  1.6  0:00.02 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
18461 root        20   0  176M  266M S   0.0  1.6  0:00.05 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
18460 root        20   0  176M  266M S   0.0  1.6  0:00.03 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
 2306 root        20   0  176M  266M S   0.0  1.6  0:00.62 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
 2310 root        20   0  176M  266M S   0.0  1.6  0:00.00 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
 2308 root        20   0  136M  187M S   0.0  1.1  0:00.02 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
 2309 root        20   0  131M  187M S   0.0  1.1  0:00.00 /usr/local/sbin/smbd --daemon --configfile=/usr/local/etc/smb4.conf
 2357 root        20   0 55600 37172 S   0.0  0.2  0:01.04 /usr/local/bin/perl /usr/local/lib/webmin/miniserv.pl /usr/local/etc/webmin/miniserv.conf
 2301 root        20   0 41524 21440 S   0.0  0.1  0:00.44 /usr/local/sbin/nmbd --daemon --configfile=/usr/local/etc/smb4.conf
 1360 root        20   0 56100 20256 S   0.0  0.1  0:00.62 /usr/local/sbin/syslog-ng -f /usr/local/etc/syslog-ng.conf -p /var/run/syslog.pid
18475 root        20   0 25400 11632 S   0.0  0.1  0:00.14 sshd: root@pts/0
59902 root        47   0 22604 10668 S   0.0  0.1  0:00.00 sshd: /usr/sbin/sshd [listener] 0 of 10-100 startups
62060 nobody      20   0 20816 10116 S   0.0  0.1  0:00.01 proftpd: (accepting connections)
 1530 root        20   0 23412  7832 S   0.0  0.0  0:00.18 /usr/sbin/ntpd -p /var/db/ntp/ntpd.pid -c /etc/ntp.conf
 1359 root        68   0 23412  6900 S   0.0  0.0  0:00.00 /usr/local/sbin/syslog-ng -f /usr/local/etc/syslog-ng.conf -p /var/run/syslog.pid
18477 root        20   0 14768  4740 S   0.0  0.0  0:00.06 -bash
 1192 root        20   0 14404  4736 S   0.0  0.0  0:00.03 /sbin/devd
F1Help  F2Setup F3SearchF4FilterF5Tree  F6SortByF7Nice -F8Nice +F9Kill  F10Quit
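For a rough tally, the smbd workers can be counted and their reported RES summed straight from ps(1) output. A sketch, assuming `ps aux` reports RSS in kilobytes in the sixth field (true on FreeBSD and Linux; adjust the field number if your ps differs):

```shell
# Count smbd processes and sum their RSS. Note that RES/RSS includes
# memory shared between the smbd children (the binary, shared libraries,
# mmap'ed tdb files), so the sum considerably overstates real physical use.
ps aux | awk '/[s]mbd/ { n++; kb += $6 }
              END { printf "%d smbd processes, %.0f MB total RSS\n", n, kb/1024 }'
```

The `[s]mbd` bracket trick keeps the awk process itself from matching its own command line.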

Yes, there are some VMs running on the site, but no one is accessing any files, at least none that I know of. Is it normal for Samba to be so resource-hungry, or am I doing something wrong? Does it have something to do with my Samba configuration?

Code:
aio write size = 16384
aio read size = 16384
aio write behind = true
 
The RES measurements might be of greater interest.

In "RES column in top(1) output" (2013), Charles Swiger wrote:

… Memory that has been allocated but not written to is associated with the process address space in terms of accounting, but does not actually consume physical memory. There's also copy-on-write memory (used for the program executable code itself, which is typically also marked read-only), mmap()ing big sparse files or device special files like a video framebuffer (i.e., an X11 server), and probably a few other things which can reserve lots of resident memory without actually consuming physical memory.
 
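The sparse-file case Swiger mentions is easy to demonstrate on disk, and it is a good analogy for the memory accounting: apparent size is reserved without blocks being allocated. A sketch using GNU stat syntax (on FreeBSD, `stat -f %z` replaces `stat -c %s`):

```shell
# A sparse file "reserves" 1 GB of apparent size while allocating almost
# no blocks -- much like memory that is mapped but never written, which
# counts toward a process's address space without consuming physical RAM.
truncate -s 1G sparse.img
stat -c %s sparse.img    # apparent size: 1073741824 bytes
du -k sparse.img         # blocks actually allocated: close to 0
rm sparse.img
```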
SMB3 Multi-Channel maybe?
This is the first time I have heard of it. Google gave me this:

[Screenshot: search result explaining SMB3 Multi-Channel]


So if I'm not mistaken, it is the Samba server's ability to "use multiple network connections simultaneously." I'd say there are multiple paths between the server and the clients, as the clients reside on VMware ESXi-7.0U3n-21930508, which is configured to use all 4 ports:
[Screenshot: ESXi network configuration showing the four ports]


The NAS itself has two em(4) interfaces, configured as a lagg(4) aggregate:

Code:
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 172.16.8.4/24"

Should I worry?
 

Code:
prefork children (G)

       This option controls the number of worker processes that are
       started for each service when prefork process model is enabled (see
       samba(8) -M). The prefork children are only started for those
       services that support prefork (currently ldap, kdc and netlogon).
       For processes that don't support preforking all requests are
       handled by a single process for that service.

       This should be set to a small multiple of the number of CPU's
       available on the server.

       Additionally the number of prefork children can be specified for an
       individual service by using "prefork children: service name", i.e.
       "prefork children:ldap = 8" to set the number of ldap worker
       processes.

       Default: prefork children = 4
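Per the excerpt, a per-service worker count goes in the global section. A hypothetical smb4.conf fragment (values illustrative; this only takes effect when samba(8) runs with the prefork process model, i.e. for services like ldap, kdc and netlogon, not for the plain smbd file server shown in the htop output above):

```
[global]
        prefork children = 8
        prefork children:ldap = 4
```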
 

I see. Thanks.
 