Jail process forked

Hello,

I noticed a strange behavior on FreeBSD 9.1. When I run a jail, it seems some processes are forked:
Code:
ps aux
USER     PID %CPU %MEM   VSZ  RSS TT  STAT STARTED    TIME COMMAND
root    3465  0.0  0.0 20376 3460 ??  SsJ   8:20AM 0:00.04 sendmail: accepting connections (sendmail)
smmsp   3468  0.0  0.0 20376 3376 ??  IsJ   8:20AM 0:00.00 sendmail: Queue runner@00:30:00 for /var/spool/clientmqueue (sendmail)
root    3472  0.0  0.0 14128 1440 ??  IsJ   8:20AM 0:00.01 /usr/sbin/cron -s
root    3547  0.0  0.0 12052 1496 ??  SsJ   8:20AM 0:00.01 /usr/sbin/syslogd -s
root    3595  0.0  0.0 46744 3536 ??  IsJ   8:20AM 0:00.00 /usr/sbin/sshd
smmsp   3748  0.0  0.0 20380 3420 ??  IsJ   8:22AM 0:00.00 sendmail: Queue runner@00:30:00 for /var/spool/clientmqueue (sendmail)
root    3752  0.0  0.0 14128 1480 ??  IsJ   8:22AM 0:00.01 /usr/sbin/cron -s
As you can see, there are two cron and two sendmail queue-runner processes, and cron jobs are executed twice.

Another issue: I run a Left4Dead2 server in a jail with the Linux emulation layer. At startup the server is started five or six times, leaving some <defunct> zombie processes while others remain in the "futex" state. I have to kill them by hand, but they come back at each server restart.

I wrote a small test program to check that fork() works correctly:
Code:
#include <stdio.h>
#include <unistd.h>

int main(int argc, char ** argv)
{
  pid_t pid = fork();

  if(pid == 0)
    {
      printf("forked child!\n");
    }
  else
    {
      printf("parent!\n");
    }

  while(1)
    sleep(1);
}
The code runs fine, and only two processes are running.

So, have I missed something with the jail?
Code:
uname -a
FreeBSD nsXXXXX.XXX.net 9.1-RELEASE FreeBSD 9.1-RELEASE #0: Thu Mar 14 22:45:06 CET 2013     root@nsXXXXXX.XXX.net:/usr/obj/usr/src/sys/PGMKERN  amd64
The server is a dedicated OVH machine, but the kernel and userland were recompiled and installed on a clean ZFS partition. Could a kernel option be causing this issue?

Thanks.
 
It could simply be two distinct jails. Maybe you only have one, and it didn't shut down cleanly before you restarted it?
 
Yes, I use VIMAGE with epair interfaces for the jails. The ps aux output was taken from inside a jail. I have about ten jails, and each has two cron and two sendmail processes. They had been shut down cleanly, because the server was rebooted just before I took the screenshot.
 
This was my experience with VIMAGE jails too, back in 2010/2011. The rc script would run twice: once to get the jail running, then the epair device was assigned, then again to make networking work properly. This always resulted in two crons running at all times.

I never figured out a solution back then and moved to an ESXi-based setup for virtualization instead. Things may have changed since then, though; maybe poke the mailing lists?
 
Ok, thanks. I will look for a solution and certainly will poke the mailing list.

So I guess my other issue, the multiple forks of the Left4Dead2 server, is unrelated. I have other game servers using the Linux emulation layer without any bugs. The Left4Dead2 server seems to recreate processes at random times after I kill them. Is there anything in the Linux emulation implementation that could lead to this behavior? (I read about the futex bug in older versions of FreeBSD, but I assume it's fixed.)

Here is part of the top output for the Left4Dead2 server:
Code:
  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
 9099 xxx        1  26    0   531M   209M nanslp  6  21:58  7.96% srcds_linux
 9107 xxx        1  20    0   531M   209M select  1   0:43  0.00% srcds_linux
 9100 xxx        1  20    0   531M   209M futex   4   0:05  0.00% srcds_linux
 9117 xxx        1  20    0   531M   209M futex   6   0:04  0.00% srcds_linux
 9184 xxx        1  20    0   531M   209M futex   5   0:04  0.00% srcds_linux
 ...
 
blaize said:
Hello,

I noticed a strange behavior on FreeBSD 9.1. When I run a jail, it seems some processes are forked:
Can you post the jail.conf entry and the command you start the jail with? You can also use -o ppid with ps to see what the parent processes are.
blaize said:
At startup the server is started five or six times, leaving some <defunct> zombie processes while others remain in the "futex" state.
No point in killing a zombie; it's already dead. The PID sticks around to preserve the process-exit information, since nothing has called wait(2) yet.
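For completeness, the reaping can be sketched in a few lines. This is a minimal, hypothetical example (not code from this thread): the parent calls waitpid(2), which collects the child's exit status and releases the zombie's process-table entry.
Code:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0)
        _exit(42);          /* child exits; it is a zombie until reaped */

    /* waitpid() collects the exit status, removing the zombie entry */
    int status;
    if (waitpid(pid, &status, 0) == pid && WIFEXITED(status))
        printf("reaped child %d, exit status %d\n",
               (int)pid, WEXITSTATUS(status));
    return 0;
}
Without the waitpid() call, the child would stay visible in ps as <defunct> until the parent exits and init reaps it.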

Kevin Barry
 
ta0kira said:
Can you post the jail.conf entry and the command you start the jail with? You can also use -o ppid with ps to see what the parent processes are.

Here is the ps output with PPID; all the processes are parented to init.
Code:
 ps auxo ppid
USER     PID %CPU %MEM    VSZ   RSS TT  STAT STARTED    TIME COMMAND           PPID
root    3031  0.0  0.0  20376  3600 ??  SsJ   8:19AM 0:00.82 sendmail: accept     1
smmsp   3034  0.0  0.0  20376  3376 ??  IsJ   8:19AM 0:00.01 sendmail: Queue      1
root    3038  0.0  0.0  14128  1440 ??  IsJ   8:19AM 0:00.13 /usr/sbin/cron -     1
root    3115  0.0  0.0  12052  1500 ??  SsJ   8:19AM 0:00.10 /usr/sbin/syslog     1
root    3164  0.0  0.0  46744  3556 ??  IsJ   8:19AM 0:00.00 /usr/sbin/sshd       1
smmsp   3190  0.0  0.0  20380  3440 ??  IsJ   8:20AM 0:00.01 sendmail: Queue      1
root    3194  0.0  0.0  14128  1476 ??  IsJ   8:20AM 0:00.12 /usr/sbin/cron -     1
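A PPID of 1 just means the daemons have detached from whatever started them. As a hypothetical illustration (not code from this thread): when a process's parent exits, the orphan is reparented to init, so on a stock FreeBSD or Linux box the printed PID below is 1 (it may differ under a process supervisor or reaper).
Code:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    pid_t child = fork();
    if (child < 0) { perror("fork"); return 1; }

    if (child == 0) {                   /* intermediate child */
        if (fork() == 0) {              /* grandchild */
            sleep(1);                   /* let the intermediate child exit */
            pid_t pp = getppid();       /* now init (or a reaper) */
            if (write(fd[1], &pp, sizeof pp) < 0)
                _exit(1);
            _exit(0);
        }
        _exit(0);                       /* orphan the grandchild */
    }

    waitpid(child, NULL, 0);            /* reap the intermediate child */
    pid_t pp;
    if (read(fd[0], &pp, sizeof pp) != sizeof pp)
        return 1;
    printf("grandchild was reparented to PID %d\n", (int)pp);
    return 0;
}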

My jail configuration is too long to post in full, so here is the rc.conf section for one jail:

Code:
#
# Jails networking
#
cloned_interfaces="bridge0"
#ifconfig_bridge0="addm em0"
gateway_enable="YES"
ipv6_gateway_enable="YES"

#
# Jails configuration
#
jail_enable="YES"
jail_v2_enable="YES"
jail_set_hostname_allow="NO"

#jail_left4dead
jail_left4dead_name="left4dead"
jail_left4dead_rootdir="/jail/left4dead"
jail_left4dead_hostname="left4deadbot.localnet"
jail_left4dead_ip="192.168.1.3"
jail_left4dead_devfs_enable="YES"
jail_left4dead_devfs_ruleset="devfsrules_jail"
jail_left4dead_mount_enable="YES"
jail_left4dead_fdescfs_enable="NO"
jail_left4dead_procfs_enable="NO"
jail_left4dead_fstab="/etc/fstab.left4dead"
jail_left4dead_vnet_enable="YES"
jail_left4dead_exec_prestart0="ifconfig epair3 create"
jail_left4dead_exec_prestart1="ifconfig bridge0 addm epair3a"
jail_left4dead_exec_prestart2="ifconfig epair3a up"
jail_left4dead_exec_earlypoststart0="ifconfig epair3b vnet left4dead"
jail_left4dead_exec_afterstart0="ifconfig lo0 127.0.0.1"
jail_left4dead_exec_afterstart1="ifconfig epair3b 192.168.1.3 netmask 255.255.255.0 up"
jail_left4dead_exec_afterstart2="route add default 192.168.1.254"
jail_left4dead_exec_afterstart3="/bin/sh /etc/rc"
jail_left4dead_exec_poststop0="ifconfig epair3a down"
jail_left4dead_exec_poststop1="ifconfig bridge0 deletem epair3a"
jail_left4dead_exec_poststop2="ifconfig epair3a destroy"

I start the jail with this piece of script code:
Code:
elif [ "$CMD" = "start" ]
then
	pfctl -d
	/etc/rc.d/jail start "$JAILNAME"
	pfctl -e -f /etc/pf.conf

I disable pf during jail startup because, with pf enabled, creating and moving the epair interfaces reliably caused a kernel panic.
 
Code:
jail_left4dead_exec_earlypoststart0="ifconfig epair3b vnet left4dead"
This was new to me. :) Thank you.

Code:
jail_left4dead_exec_afterstart0="ifconfig lo0 127.0.0.1"
jail_left4dead_exec_afterstart1="ifconfig epair3b 192.168.1.3 netmask 255.255.255.0 up"
jail_left4dead_exec_afterstart2="route add default 192.168.1.254"
You might want to configure this in the jail's /etc/rc.conf instead:
Code:
ifconfig_lo0="inet 127.0.0.1/8" # Not sure if this is even needed
ifconfig_epair3b="inet 192.168.1.3 netmask 255.255.255.0 up"
defaultrouter="192.168.1.254"

Also try dropping this line, just to test; the jail rc script already runs /etc/rc inside the jail by default, so this likely starts everything a second time:
Code:
jail_left4dead_exec_afterstart3="/bin/sh /etc/rc"
 
Hello, sorry for the slow response; I was busy with other things.

Anyway, we migrated our server to CentOS with KVM, so my problem is solved: I now run FreeBSD in a virtual machine only for services that don't need the Linux layer, and it works as expected. The game server binaries run fine on Ubuntu Server.

I didn't have time to test removing this line:
Code:
jail_left4dead_exec_afterstart3="/bin/sh /etc/rc"
But I think that should fix the daemons being started twice.

Thanks for helping.
 
Hello,

I've run an L4D2 server on FreeBSD 8.x (since 2010) and on 9.1 (now), without using a jail, and I've always seen six or more srcds_linux processes. Too bad you switched to CentOS.
 