[Solved] systemd alternative on FreeBSD

I am currently using systemd to manage web applications (mainly written in Go and Ruby) as services on Linux (Ubuntu 16.04 and CentOS 7), and I am trying to port my web server configuration over to FreeBSD. My code runs fine on FreeBSD, but I am not sure how to manage the web application processes. Mainly I want the following:
  • Web applications to start/stop with the OS;
  • Web applications to be manageable as services (like I want to be able to do service mywebapplication restart);
  • And have the web applications automatically restart on crashes.
I am currently running systemd configuration files (unit files) like the ones described here: https://github.com/puma/puma/blob/master/docs/systemd.md. What am I supposed to use for managing services on FreeBSD?
 

SirDice

Administrator
Staff member
Moderator
What am I supposed to use for managing services on FreeBSD?
Plain "old" rc(8) style scripts controlled from rc.conf(5). Typically started and stopped using service(8).
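For illustration, a sketch of the usual workflow (nginx here is just a stand-in for whatever rc script your port installs):

```sh
# Enable the service at boot by adding a line to /etc/rc.conf;
# sysrc(8) edits rc.conf safely from the command line:
sysrc nginx_enable="YES"

# Then manage it with service(8):
service nginx start
service nginx status
service nginx restart

# 'onestart' runs a service once even if it is not enabled in rc.conf:
service nginx onestart
```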

And have the web applications automatically restart on crashes
Not happening with the "standard" scripts unfortunately. But there are several solutions for that. The de facto standard option is the venerable sysutils/daemontools. But there's also sysutils/fsc which is a bit like Solaris' counterpart. I think there are a few others too but can't recall their names.
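For reference, daemontools supervision is driven by a per-service run script inside a service directory that svscan(8) watches; if the process exits, supervise(8) starts it again. A minimal sketch (the service directory and the myapp binary are hypothetical):

```sh
#!/bin/sh
# /var/service/myapp/run -- executed by supervise(8); if the process
# exits for any reason, supervise restarts it automatically.
exec 2>&1
# setuidgid (from daemontools) drops privileges before exec'ing:
exec setuidgid my_app_user /usr/local/bin/myapp
```

Once supervised, `svc -t /var/service/myapp` sends the process a TERM, and supervise brings it right back up.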
 
I am definitely looking into sysutils/daemontools, which is also available through Ubuntu's official package repositories (although I will probably stick with my current systemd unit files for production on Ubuntu, since they already work reliably, with no issues).
 
SirDice's answer about rc is what we've been using for 13 years, and we have never had an issue with the very few instances of power loss. The Go application we ran on the server, nginx, and Apache all restarted using the standard, easy-to-use FreeBSD tools.
 
I am planning on writing the "plain old rc style scripts" so I can manage my web applications as system services (this is what I am used to doing; I am also making sure that my applications do not run as root). Then I am going to use sysutils/monit to monitor the uptime of my applications (and to restart them if they crash).
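A monit stanza for this setup might look roughly like the following (a sketch, assuming the rc script is installed as myapp and writes the pidfile shown; adjust paths to your layout):

```
# Fragment for /usr/local/etc/monitrc (hypothetical paths)
check process myapp with pidfile /home/my_app_user/my_app_user/Myapp/tmp/pids/server.pid
    start program = "/usr/sbin/service myapp start"
    stop program  = "/usr/sbin/service myapp stop"
    if 5 restarts within 5 cycles then timeout
```

The last line stops monit from restart-looping a service that keeps dying immediately, which is roughly the behaviour SMF provides out of the box.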
 
This is a sample of an RC style script that I am currently planning on using for my web applications (in particular, a Ruby on Rails application):

Code:
#!/bin/sh

# PROVIDE: myapp
# REQUIRE: LOGIN postgresql

. /etc/rc.subr

name="myapp"
rcvar="${name}_enable"   # set_rcvar() is deprecated; assign the rcvar directly

load_rc_config $name

myapp_user="my_app_user"
myapp_chdir="/home/my_app_user/my_app_user/Myapp"
pidfile="${myapp_chdir}/tmp/pids/server.pid"
command="/home/my_app_user/.rbenv/shims/ruby"
command_args="-S bundle exec rails s -d -p 3000 -e production -b 0.0.0.0"
stop_cmd="myapp_stop"

myapp_stop()
{
    pids=`cat ${pidfile}`
    /bin/kill ${pids}
    wait_for_pids ${pids}
}

run_rc_command "$1"
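With that script installed (for example as /usr/local/etc/rc.d/myapp, mode 555), the service still has to be enabled in rc.conf(5) before service(8) will start it at boot:

```sh
# Enable the application at boot:
sysrc myapp_enable="YES"

# Then manage it like any base-system service:
service myapp start
service myapp status
service myapp restart
```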
 

wblock@

Developer
TrueOS is having good success with OpenRC. Joe Maloney gave a presentation about it at KnoxBUG, and I suspect he'll be at BSDCan. It's mostly compatible with the current rc system, but scripts can be a lot simpler while still providing more features and speed.
 
SirDice's answer about rc is what we've been using for 13 years, and we have never had an issue with the very few instances of power loss. The Go application we ran on the server, nginx, and Apache all restarted using the standard, easy-to-use FreeBSD tools.

When I was referring to automatic restarts, I was not referring to an application restarting whenever the server restarts, but to the application being restarted after the application itself crashes. For example, if you check the status of a service (configured via rc), get its PID, and then run kill -9 PID, there is no way (as far as I know) to have the application restarted automatically through rc alone.
 
When I was referring to automatic restarts, I was not referring to an application restarting whenever the server restarts, but to the application being restarted after the application itself crashes.

Services should not crash. If they do, then there is a reason for it. A proactive sysadmin will investigate why an application crashed instead of relying on the system to restart it. After so many years of managing FreeBSD systems, I have almost never had a service crash without a valid reason. So restarting the service would not help unless I fixed the problem first. I have had some issues with Linux, but they involved systems that had not been installed and configured by me.
 
Services should not crash. If they do, then there is a reason for it. A proactive sysadmin will investigate why an application crashed instead of relying on the system to restart it. After so many years of managing FreeBSD systems, I have almost never had a service crash without a valid reason. So restarting the service would not help unless I fixed the problem first. I have had some issues with Linux, but they involved systems that had not been installed and configured by me.
The main reason I want automatic restarts is to avoid downtime in the event of an application crash. I haven't really seen an application crash before but I mainly want this for peace of mind as downtime is really bad.

Also, do not rule out the possibility of applications crashing or behaving improperly (even if they never crash); you always have to check your logs (like checking for 502s when running an app behind NGINX as a reverse proxy). For example, certain servers dynamically spawn more instances of your application to handle more requests, so they might be failing to spawn more instances during peak usage times (therefore losing connections; I have seen this occur once due to a lack of available memory) despite the fact that they appear to be running smoothly. Additionally, there may be times when the runtime that your application is built upon has a bug that causes it to crash after running for a while. You may not even know about it unless you are actively involved in the development community of the programming language (https://github.com/golang/go/issues/15658).

It does not matter how well you configure your system; there will always be reasons outside of your control that may cause your application to crash. Also, there is no reason not to have automatic restarts; a proactive sysadmin would be checking the logs of their services anyway (for example, on systemd you could check the logs to see whether the application restarted). Additionally, there are even systems designed around the concept of "crash-only" software (https://en.wikipedia.org/wiki/Crash-only_software).
 
The main reason I want automatic restarts is to avoid downtime in the event of an application crash. I haven't really seen an application crash before but I mainly want this for peace of mind as downtime is really bad.

Point taken and understood.

Also, do not rule out the possibility of applications crashing or behaving improperly (even if they never crash); you always have to check your logs (like checking for 502s when running an app behind NGINX as a reverse proxy). For example, certain servers dynamically spawn more instances of your application to handle more requests, so they might be failing to spawn more instances during peak usage times (therefore losing connections; I have seen this occur once due to a lack of available memory) despite the fact that they appear to be running smoothly. Additionally, there may be times when the runtime that your application is built upon has a bug that causes it to crash after running for a while. You may not even know about it unless you are actively involved in the development community of the programming language (https://github.com/golang/go/issues/15658).

Unfortunately, such behaviour cannot be monitored. systemd will restart a service if it is down. You might have a poorly written application that consumes your memory, and eventually the service will crash. I had a developer who had written such a bad database query that it eventually exhausted a dedicated MySQL server running CentOS Linux and kicked up the load on the FreeBSD web server. I ended up logging all queries in the DB just to prove that his coding was bad.

It does not matter how well you configure your system; there will always be reasons outside of your control that may cause your application to crash. Also, there is no reason not to have automatic restarts; a proactive sysadmin would be checking the logs of their services anyway (for example, on systemd you could check the logs to see whether the application restarted). Additionally, there are even systems designed around the concept of "crash-only" software (https://en.wikipedia.org/wiki/Crash-only_software).

Good coding, proper capacity planning and well-built servers are the solution. systemd is not.
 
Good coding, proper capacity planning and well-built servers are the solution. systemd is not.

I am not advocating systemd; I was just using it as an example, since I have had experience with it in a production environment (besides my recent experience with rc, it is the only init system I have any familiarity with, TBH). I believe that in some instances, having applications restart automatically after they crash is reasonable, since it helps minimize downtime. You cannot catch all problems before launching applications into production, and there is always a chance of an application crashing in a live production environment. Are you really OK with your services being down, potentially for hours or days, just to avoid having them restart automatically as a matter of principle?

As I stated earlier in the thread, I am planning on using sysutils/monit for monitoring and for automatically restarting applications if they happen to crash.
 
Are you really OK with your services being down potentially for hours/days just to avoid having them restart automatically as a matter of principle?
That would never happen. I always use SMS alerting on critical services. That said, I can certainly see your point, as I mentioned above. I am just really against systemd, but maybe that is a different discussion.
Just a piece of friendly advice: do not rely on something that will restart a service automatically. You always need to know that something like that happened, and investigate the reason.

Let me bring up another example. It is a bit off topic, but I like cars and motorbikes. I believe that ABS and ESP are really essential for the average driver. However, I oppose the use of other technologies such as lane keeping assist systems, etc. A driver who cannot keep their lane should not drive in the first place. I hope that you get my point ;)
 
Might be interesting to note that Solaris/SmartOS has SMF, which can be configured to automatically restart services. If memory serves, if a service dies it will be restarted, but should that happen three times (presumably within some time frame), then SMF gives up on it.

It was advertised to me that service administration should occur only through the SMF commands (svcadm, svcs, and svccfg). If you start kill(1)ing services, or crashing them in other ways, then the management system will assume something has been done maliciously or by accident, and will restart the service.

SMF is an entire framework dedicated to service management and ties in with the Solaris Fault Manager, so the tooling to monitor and inspect when things go wrong is quite good.
I've got too little experience with systemd or the others to compare.
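For the curious, typical SMF interaction looks roughly like this (a sketch from memory, using ssh as a stand-in service; FMRIs abbreviated):

```sh
# List all service instances, and diagnose anything in maintenance state:
svcs -a
svcs -x

# Enable/disable (persists across reboots) and restart a service:
svcadm enable ssh
svcadm restart ssh

# Inspect a service's configured properties:
svccfg -s ssh listprop
```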
 
Plain "old" rc(8) style scripts controlled from rc.conf(5). Typically started and stopped using service(8).

Not happening with the "standard" scripts unfortunately. But there are several solutions for that. The de facto standard option is the venerable sysutils/daemontools. But there's also sysutils/fsc which is a bit like Solaris' counterpart. I think there are a few others too but can't recall their names.

Is there a way to see the status of all running services? The only thing I like about systemd is the systemctl status <service-name> implementation. I hope that some day we can implement some sort of real-time status via the command line that shows all the useful things that are running, with many specific details, while still following the UNIX philosophy.
 
TBH, I sometimes also miss an equivalent to OpenBSD's rcctl ls failed for a quick check that everything is fine service-wise when troubleshooting a host/service.

Some time ago I cobbled together a _very_ simple script to at least get a summary of running services:

Code:
for i in `service -e | sed 's@.*/@@'`; do
    PID=`pgrep $i | tr '\n' ';'`
    if [ "$PID" ]; then
        echo "$i is running with PID(s) $PID"
    fi
done

This won't find "failed" services, but usually you know what should be running on a particular system and can act accordingly.
The problem with service <name> status is that only some services actually support the 'status' directive, so just issuing service N status against everything that's enabled will produce 80% garbage.
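One way to cut down on the garbage is to print output only from scripts whose status command actually succeeds. A rough sketch, with the caveat that some scripts exit 0 with useless output anyway:

```sh
#!/bin/sh
# Ask every enabled service for its status, discarding stderr and
# skipping scripts whose 'status' directive fails outright.
for s in $(service -e | sed 's@.*/@@'); do
    out=$(service "$s" status 2>/dev/null) && echo "$out"
done
```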

OTOH: on a larger scale you usually have some kind of monitoring in place that will tell you if a service has failed.
 