jails & jailed services: what's the standard practice? (noob questions :-) )

In MWLucas' FreeBSD Mastery: Jails, Chapter 2 "Jail Essentials", page 45, Lucas talks about final jail configuration. He mentions that after creating a bare jail, these things should be addressed:

* jail root password
* jail has no users
* jail has no resolver, time zone, etc.

My question is: what is the standard practice for jails with respect to the jail root account and jail user accounts? I'm not providing access to users; I'm installing services into my jails much like I'd set up a service stack in docker, e.g.:

Host
+ web server
+ db server
+ load balancer

Q: do you typically set a password for your jail root account?
Q: do you typically create jail user accounts and run your jail services under those accounts?
Q: do you set passwords for those jailed user/service accounts?
Q: do you typically standup ssh for those jailed user/service accounts?

For instance, I installed both caddy and postgres. Both of these packages created an account to run their respective services under.

One of the reasons I ask about this is... when you jexec from the host, you don't need those passwords. So is assigning passwords to the jailed user accounts just for ceremony, or is it actually providing a security benefit?

Note: I typically have all my services living in an RFC1918 network and configure pf to port forward traffic to my jails.
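For context, the pf port-forwarding setup mentioned above might look something like this fragment; the interface name and addresses here are placeholders, not taken from my actual config:

```
# /etc/pf.conf (fragment) -- hypothetical interface and addresses
ext_if = "em0"
web_jail = "192.168.0.10"

# let the jails' RFC1918 network reach out via NAT
nat on $ext_if from 192.168.0.0/24 to any -> ($ext_if)

# redirect inbound HTTP/HTTPS on the host to the web jail
rdr pass on $ext_if proto tcp from any to ($ext_if) port { 80, 443 } -> $web_jail
```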

----

A different but related question about jail services:

Q: what do you use as a template for creating your service scripts?
Q: if a service crashes, will freebsd restart it? does freebsd employ health checks on services?
Q: I guess the real question here is, how do you keep your services up?

Thanks in advance for any advice.
 
Q: do you typically set a password for your jail root account?
I don't think I have ever set a root password in a jail - I either access it via iocell console or, if I *really* need to access it remotely via ssh, I just copy an authorized_keys file containing my ssh public key to the jail.
Only on physical hosts does root have a password set, for logging in locally (i.e. usually via an IPMI console) in case of fire (and sshd always either prohibits root login entirely or is set to without-password).

Q: do you typically create jail user accounts and run your jail services under those accounts?
Since pretty much all daemons use a distinct pre-existing user and group (e.g. www) to run and/or their user is created during installation of the port/pkg, I also rarely manually create any users in a jail.
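On the rare occasion you do create one by hand (e.g. for a home-grown daemon), the convention the ports tree follows looks roughly like this; the account name and uid here are made up for illustration:

```shell
# create a dedicated, unprivileged service account (hypothetical name/uid)
pw groupadd -n myservice -g 8888
pw useradd -n myservice -u 8888 -g myservice \
    -c "My Service daemon" -d /nonexistent \
    -s /usr/sbin/nologin -w no
```

The nologin shell and disabled password (-w no) are what make "do I set a password?" a non-question: the account cannot log in at all.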

Q: do you set passwords for those jailed user/service accounts?
No. Why should a daemon user have a password? It cannot 'log in' by default anyway, so this would be pointless.

Q: do you typically standup ssh for those jailed user/service accounts?
No. Only jails I *really* have/want to access directly have ssh enabled. Usually I access jails exclusively from the host.
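For completeness, getting a root shell in a jail from the host is just (jail name is a placeholder):

```shell
# list running jails, then attach to one with a clean login environment
jls
jexec -l myjail /bin/sh
```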

Q: what do you use as a template for creating your service scripts?
If by "service scripts" you are talking about rc scripts: why? I haven't encountered any service/daemon from ports/packages that doesn't come with an rc script (AFAIK it is even a hard requirement). For stuff I manually install, there's an rc-script template in the Handbook, or I just copy an existing rc file of a very basic daemon (which usually is pretty much the template from the documentation).
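A minimal rc.d script along the lines of the Handbook template might look like this; the service name, path, and user are placeholders:

```shell
#!/bin/sh
# /usr/local/etc/rc.d/myservice -- minimal rc.d script for a hypothetical daemon

# PROVIDE: myservice
# REQUIRE: LOGIN
# KEYWORD: shutdown

. /etc/rc.subr

name=myservice
rcvar=myservice_enable

load_rc_config $name
: ${myservice_enable:=NO}

command=/usr/local/bin/myservice
pidfile=/var/run/${name}.pid
myservice_user=myservice    # rc.subr drops privileges to ${name}_user

run_rc_command "$1"
```

Enable it with `sysrc myservice_enable=YES` inside the jail, then `service myservice start`.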
And if with "service scripts" you are referring to that systemd-garbage - there is no such crap on FreeBSD.

Q: if a service crashes, will freebsd restart it? does freebsd employ health checks on services?
see daemon(8) -r
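daemon(8) with -r supervises a program and restarts it when it exits; a sketch of the invocation (names and paths are placeholders):

```shell
# supervise a program with daemon(8):
#   -R 5  restart it 5 seconds after it exits (implies -r)
#   -u    run it as an unprivileged user
#   -P    pidfile of the supervising daemon process
#   -p    pidfile of the supervised child
daemon -R 5 -u myservice \
    -P /var/run/myservice_daemon.pid \
    -p /var/run/myservice.pid \
    /usr/local/bin/myservice
```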

Q: I guess the real question here is, how do you keep your services up?
They just run. If they don't: there is an urgent reason.
If a service crashes for whatever reason, monitoring will tell me and I will investigate *why* it crashed before it gets restarted *manually*. I definitely don't want a service to just get restarted automatically over and over again. Especially for services exposed to the internet you are obligated to check why it crashed and fix the reason before blindly restarting the service. Yes, there might be a downtime involved - but I couldn't care less if some Karen From Accounting couldn't access their mail for half an hour, if in turn the mailserver didn't get compromised by some high-volume bruteforce attack that caused the imap server to crash *and stay down until a human could check the logs*.
I know in docker-world this "if one service crashes, we just start 2 new instances"-mentality has become the standard, but it is a) stupid and b) dangerous. So just don't do it.


regarding service jails: I like the concept as far as I understood it from a brief glance - but I never found the time to actually investigate and test that concept; so "proper" jails it is for now.
 
I run each hardware host as a primary that only runs core networking and jails. Then it will have a bunch of jails doing things like https or lb or dns. When I have to port old systems over, they get their own misc jail that holds all the old junk until it can be properly moved.

Most of these have a root account, but many don't need it and it is purely for letting others admin using older procedures. I can always jexec from the primary to fix things when needed, but sometimes the service admins won't have access to the primary. Anything that allows remote ssh access requires a key and a password (different from the password on the key). This keeps the hackerbots out of the system when they find someone's ssh private keys, as a single-letter password breaks their scripts for now.

Note you can have a uid 0 that is called something else in a jail (see user toor in /etc/passwd as an example) for when it is needed, and in the past I've created setuid programs that just run the /usr/local/etc/rc.d scripts so the admins don't need to be root to run stuff. The MAC/mdo features can be used, but like the similar features in Solaris and AIX before them, they aren't commonly used properly, so there isn't much expertise in proper configuration.

The root users (whatever they are called) can never log in remotely. People must log in as themselves so I have an accounting record, and then they can elevate privileges. I run the Basic Security Module (BSM) on all the primaries, which does collect what is going on inside the jails.
 
Thanks everyone!

Zare: Service jails look cool, but I want more isolation around my processes. I'll stick with clones of a full installation for now.

sko: Great points, all of them. For my situation, I'm running code that I wrote as a service, so mostly I'm trying to understand whether I should create user accounts for the service to run as within the jail. And systemd garbage is one of the many reasons I'm leaving Linux and ramping up on FreeBSD. Do you have any suggestions for how to create an account for a daemon to run as?

thogard: I like your points, too. However, I don't know enough yet to use setuid without screwing things up. BSM looks great; I'll add that to my list of things to learn.
 
Here is what I do.

I maintain a jail template that is never directly used as a jail. The jails merely mount it, either read-only using nullfs or with unionfs filesystems mounted on top of it. Any local jail modifications are written to the unionfs filesystems mounted on top of the template, keeping the template pristine and untouched.

To update the template, my installworld and pkg install/upgrade chroot into it to update the software. Because all jails use the template, they are automatically updated when it is.

To repeat, configurations and other customizations are written to the unionfs filesystems that are mounted on top of the nullfs that points to the template.
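In fstab terms, the layering described above could look like this; the paths are placeholders, not my actual layout:

```
# /etc/fstab.webjail -- read-only template, writable union layer on top
/jails/template   /jails/web/root  nullfs   ro  0  0
/jails/web/delta  /jails/web/root  unionfs  rw  0  0
```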

The root account in the template has an invalidated password, so there is no way to su or sudo to root in the jail. To become root in a jail I log into the server itself, become root, and jexec into the jail. Or I change the template to apply changes that affect all jails. I use ansible to manage most configurations and packages in the jails, and I have a script that will installworld and etcupdate the template, affecting all jails simultaneously. It's automated using the script and ansible.
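For the package side of that workflow, pkg can operate on the template directly via its chroot option; the path here is a placeholder:

```shell
# update packages inside the template; every jail layered on top of it
# picks the change up, while local deltas stay in the unionfs layers
pkg -c /jails/template update
pkg -c /jails/template upgrade -y
```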
 