jails and ansible

This is rather an open question.
I have a jail, and I have ansible installed.
For what purposes can you, or do you, use ansible in a jail, and to do what?
Maybe you have interesting uses.
 
I once had all of our infrastructure services (e.g. DNS slaves, DHCP, RADIUS) set up and managed via ansible.
Each type of service was defined in a playbook, config lived in git repositories (some with branches per instance or site where necessary), and e.g. the jail/zone setup boilerplate was defined in other playbooks.
This way the whole installation of a jail or zone for such a service could be performed fully automatically. Some jails even ran ansible in pull configuration (via cron) to automatically fetch config updates, and e.g. DNS entries and firewall rules got updated upon setup of a new service.
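A pull setup like that is typically just a cron entry running ansible-pull; a sketch (the repo URL, playbook name and schedule here are made up):

```
# /etc/crontab fragment (hypothetical repo URL and playbook name)
# Every 30 minutes, fetch the playbook repo and apply it locally;
# -o (--only-if-changed) skips the run when the checkout is unchanged.
*/30  *  *  *  *  root  /usr/local/bin/ansible-pull -o \
    -U https://git.example.org/infra/playbooks.git \
    -i localhost, local.yml >> /var/log/ansible-pull.log 2>&1
```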

At the time I thought this was a good strategy for disaster recovery, as I could basically spin up all essential services fully automatically within a few minutes. OTOH our infrastructure isn't THAT huge, and with template jails and configs that reside in git repos anyway, doing it manually doesn't take long enough to justify the overhead and extra time I often had to spend to get/keep this working. Especially because a lot of FreeBSD/jail/SmartOS/zones-related modules had inconsistencies, quirks, bugs or were simply broken and more than once a simple "just add this small thing to the playbook" ended in hour-long bugfixing sessions or even complete rewrites of modules (usually in shell or perl, because I absolutely hate Python...)
Plus, with basejails and ZFS snapshots, backups are extremely cheap and simple, and just as fast to restore, so my motivation to keep/get this working dwindled more and more...
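The snapshot-based backup/restore cycle mentioned here boils down to two zfs commands. A minimal sketch (the dataset name is made up, and DRYRUN=1 only prints the commands, since real zfs calls need a pool and root):

```shell
#!/bin/sh
# Print-or-run helper: with DRYRUN=1 the zfs commands are only echoed.
run() { if [ "${DRYRUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

# Snapshot a jail's dataset and show how it would be rolled back.
snap_jail() {
    ds="$1"
    stamp="$(date +%Y-%m-%d_%H%M)"
    run zfs snapshot "${ds}@${stamp}"            # cheap, atomic backup
    echo "# restore with: zfs rollback ${ds}@${stamp}"
}

DRYRUN=1 snap_jail zroot/jails/basejail
```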

In theory ansible (or any other orchestration/configuration-management system) is really nice for automating, and at the same time documenting, everything in your network. If you have a lot of boilerplate work going on on a daily basis, this stuff truly shines and will save you a lot of time and prevent errors.
If you have a relatively static network and server landscape, and most servers and service installs are unique, I'd say just install and manage them by hand and put configurations in version control (e.g. git) together with their documentation. Even if you have to reinstall some of them once or twice a year, this is still quicker than the hours spent building and maintaining an orchestration system.
 
OTOH our infrastructure isn't THAT huge, and with template jails and configs that reside in git repos anyway, doing it manually doesn't take long enough to justify the overhead and extra time I often had to spend to get/keep this working. Especially because a lot of FreeBSD/jail/SmartOS/zones-related modules had inconsistencies, quirks, bugs or were simply broken and more than once a simple "just add this small thing to the playbook" ended in hour-long bugfixing sessions or even complete rewrites of modules (usually in shell or perl, because I absolutely hate Python...)
Plus, with basejails and ZFS snapshots, backups are extremely cheap and simple, and just as fast to restore, so my motivation to keep/get this working dwindled more and more...

I set up my whole environment around ansible, but when I actually needed to rebuild things my ansible scripts always needed to be reworked, and I would always find more problems to fix with ansible. I converted everything back to basic shell scripts and 'Bastille files', and they've been far more stable for me, though they have their own trade-offs.

As for what I did with ansible: I run most of my services in jails (samba, nginx, matrix synapse chat). I used ansible to automate/document each service's setup as well as to build test environments. I still do the same, but now I use a basic install.sh script for my host system, and for the jails I converted to bastille for jail automation.
 
Thanks for reminding me of Bastille!
IIRC I took a short look at it a few years ago when it was in a very early stage and completely forgot about it. It looks really nice now: extremely simple and effective, and easy to integrate into (or switch to from) my current workflow. I think I'll give it a try on one of my jailhosts.
 
At the time I thought this was a good strategy for disaster recovery, as I could basically spin up all essential services fully automatically within a few minutes.
Oh man, I was struggling a few years ago trying the same things you mentioned, automating a whole environment with Ansible, but on Linux. I went insane a couple of times trying to debug weird situations with Ansible (complex Jinja templates, assertions and such).
inconsistencies, quirks, bugs or were simply broken and more than once a simple "just add this small thing to the playbook" ended in hour-long bugfixing sessions or even complete rewrites of modules
I want to switch to FreeBSD because I started to dislike Linux, and in my mind the same kind of "global" automation was the way to go; surely using Ansible would be much better. Turns out you are right: Ansible is very powerful, but you will go insane trying to solve things that don't need solving yet. My motivation is also dwindling, because writing roles is quite easy until you want to do a slightly more complex thing, and then you will spend hours changing a single line and testing.
If you have a relatively static network and server landscape, and most servers and service installs are unique, I'd say just install and manage them by hand and put configurations in version control (e.g. git) together with their documentation. Even if you have to reinstall some of them once or twice a year, this is still quicker than the hours spent building and maintaining an orchestration system.
This is a nice way to see it. You might not need ansible to copy over a few configuration lines. One could set up the desired environment and configuration files in a test jail, copy everything to a central repository, and at most use a simple shell script to copy the files and run commands.
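That "simple shell script" can really be this small; a sketch, where all the paths and the follow-up command are examples:

```shell
#!/bin/sh
# Sketch of the "copy configs from a repo and run a command" idea.
set -eu

deploy() {  # deploy <repo-dir> <jail-root>
    repo="$1"; root="$2"
    cp -R "${repo}/etc/." "${root}/etc/"   # mirror the repo's etc/ tree
    echo "deployed to ${root}/etc"
    # a follow-up command would go here, e.g.: jexec <jail> service nginx reload
}
```

Called as `deploy /path/to/checkout /usr/local/jails/www/root`, for instance, after pulling the central repository.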
 
Oh yes, ansible...

Well, there's nothing left of that in our infrastructure now. Instead, configuration of our "boilerplate" network services (dhcp, dns, firewalls/pf, bgp) is streamlined to require only minimal changes from one branch to another, and also only minimal changes to a standard jail. All hosts are basically VM/jailhosts with identical jail/VM-facing network configuration (i.e. the same bridges everywhere).
This way configuration can either be exactly the same in all branches (e.g. pf.conf), use site-specific includes, or follow the same basic configuration, which is kept in a master git branch with the differences for each site in their own git branch.
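The "same file everywhere plus site-specific includes" pattern maps directly onto pf.conf's include directive; a sketch (the interface name and include path are assumptions):

```
# /etc/pf.conf - identical across all sites (kept in the master branch)
ext_if = "vtnet0"

block in all
pass out on $ext_if all keep state

# site-specific rules come from the site's own git branch
include "/etc/pf.site.conf"
```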

All those services run in jails that are completely interchangeable: created with the exact same iocell command and checking out the same git repo (maybe a different branch). So they can be easily and quickly rebuilt or moved to another host, where only the hostuuid needs to be changed to boot up the jail again.
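Concretely, stamping out one of these interchangeable jails might look like the sketch below. This is a guess at the shape, not the actual command: the property syntax is iocage-style (which iocell inherits), so names like tag= and ip4_addr= are approximations, and the bridge, IP and repo URL are made up. The function only echoes what would run:

```shell
#!/bin/sh
# Hedged sketch: print the commands that would create one interchangeable
# service jail. iocell property names are assumed iocage-style.
create_jail() {
    name="$1"; ip="$2"
    echo "iocell create tag=${name} ip4_addr=\"bridge0|${ip}/24\" boot=on"
    # then, inside the jail, check out the service's config repo, e.g.:
    echo "# git clone -b <site-branch> <config-repo> /usr/local/etc/${name}"
}

create_jail dns1 192.0.2.53
```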

Hosts are also streamlined: apart from some slightly different configuration (CPU and amount of RAM) they are all the same Supermicro dual-node systems. So even the hardware is interchangeable in case something breaks down.

Additionally, pretty much everything is snapshotted and the pools are replicated to a backup server, and then to another off-site storage server, multiple times per day.

So recovering from various failures ranges from simply restoring some files from a snapshot, over zfs send|recv of single datasets or even whole pools, up to pulling the node out of server A and putting it into server B to quickly restore a server with non-redundant VMs/jails on it.
For infrastructure there are 2 nodes that run almost all services in redundant configuration (either at service level or via CARP), so I can easily upgrade/reboot even a whole node during working hours with no noticeable impact.


Yes, recovery has to be completely manual, but the actual effort is rather minimal. Most of it *could* be automated again, but TBH I would spend more time building & testing that automation than I would need to restore even a large part of our servers...
 
Oh yes, ansible...

Well, there's nothing left of that in our infrastructure now.
Isn't it interesting that in your previous post back in August 2021 you were migrating away from Ansible due to the complexity of managing an entire infrastructure with that tool, and now three years later you don't even want to remember using it? Haha

I am learning FreeBSD and looking into a decent way of managing a very very small amount of hosts, just for my personal homelab. My idea is that one day after learning all the intricacies about it, I will make the proposal to switch to FreeBSD, but it will take a long time.

Thanks for sharing the philosophy of the way you are automating FreeBSD. I haven't researched other tools like iocell yet, but I hope to use it one day. It makes sense that, because FreeBSD is such a traditional UNIX OS with academic and university history, the best way to automate and deploy it is also very traditional, using base and standard tools.
 
Isn't it interesting that in your previous post back in August 2021 you were migrating away from Ansible due to the complexity of managing an entire infrastructure with that tool, and now three years later you don't even want to remember using it? Haha
I also used debian on servers many years ago - but we all get wiser in the course of time, don't we?


I am learning FreeBSD and looking into a decent way of managing a very very small amount of hosts, just for my personal homelab. My idea is that one day after learning all the intricacies about it, I will make the proposal to switch to FreeBSD, but it will take a long time.
I'm basically running my homelab as a smaller version of our company infrastructure. For one, it keeps me sane not to constantly have to deal with completely different network layouts, configuration styles, VLAN numbering etc., and secondly I can easily transfer things from one network to the other - e.g. I started migrating the local gateways from OpenBSD VMs to FreeBSD jails, which I first tried and benchmarked on one of my home gateways and then rolled out on the company network. It often goes the other way around too, e.g. when I stumble over a weird edge case at work (I just figured out that HW offloading on a 40G mlxen0 on a bhyve host breaks traffic for Windows machines in one direction, but not for any BSD machines. Great to debug when you can't use the OS that actually *has* the capabilities to debug networking issues, because it doesn't show the problem :rolleyes:)


Thanks for sharing the philosophy of the way you are automating FreeBSD. I haven't researched other tools like iocell yet, but I hope to use it one day. It makes sense that, because FreeBSD is such a traditional UNIX OS with academic and university history, the best way to automate and deploy it is also very traditional, using base and standard tools.
If you want to try iocell, I'd sadly have to recommend using the 'devel' branch from GitHub and pulling in all PRs from at least this year... The maintainer abandoned the project and my offer to step in petered out.
I'm still using it everywhere and have a bunch of local patches (those PRs + some more) on my poudriere buildhosts, which fix some bugs/limitations and also add various new features (e.g. vnet interface passthrough). I also have several bigger additions (and rewrites of some portions of the code) in the works or already running, which I plan to integrate into either a new version in my own repository or a fork for which I'll then create a port.
So tl;dr: iocell from ports is sadly pretty much dead right now.
 
I also used debian on servers many years ago - but we all get wiser in the course of time, don't we?
What is it about debian that makes you dislike it? I started using Linux 7 years ago and my favorite distro (at the time) was CentOS... then RedHat killed it and I switched to Debian. It's the most traditional of all the Linux distros in my opinion, but I also started to dislike it. It seems to me like it comes with too little configured, yet it does come with all the modern Linux tools that I started to dislike, like ip and systemctl.
I'm basically running my homelab as a smaller version of our company infrastructure.
I would go a little bit insane if I tried to replicate the infrastructure of my workplace haha, too much spaghetti everywhere. I know FreeBSD is very powerful at networking, but I have yet to try it out as a basic switch/router. I haven't even figured out how to handle networking in a host and the jails inside the host in a way I like it.
 
What is it about debian that makes you dislike it?
the absolute dumpster fire that was/is systemd and all the software and mindset (and people...) it drags along, and the half-assed integration of ZFS and pretty much everything else... linux isn't a coherent OS, and what pretty much all distros make of it is a patchwork of fast-moving targets and ancient cruft. They also tend to focus more on endless philosophical debates than on producing working code...

I've been running debian since 2.2, with 3.0 being the first I actually used on 'production' servers; 6.0 was the last, and devuan jessie was then used to keep some hosts alive until everything was migrated/rebuilt on FreeBSD, OpenBSD and illumos. That was ~10 years ago and I never looked back... The handful of linux VMs/appliances I have to deal with occasionally are the best reassurance that this was and still is the best decision.
 
Using sysutils/py-pyinfra, not ansible, I completely automated the setup and maintenance of my jails. The idea is to ease recreation of the jails in case of system upgrades, to the point of changing some parameters and re-running the scripts.

But it's a lot of work; it will probably take a few minor and another major system upgrade to amortize.
 
the absolute dumpster fire that was/is systemd and all the software and mindset (and people...) it drags along, and the half-assed integration of ZFS and pretty much everything else... linux isn't a coherent OS, and what pretty much all distros make of it is a patchwork of fast-moving targets and ancient cruft. They also tend to focus more on endless philosophical debates than on producing working code...

I've been running debian since 2.2, with 3.0 being the first I actually used on 'production' servers; 6.0 was the last, and devuan jessie was then used to keep some hosts alive until everything was migrated/rebuilt on FreeBSD, OpenBSD and illumos. That was ~10 years ago and I never looked back... The handful of linux VMs/appliances I have to deal with occasionally are the best reassurance that this was and still is the best decision.
When I started with Linux I think systemd was just rolling out, so I never knew about service. It frustrated me because I would look online for help and tutorials and there were very few guides that used systemctl, so that was my first experience. After many years, I can say that I haven't seen any major improvement in service management under systemd. Every time I tried to set up a systemd service unit I ran into trouble, and I hate how it tries to manage other kinds of units, like mounts and so on.

Another of my major frustrations was when one company I worked at decided to use Ubuntu for everything, and I believe Ubuntu had just changed to netplan for managing networking, and oh boy, it is very, very bad. There are so many attempts at managing networking in Linux, and I am very happy to use FreeBSD's classic ifconfig and see it just work immediately.
 
Using sysutils/py-pyinfra, not ansible, I completely automated the setup and maintenance of my jails. The idea is to ease recreation of the jails in case of system upgrades, to the point of changing some parameters and re-running the scripts.

But it's a lot of work; it will probably take a few minor and another major system upgrade to amortize.

I use a script (essentially a crude one-liner) to extract all jail properties that differ from the defaults and feed them back to iocell to create a new jail with the same options. @work I have a properties template for all infrastructure jails, stored in the same repo as their configs (e.g. for dhcp or nameservers).
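A hedged reconstruction of that kind of one-liner: assuming the tool can dump properties as sorted key=value lines (e.g. something along the lines of `iocell get all <jail>`, which is an assumption here), the extraction itself is just comm(1) on two lists:

```shell
#!/bin/sh
# Given two sorted key=value files - the defaults and one jail's full
# property dump - print only the properties that differ from the defaults.
# Producing those dumps from iocell is assumed, not shown.
diff_props() {
    comm -13 "$1" "$2"    # lines unique to the jail's dump
}
```

The output could then be fed back, roughly as `iocell create $(diff_props defaults.txt jail.txt | tr '\n' ' ')` - again a sketch, since the exact create syntax depends on the tool.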

As said: I could take the final step and piece everything together to fully automate it, but given that I usually rebuild/reinstall less than a dozen jails per year, and then often incorporate some changes to their setup, it's really not worth the time. Even major release upgrades are usually so uneventful (for hosts and jails) that I don't even recreate jails for them. Major version upgrades, e.g. for PostgreSQL, are also relatively easy to handle in a cluster, and in case of fire a jail is simply rolled back to its previous ZFS snapshot.

TBH, if I look at the use cases of ansible, puppet et al. in the linux world, where it is considered "normal" to constantly roll out new, pre-built docker containers as a means of updating a service (usually because of an absolute nightmare of dependency and non-default configuration requirements that can't be met by any sane package management...), I fully understand why they have to automate the sh*t out of that procedure, or nobody would want to use it.
 
I am still undecided about how I want to set up my homelab with some style of automation for the cases when I want to start over fresh (either the host or a testing VM) or an eventual major FreeBSD upgrade.

One layer is setting up the FreeBSD host. I am still unsure whether I should use ansible or whether shell scripting would be more useful in the long run. Honestly, I am more biased towards shell scripting, since I always prefer to work with the base tools. There are a few cases where I am more used to Ansible, but I feel that once I get a bash-script infrastructure started, that would go away.

My setup isn't terribly complex either. In my mind I would have one script for the FreeBSD host itself and the essential services, and then one-off scripts for setting up each jail service. There is so much to do that it's hard to get started haha
 
One layer is setting up the FreeBSD host. I am still unsure whether I should use ansible or whether shell scripting would be more useful in the long run. Honestly, I am more biased towards shell scripting, since I always prefer to work with the base tools. There are a few cases where I am more used to Ansible, but I feel that once I get a bash-script infrastructure started, that would go away.

My setup isn't terribly complex either. In my mind I would have one script for the FreeBSD host itself and the essential services, and then one-off scripts for setting up each jail service. There is so much to do that it's hard to get started haha
Coincidence. I was having the same thought. I started by creating a script that lets me create jails using jail.conf, because I figured that once I have some nice scripts to set up my jails, I can use those to create a master "FreeBSD setup script" for myself if I ever want to install fresh. I've got some nice jail setup scripts now, and enough to start hacking up that 'FreeBSD setup script', but I haven't gotten around to that part yet.
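For anyone curious what the jail.conf route looks like, a minimal entry (hostname, path and IP are placeholders) is roughly:

```
# /etc/jail.conf - shared defaults, then one block per jail
exec.start = "/bin/sh /etc/rc";
exec.stop  = "/bin/sh /etc/rc.shutdown";
exec.clean;
mount.devfs;

www {
    host.hostname = "www.example.org";
    path = "/usr/local/jails/www";
    ip4.addr = "192.0.2.10";
}
```

With jail_enable="YES" in rc.conf, `service jail start www` should then bring it up.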

Fairly crude but you can get the idea.
 