FreeBSD Jails vs Docker - A Scientific Approach

Last week someone told me they wanted to start using Docker in an enterprise environment, and I just didn't ask for further details, because my mind was off doing something else at that time and I went on with my business. Today, for whatever reason, that chat came back to me and itched me the whole day. So I started researching Docker a bit, as I'm not familiar with the technology. Sure, I read about it in the news and I heard people talk about it, but I use jails, so I never gave it a second thought until today.
My google search: freebsds jail vs docker

3rd result:
Jails vs Docker - A performance comparison of different container technologies - Christian Ryding & Rickard Johansson

So I read it, mostly because I pictured two young, eager computer engineers willing to travel into uncharted land. To my surprise, the paper is rather "thin" (skinny comes to mind) when it comes to technical details.

I'm thinking about recreating that paper, BUT with a properly tweaked FreeBSD to make it a fair street fight, and collecting all the technical details here, just for the fun of it. 😆
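To give an idea of what "properly tweaked" means, here is the bare minimum I'd pin down before any disk benchmarking; just a sketch, and zroot/bench is a made-up dataset name:

Code:
# don't benchmark on pool defaults: make the dataset properties explicit
zfs create -o atime=off -o compression=off zroot/bench
zfs get atime,compression,recordsize zroot/bench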
 
Actually quite a good paper.

The conclusion was generally sound too: "Jails and Docker are very similar, but due to hype-generated benefits, go for Docker" ;)

Figure 8 was useful, however; especially given that fast container spin-up is the whole point.
 
If you want to have more fun: look (deep) into security-related aspects.
I haven't looked at Docker in the last two years. Surely the situation has changed somewhat, but maybe also not...

Also, let's not forget another aspect that is important to many: Docker images are somewhat "cross-platform portable".
This is of course not important to many of us here, but it certainly is to some Docker users.
 
Also, let's not forget another aspect that is important to many: Docker images are somewhat "cross-platform portable".
This is the bit that I strongly question. Docker images only work on a single platform: Linux (99% amd64).

Needing to provide an amd64 Linux environment to run the image via virtualization is completely the opposite of cross-platform / portable.

But we all know that because it becomes painfully obvious when we use FreeBSD. But explaining this to the masses is unproductive.
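You can even ask an image which platform it was built for; nginx here is just an arbitrary example image:

Code:
# the target os/arch is recorded in the image metadata
docker image inspect --format '{{.Os}}/{{.Architecture}}' nginx
# almost always prints: linux/amd64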
 
This is the bit that I strongly question. Docker images only work on a single platform: Linux (99% amd64).
Hmm... yeah. As mentioned: I don't know much about Docker.
I know that one of our customers was really into it. But I think they also used some 3rd-party "enterprise" tool. I wouldn't be surprised if that was "just" a Docker "launcher" with an integrated VM so the same image could be run on Windows and macOS.

Don't listen to me on Docker-related aspects :p
 
I was curious about this paper as well; I came across it today and then saw this post appear on the forums.

I found figure 6 the most concerning, as the disk read performance dropped off so quickly. I can't help but wonder what the reason for this could be; maybe an untuned ZFS setup. The script they used for their testing is linked here, ionugget.py, for anyone who wants to read through it.

From what I can see, it would seem that they used a 1024 KB write size and a 512 KB read size, but there is no info about the zpool and dataset settings, neither in the paper nor in the authors' git repos.
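For reference, this is the kind of information I'd want to see documented, and the first knob I'd try; a sketch, with zroot/bench standing in for whatever dataset they actually used:

Code:
# the properties that matter for a streaming I/O benchmark
zfs get recordsize,compression,atime zroot/bench
# with the default 128K recordsize, every 512K read spans four records;
# aligning it with the benchmark's I/O size is one obvious experiment:
zfs set recordsize=512K zroot/bench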
 
Hmm... yeah. As mentioned: I don't know much about Docker.
I know that one of our customers was really into it. But I think they also used some 3rd-party "enterprise" tool. I wouldn't be surprised if that was "just" a Docker "launcher" with an integrated VM so the same image could be run on Windows and macOS.
IIRC, Docker already contains the ability to run a "foreign" image virtualized, so it's more or less transparent to the user. Except of course if your host is already a VM and you run into the typical "nested virtualization" issues for that reason. 🙈

I'm personally not too fond of such hidden complexity. If you need virtualization for your use case, it should be implemented explicitly, by a different tool. But maybe I'm getting old...
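If I understand it correctly, it's roughly this, with qemu/binfmt (or the Docker Desktop VM) quietly doing the heavy lifting behind the scenes; alpine is just an arbitrary multi-arch image:

Code:
# request a foreign architecture explicitly
docker run --rm --platform linux/arm64 alpine uname -m
# prints "aarch64" even on an amd64 host - the emulation is invisible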
 
Well, it's a benchmark, so always a little bit synthetic.

The driving force behind why many people use Docker is the promise of "doing more in less time", and also that it doesn't matter which Linux distribution you run the Docker image on, because the infrastructure is the same across all platforms and the image is standardized. It uses a standard base system (like Ubuntu) and that's it.

A good example of Docker's use is Discourse. The open source version of that forum engine is only offered as a Docker image, because for the makers it really simplifies support questions a lot. That image even comes with built-in updates. You can still install the source and all components yourself, but this takes much more time, and with updates/support you're then on your own.

This is basically how to install its docker image:

Code:
sudo -s
git clone https://github.com/discourse/discourse_docker.git /var/discourse
cd /var/discourse
chmod 700 containers
./discourse-setup

...and then some web based stuff.

This is how to install that thing manually:

So much more complexity. And this is why Docker became such a big thing, with all good and bad consequences involved.
 
IIRC, docker already contains the ability to run a "foreign" image virtualized, so it's more or less transparent to the user. Except of course if your host is already a VM and you run into the typical "nested virtualization" issues for that reason. 🙈

I'm personally not too fond of such hidden complexity. If you need virtualization for your usecase, this should be implemented explicitly, by a different tool. But maybe I'm getting old...
Docker Desktop is a package that does it all for you; the name perhaps suggests that it's aimed at desktop folk who don't want to get down into the weeds. It will create a VM (for which you can tweak various parameters in the Docker Desktop settings), and IIRC it will install some of the Docker commands for you.

The actual sysutils/docker tool itself doesn't have the functionality to create a VM. It expects you to configure it to talk to a unix socket/TCP port where a compatible daemon is listening, and it will facilitate running the image in whatever runtime and environment is configured there.
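For example (the remote address is made up):

Code:
# the CLI is just a client; point it at whichever daemon you like
export DOCKER_HOST=unix:///var/run/docker.sock   # local daemon
export DOCKER_HOST=tcp://10.0.0.5:2375           # remote daemon, e.g. inside a bhyve VM
docker info                                      # talks to whatever DOCKER_HOST points at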

So it depends on what route you take as to whether you get "magic" or not, but even if you choose Docker Desktop, I'd argue you've got to be pretty ignorant not to see that there's a VM running. Or maybe it's only me that pokes through the settings of a new application when I install it...
 
A further comment: it has bothered me a lot to see all the "Docker is Dead" articles around for many years, and yet now people on FreeBSD want it. It seems Google wants everyone to use Kubernetes, so all the cool kids start using that because Google told them to.

A lot of this seems more relevant to those with large systems, and I know some here have large or large-ish systems they manage, but I think most people who start using these things don't need to.

I stopped worrying about such things about three years ago so I probably don't know what I'm talking about.
 
The sysutils/docker-machine port says "tool to create docker hosts".
A way that we've found Docker useful at work is "build environments". You know how we all have preferences about what makes the perfect workstation? Well, if you are working on a project with a bunch of people writing software, you can run into "well, it built and ran fine on my system, not sure why it crashes on yours".
So create a Docker container that has the build environment (it should also be close to, or the same as, the runtime environment) that everyone uses to actually compile and test in before checking into the SCM tool. That gets rid of a lot of headaches. It can also serve as the basis for git CI builds, and if you do it right, it also lets you create a debug environment when you have to look at core files from a released image.
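A minimal sketch of what I mean; the image name and toolchain are just examples:

Code:
# Dockerfile pinning the team's toolchain
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y build-essential cmake
WORKDIR /src
EOF
docker build -t team/builder .
# everyone compiles in the same environment, sources mounted from the host
docker run --rm -v "$(pwd)":/src team/builder make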

Is Docker really any better or worse than other virtualization techniques? Maybe, maybe not. It depends on exactly what your requirements are and what is available.
 
but I think most people who start using these things don't need to.
I find it isn't really Docker that people care about; it is DockerHub that they really want. Nice clickable packages that hide complexity.

The question is: are people lazy / careless, so they can just spin up a "service" with who-knows-what default access settings, or is it that people would be unable to set it all up themselves anyway*?

* I am not sneering at those who can't set up big services. There are numerous occasions where I have had to give up on some, usually due to the sheer number of dependencies and other mess that developers drag in these days.
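To make the "spin up a service" point concrete; a stock example, not a recommendation:

Code:
# one line and a database is "running" - published on all interfaces,
# with whatever defaults the image author decided on
docker run -d --name db -p 5432:5432 -e POSTGRES_PASSWORD=postgres postgres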
 
A further comment: it has bothered me a lot to see all the "Docker is Dead" articles around for many years, and yet now people on FreeBSD want it. It seems Google wants everyone to use Kubernetes, so all the cool kids start using that because Google told them to.

A lot of this seems more relevant to those with large systems, and I know some here have large or large-ish systems they manage, but I think most people who start using these things don't need to.

I stopped worrying about such things about three years ago so I probably don't know what I'm talking about.
Docker might be dead (or dying), but that doesn't mean Linux containerisation is. A lot of the work that Docker did was given to the Cloud Native Computing Foundation (whose parent is the Linux Foundation), so much of the methodology is now a "standard" and there are multiple tools implementing it, of which Docker is but one.

The latest version of Kubernetes removes support for the Docker runtime. At work we host multiple Kubernetes environments, and in theory Docker isn't involved at all, though some engineers still use the docker CLI tool (sysutils/docker).

I think I'd agree with you about size. For example, we hosted Jenkins CI with Kubernetes worker nodes because it was fast, scalable, and easy, allowing us to go from ~500 engineers to several thousand with basically no configuration changes. If you're just hosting a website then it's massively overkill.
 
Also, let's not forget another aspect that is important to many: Docker images are somewhat "cross-platform portable".
This is of course not important to many of us here, but it certainly is to some Docker users.

Why is it not important to many of you here? Don't you want to expand the number of tools available here? Don't you want ever more tools that integrate different operating systems together even better? I don't want to think that FreeBSD users are part of a "religious" closed circle. I'm a hobbyist and I do easy tasks, but I want a lot of these tasks to help me use different OSes on the same machine, using a lot of techniques and tools.
 
Why is it not important to many of you here? Don't you want to expand the number of tools available here? Don't you want ever more tools that integrate different operating systems together even better?
In theory that sounds good, but in practice tools like this just make a mess. And the bugs, questions and security issues that arise from this mess will need to be addressed by the community and the developers. This all takes time away from the things that matter more.
 
The question is: are people lazy / careless, so they can just spin up a "service" with who-knows-what default access settings, or is it that people would be unable to set it all up themselves anyway*?
I am a heavy OCI container user. I am not lazy, but my days only have 24 hours like everybody else's, and I need to push our software out to customers on time. However, when my contractors want to use a certain software stack, the following considerations arise:
  • If we use the software, we have to trust the vendor; the vendor knows best how to install, configure and lock down that software. Why should I invest hours over hours just installing it? That is wasted time and error-prone. Furthermore, I can omit setting up a whole infrastructure for my reproducible builds, because they are provided by the manufacturer; no "the build doesn't match the previous one because my library/compiler etc. changed" problems. If we really want to go into details, we simply inspect the container (as sketched below); if we find anything suspicious in it, that is probably worth investigating further, and a reason to rethink whether we can trust that vendor.
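By "inspect the container" I mean something like this; the image name is made up:

Code:
# layer-by-layer build history, including the commands that created each layer
docker history --no-trunc vendor/appliance:latest
# entrypoint, environment, exposed ports etc. from the image config
docker image inspect vendor/appliance:latest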
 
and lock down that software
That is fair, but as for this part: from all the Docker images I have seen, they are all quite open, to ease use and interoperability.

One of the difficulties of precanned Docker images is that you need to apply a little bit of guesswork to close all the holes, because you weren't the one who set it up.

Docker images tend to feel more like demo VMs rather than something I would actively choose to put in production.
 
Why is it not important to many of you here? Don't you want to expand the number of tools available here? Don't you want ever more tools that integrate different operating systems together even better? I don't want to think that FreeBSD users are part of a "religious" closed circle. I'm a hobbyist and I do easy tasks, but I want a lot of these tasks to help me use different OSes on the same machine, using a lot of techniques and tools.
I don't think I would use Docker for my personal projects or my servers, but it might be nice to have it for FreeBSD. If nothing else, it would allow people with very little knowledge to start playing around with the OS without spending a month reading documentation/books on jails and PF before they could have a usable system.
 
I don't think I would use Docker for my personal projects or my servers, but it might be nice to have it for FreeBSD. If nothing else, it would allow people with very little knowledge to start playing around with the OS without spending a month reading documentation/books on jails and PF before they could have a usable system.

I don't know how to define this kind of behavior; I see it in a lot of experienced FreeBSD users. It could be summed up as: "We are enough for ourselves. We have jails, they work well, we don't need Docker. We don't need a lot of different tools, since we have similar and better tools within the FreeBSD ecosystem." This approach is only partially great. Elsewhere, sysadmins want to use widely-used tools; those tools grow, they have the chance to attract money from companies, and that attracts more developers. But if every FreeBSD sysadmin thinks they don't need to know different tools, methods, or even different OSes, the FreeBSD devs will always remain few, at least until they open up to other ecosystems. Is being few people bad? In part I think so, because the system grows slowly if the developers are few and the money is scarce; improvements come later, so we have to deal with a lot of bugs and with a lack of tools that could help us work better.
 
Actually quite a good paper.

The conclusion was generally sound too: "Jails and Docker are very similar, but due to hype-generated benefits, go for Docker" ;)

Figure 8 was useful, however; especially given that fast container spin-up is the whole point.

Hmm, is that so? 🤔 I'm missing the following points:
- CPU choice: FreeBSD has been targeting Intel x86 since the beginning; AMD is fairly “new”
- memory: not a word about possible differences in the memory management systems; perhaps they are the same, but if you want to compare them, you have to describe them 🤷‍♂️
- OS specifics: standard installation, applied customization; what has been done to compensate for two different operating systems, or was anything done at all? 🤷‍♂️
- choice of file system: at the time the paper was written, ZFSonLinux was stable and production-ready; the issue was, and has always been, the license, and in particular the kernel interface. Wasn't it 2019 when they changed the API, which broke the ZFS module, and Linus came out saying people shouldn't use it anyway? That being said, the paper goes with two filesystems which couldn't be further apart. Either it's FreeBSD ZFS vs. ZFSonLinux, or it's UFS2 vs. ext4. Going with ZFS is at least a questionable choice here. 👀

Then there is the paper's discussion, in particular the question:

What unexplored benefits Docker and Jails can have by implementing each other’s unique features?

The answer, according to the paper, can be summarized as:

One size fits all is better than the right tool for the right job.

That’s more an opinion then anything else. 🤷‍♂️

If that paper landed on my desk, those are the points I would want to discuss first. After all, what I want as an engineer is the best outcome, best as in cost, performance and time (the order is alphabetical 😉).

And just for the record: the time frame for that paper was 10 weeks. I would expect at least a little more depth than what has been provided here. This is two weeks of work, perhaps three.
 
The actual sysutils/docker tool itself doesn't have the functionality to create a VM. It expects you to configure it to talk to a unix socket/TCP port where a compatible daemon is listening, and it will facilitate running the image in whatever runtime and environment is configured there.
No wonder, given that development on the FreeBSD port of Docker stopped around 7 years ago. Yes, there was once an effort to get Docker running on FreeBSD.

The problem with that is that Docker relies on Linux-only kernel features like namespaces (they were supposed to be introduced in FreeBSD 12; did that happen?) or control groups. So in order to get it running, you've got to rewrite those portions of Docker to work differently, but do the same.
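For comparison, FreeBSD bundles the equivalent isolation into a single primitive, the jail itself. A throwaway example, with a made-up path and a documentation address:

Code:
# create and start an ad-hoc jail with a shell inside it
jail -c name=test path=/jails/test host.hostname=test.example.org \
     ip4.addr=192.0.2.10 command=/bin/sh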
 