general/other FreeBSD way into the clouds

The optimal way of hosting your services in the cloud depends on the needs of those services.
Because of that, there is no "best" solution for everyone; there is only the solution that suits *your* needs the most.
So what I will try to do in this article is simply list the different solutions and approaches for hosting services in the cloud using FreeBSD.
As I have only been working with AWS as a cloud provider, this article will be AWS-specific, but the concepts behind the solutions should carry over to other cloud providers.
The main reason for this article is really just to start a conversation with interested people about using FreeBSD in the cloud: approaches, solutions, ideas, dreams, etc.

Possible ways of hosting in the cloud

I know it sounds quite obvious, but first let's take a quick look at the possible ways of utilizing cloud providers to host our services:
I) Bare-metal
You can have FreeBSD up and running on the "bare/clean" hardware of the cloud provider. This approach is basically the same as hosting services on your own (rented) hardware, with some additions:
+ You can always "kill" the server and stop using it, so you basically rent the hardware based on your present needs.
+ You get access to additional services provided by the cloud provider that only work with the provider's own hosted hardware.
- If you don't need 128 GB of RAM or a 64-thread processor, this approach uses resources inefficiently, as cloud providers (AWS) only offer such powerful servers as the "bare-metal" option (and so the price is high, ~$5 an hour).
II) VM
You can use VMs in the cloud.
AWS provides the "EC2" service to give you access to virtual machines running FreeBSD (a quick CLI sketch follows this list).
+ VMs are a more secure way of hosting services than the bare-metal and container approaches (though that might be debatable?).
+/- microVMs might be as fast as containers (debatable?).
III) Containers in VM.
By containers hosted on FreeBSD I mean, of course, jails.
There are a number of tools for managing jails, but more about them later.
+ Containers are the fastest way of creating a new instance with your application (though with interest in microVMs rising... debatable?).
+ Scalable quickly (in contrast to VMs and bare-metal).
+ More efficient use of hardware than VMs (no hypervisor overhead? might also be debatable, I guess).
- Less secure than VMs (on average?).
IV) Containers in bare-metal
Again, by containers I mean jails.
Depending on your needs, you might want to host containers not on a VM but on bare metal.
That might be the case if you want all of the containers to be on one physical instance.
- More expensive.
+ Might be the only applicable approach for some of the jail-management tools (if you really want to use those).
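
To make option II a bit more concrete, here is a minimal sketch of launching a FreeBSD EC2 instance with the AWS CLI. The AMI ID, key pair and security group are placeholders you would replace with your own values (the official FreeBSD AMI IDs are published with each release):
Code:
# Minimal sketch, not a complete setup: launch one FreeBSD VM in EC2.
# <freebsd-ami-id>, my-key and sg-xxxxxxxx are placeholders.
aws ec2 run-instances \
    --image-id <freebsd-ami-id> \
    --instance-type t3.micro \
    --key-name my-key \
    --security-group-ids sg-xxxxxxxx \
    --count 1

# Once it is running, grab the public IP and ssh in (the official
# FreeBSD AMIs use the ec2-user account, as far as I know):
aws ec2 describe-instances --query 'Reservations[].Instances[].PublicIpAddress'
ssh ec2-user@<public-ip>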

Jail management tools (with orchestration)
Now I will try to list jail-management tools, possible orchestration options, and my humble overview of them:


1) Firecracker with FreeBSD microVMs + Kubernetes

Firecracker with FreeBSD + containerd + Kubernetes did not work in any of my tests.
I have spent a lot of time and brain cells figuring out what, where and how it is even supposed to work.
By piecing together information from several tutorials, I was able to come close to making one of the possible approaches work, but it currently requires changes to the code of one of the tools (kata-containers, in Go).


To be more specific:

First, I went through this tutorial, which uses the "Firecracker with PVH support" provided by Colin Percival here, to set up a Linux Firecracker host for hosting FreeBSD VMs.
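
To give an idea of the moving parts, this is roughly what running a single FreeBSD guest on that Linux host boils down to. The kernel/rootfs file names, boot arguments and VM sizing below are placeholders on my side, and the firecracker binary must be the PVH-capable build from the tutorial, not a stock upstream release:
Code:
# Describe the microVM in a config file...
cat > freebsd-vm.json <<'EOF'
{
  "boot-source": {
    "kernel_image_path": "./freebsd-kernel.bin",
    "boot_args": ""
  },
  "drives": [
    {
      "drive_id": "rootfs",
      "path_on_host": "./freebsd-rootfs.img",
      "is_root_device": true,
      "is_read_only": false
    }
  ],
  "machine-config": {
    "vcpu_count": 1,
    "mem_size_mib": 512
  }
}
EOF

# ...and boot it with the PVH-capable firecracker build.
rm -f /tmp/firecracker.sock
./firecracker --api-sock /tmp/firecracker.sock --config-file freebsd-vm.json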

What I tried next was to connect Firecracker-FreeBSD, Firecracker-kata-containers, kata-containers-containerd, and containerd-kubelet, following the tutorials provided by the kata-containers authors, the ones listed at the end of this section.
The issue is in the kata-containers code, as it needs to generate a specific configuration file when calling Firecracker. I created a ticket and got pointed to the code I could modify to build my own custom kata-containers, which might work. But at that point I would say this approach is not very ready. All of the tutorials on how to use kata-containers with containerd, EVEN ON THEIR OFFICIAL GITHUB, are outdated, so they don't work out of the box (that made me lose some brain cells for sure), so be ready to suffer if you go down this path...

There is another approach/project to test out though, which I for some reason missed: https://github.com/firecracker-microvm/firecracker-containerd/blob/main/docs/getting-started.md
As far as I understand, the Firecracker creators themselves have written a "runtime shim" to make Firecracker work with containerd, and it might work with the Firecracker build that supports FreeBSD.
In the end, even just following the guide (above) provided on GitHub by the creators of this containerd-Firecracker shim, I kept receiving an error:

temporary vsock dial failure: vsock ack message failure: failed to read \"OK <port>\

I was not able to fix it; since the project is claimed to be in a "very early state" of development, I would guess the documentation is not up to date (at least it wasn't at the time of my tests).

Creating a Firecracker that works with FreeBSD: https://morezerosthanones.com/posts/firecracker_freebsd/
Firecracker - kata-containers: https://github.com/kata-containers/...ow-to-use-kata-containers-with-firecracker.md
kata-containers - containerd: https://github.com/kata-containers/documentation/blob/master/how-to/containerd-kata.md
containerd - kubernetes: https://github.com/kata-containers/...to/how-to-use-k8s-with-containerd-and-kata.md
Firecracker - containerd: https://github.com/firecracker-microvm/firecracker-containerd

In the end, this approach looks very promising, but:
* Firecracker would need to work on FreeBSD as the host.
* Firecracker would need to work with containerd.
* We would need to test the performance of FreeBSD "microVMs" and be pleased with the results.
If all of the above came true, this tool would not only be very useful on-premises, FreeBSD would also be able to fight for a place in the clouds :).

2) CBSD+puppet
+ Puppet and bash = stability, good support, experience.
+ Easy to add scripts/modules on top of it, as it uses bash + Puppet + SSH (https://www.bsdstore.ru/en/node_cbsd.html) to work with client nodes.
+ Community involvement looks good overall.
- Very... very bad documentation (most of the articles and videos are in Russian, and then poorly translated into English).
- Images... are not images in the sense of a "box with everything preinstalled"; it is more of a "script that builds the box we call a container": https://www.bsdstore.ru/ru/articles/cbsd_puppet_jail_images.html
- No integration with cloud providers' services.
- The market is unlikely to want a product built with such a tool (as with BSD overall? unless, of course, you host it yourself and the customer does not care how you do it).

This is how I see the approach using this tool (a rough sketch follows the list):
1) Create EC2 AMIs for the Puppet server and client: the server AMI should have puppetserver installed, and the client AMI should have the Puppet agent + the CBSD module + CBSD itself installed.
2) Using those AMIs and Terraform code, create the infrastructure.
2.1) Run Terraform, which will build the EC2 instances, connect the Puppet server and client, upload the Puppet manifest, and trigger a Puppet run on the client, which will contain the CBSD code to create the needed jails?
3) Using Puppet code, have the agents create jails with your applications inside.
4) To update... run Puppet again.
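
A very rough sketch of what steps 2)-4) could look like from the shell; the package names, the Puppet server hostname and the module wiring are assumptions on my side, not something I have tested end to end:
Code:
# On your workstation: build the AMIs first (e.g. with packer), then let
# Terraform create the puppetserver + client EC2 instances.
terraform init
terraform apply

# On the client node (via remote-exec, user data, or by hand):
pkg install -y puppet8 cbsd                        # package names/versions may differ
puppet agent -t --server puppet.internal.example   # placeholder server name
# The applied manifest is expected to use the CBSD Puppet module to
# declare the jails; afterwards they should show up here:
cbsd jls
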
3) Nomad + pot
+ That's what Yan Ka Chiu recommended in his EuroBSDcon 2022 talk: https://papers.freebsd.org/2022/eurobsdcon/chiu-freebsd_containers_in_production/
+ Jails are scalable and orchestratable with the help of Nomad and Consul.
- pot is not OCI-compatible, which means we will need to create pot-specific images; if we later want to move to some other "jail driver"/"container runtime", we would need to create new images.
+ Nomad, Consul and pot seem to be getting proper support (Nomad is a HashiCorp product (the Terraform guys), and pot was last updated 3 months ago... good?).
This is how I see these tools being used in the cloud (and I actually tested it; it worked nicely - a condensed sketch follows the list):
1) Create an EC2 AMI with Nomad and pot installed.
2) Create a pot image with the application, binaries, and whatever else you need preinstalled, and host it wherever Terraform (and the Nomad clients) will be able to fetch it.
3) Create a Terraform manifest that builds the EC2 instance, starts the Nomad client and server, and pushes the Nomad job that uses the pot task driver.
4) Run Terraform.
5) Ta-da, you have an EC2 FreeBSD instance with jails in it, and applications running inside those jails... cool.
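
The condensed sketch of steps 2)-3): the command names come from the pot and Nomad documentation, but treat the exact flags and the job file as assumptions to verify against the pot docs and the nomad-pot-driver README rather than a copy-paste recipe:
Code:
# 1) On a FreeBSD build host: create the jail image and export it.
pot create -p myapp -t single -b 13.2          # new "single" pot based on 13.2-RELEASE
pot set-cmd -p myapp -c /usr/local/bin/myapp   # command to run inside the jail
pot export -p myapp -t 1.0.0 -D /tmp           # compressed image + hash end up under /tmp

# 2) Upload the exported image somewhere the Nomad clients can fetch it
#    (an S3 bucket, any HTTP server, ...).

# 3) On the EC2 instance built from the AMI: start Nomad and submit the
#    job that uses the pot task driver (myapp.nomad not shown here).
service nomad onestart
nomad job run myapp.nomad
nomad job status myapp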

4) iocage
- Jails are "state-preserving", and we don't want them to be like that in the cloud (per Yan Ka Chiu, here).
- No way to orchestrate multiple instances at the same time? And therefore poor scalability?
- Python dependency.
- Not supported anymore? The latest release is from 2019: https://github.com/iocage/iocage/releases
5) BastilleBSD
- Jails are "state-preserving", and we don't want them to be like that in the cloud (in the words of Yan Ka Chiu). In his 2022 EuroBSDcon talk Yan Ka Chiu said "Bastille is kinda out of the question" (as I understood it, he meant out of the question for use in the cloud --_(-_-)_--) - here (we really need to work on the sound at BSDcons, I can't understand half of the things).
+ Open to community input.
+ They have a Bastillefile, which is kind of the same idea as a Dockerfile, which is cool.
- No orchestration/clustering capabilities (for now?); asked in this Git ticket.
6) runj
- Not production-ready (as stated by the author: https://github.com/samuelkarp/runj#runj).
- Personal project (does not *really* accept community input).
+ OCI-compatible and works with containerd, which in turn is CRI-compatible, which means it can work with Kubernetes... cool.
7) XC (by Yan Ka Chiu)
- Not production-ready (as stated by the author).
+ Open to community input.
+ Created by Yan Ka Chiu, who has given at least two talks related to "containers on FreeBSD" at EuroBSDcon, and when he presented it in 2023 the functionality looked very cool to me - here.
- There is no orchestration tool that works with it? It is not CRI-compatible, and as I remember, when I asked him about CRI support, he said it is *not* in the short-term plans.
+ OCI-compatible, which means (most?) Linux Docker images will work with this tool.
∞) (insert tool name)
There might be even more of them, and there will be more in the future, but more time is needed to look at all of them, and even more time to actually understand the differences between them. This article is not a "truth set in stone"; it is a process, one that will (hopefully?) become more and more informative over time, with input from the community, with the development of the tools mentioned above, and maybe with the invention of new ones.


The end
What are your thoughts? Which tools that I have not mentioned should be mentioned? Which tools should I take a look at? What do you think of taking FreeBSD into the clouds as an idea? Maybe someone has already done it and wants to share their experience?

P.S.
1) A "non-biased"? comparison from the BastilleBSD website: https://bastillebsd.org/compare/ (includes 2022 info only?)
2) Someone claims that as of 2023 only BastilleBSD, pot/Nomad, CBSD, and plain jail.conf(5) and jail(8) are worth using (https://forums.freebsd.org/threads/...jail-cbsd-pot-iocage-ezjail.86656/post-596707), and that the other options are out of the running because they have "not been updated for a long time"; though I have seen someone somewhere saying that ezjail is not "left without updates", it is just an already finished project with everything already implemented... heh.
3) There is a chart of the FreeBSD jail managers' lifetimes/timeline as of 2023: https://www.bsdstore.ru/img/freebsd-jail-chart-2022.png
4) If you are interested in the topic of this article, check out this talk from EuroBSDCon.
5) Nothing written above is "the ultimate truth".
 
1) Create EC2 AMIs for the Puppet server and client: the server AMI should have puppetserver installed, and the client AMI should have the Puppet agent + the CBSD module + CBSD itself installed.
I would like to correct a small inaccuracy in the case of using CBSD + Puppet: the Puppet server is optional.
The Puppet agent allows you to apply modules locally, which is what CBSD takes advantage of. You can use any Puppet modules (compatible with FreeBSD, e.g.: sudo, crontab, nginx, php, accounts, pkg, elasticsearch, postgresql, mysql, grafana, redis, memcached, ...) to (re-)configure services in containers (or container settings), and you do not need to install Puppet (+ any Puppet deps) inside each container (CBSD mounts the puppet-agent + modules into the jail via overlay/nullfs). The API version of CBSD allows you to proxy/pass any parameters to the Puppet modules. Thus, you get a programmable (via API or shell/CLI) cloud with a regular YAML file, e.g. (real example):
Code:
# global
crontab::purge: false
timezone::timezone: Europe/Berlin

profile::sysctl::entries:
  kern.init_shutdown_timeout:
    value: 900
  security.bsd.see_other_uids:
    value: 0
  security.bsd.see_other_gids:
    value: 0
  net.inet.icmp.icmplim:
    value: 0
  net.inet.tcp.fast_finwait2_recycle:
    value: 1

rcconf::config:
  syslogd_flags: "-ss"
  newsyslog_flags: "-CN -f /root/etc/newsyslog.conf"

profile::file::entries:
  /var/coredumps:
    ensure: "directory"
    path: "/var/coredumps"
    group: 0
    owner: 0
    mode: "0777"

profile::package::entries:
  sysutils/genisoimage:
    ensure: "present"
  tmux:
    ensure: "present"
  git:
    ensure: "present"
  mc:
    ensure: "present"
  ca_root_nss:
    ensure: "latest"
  cpu-microcode:
    ensure: "latest"

jail1:
  crontab::purge: true

  crontab::crontab_entries:
    "update_stats.sh":
      command: |
        /usr/bin/lockf -s -t0 /tmp/update_stats.lock timeout 50 /root/api/update_stats.sh > /dev/null 2>&1
      user: "root"
      minute: '*'
      hour: '*'
      weekday: '*'

  sudo::configs:
    "wheelgroup":
      "content": "%wheel ALL=(ALL) NOPASSWD: ALL"
      "priority": 10
    "oleg":
      "content": "oleg ALL=(ALL) NOPASSWD: ALL"
      "priority": 10
    "gitlab-runner":
      "content": "gitlab-runner ALL=(ALL) NOPASSWD: ALL"
      "priority": 10

  profiles::db::postgresql::db_configs:
    'listen_addresses':
      value: '127.0.0.1'
    'bgwriter_delay':
      value: '500ms'
    'checkpoint_completion_target':
      value: '0.9'
  profiles::db::postgresql::databases:
    listmonk: {}

jail2:
  timezone: UTC

  accounts::user_list:
    oleg:
      purge_sshkeys: true
      group: wheel
      password: "password"
      shell: /bin/csh
      sshkeys:
        - "pubkey1"

jail3:
  timezone: UTC

jail4:
  timezone: UTC
  sudo::purge: true
  sudo::config_file_replace: true

  nginx::service_ensure: running
  nginx::service_enable: true
  nginx::events_use: kqueue
  nginx::confd_purge: true
  nginx::server_purge: true
  nginx::daemon_user: www
  nginx::nginx_servers:
    "mirror.bsdstore.ru":
      server_name:
        - "mirror.bsdstore.ru"
      ipv6_enable: true
      ipv6_listen_options: ''
      autoindex: 'on'
      ssl_redirect: false
      ssl: false
      use_default_location: false
      www_root: '/usr/local/www/cbsd-mirror'

... The main strength of this solution is that you can reconfigure parameters at any time (instead of using 'static' templates with lots of 'sed', or static configs) and apply any modules, even in the same container. Also, several images are generated automatically by the CBSD CI/CD infrastructure; not that many, but enough as an example.
 
If you want small-scale bare metal, you have to look for a provider offering that. I'm paying $12 monthly for dedicated hardware.
 
Hello, don't forget AppJail. Take a look at: https://github.com/DtxdF/AppJail
Hi, thanks for your advice; I have now looked into AppJail.
So... my plan (in theory) for using AppJail is (a rough sketch follows the list):
1) Create an application "bundle" (a bunch of AppJail images).
2) Host this application bundle somewhere only you will be able to access it using Terraform.
2.1) To know the state of the applications inside the jails, use "healthcheckers": https://appjail.readthedocs.io/en/latest/healthcheckers/
3) Create an EC2 AMI that has AppJail preinstalled.
4) Call terraform apply, which should build EC2 instances out of the previously created AMI and, using Terraform `remote-exec`, call AppJail Director to spin up jails with the needed applications.
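
A sketch of what steps 3) and 4) could boil down to on the instance; `appjail quick` and `appjail-director info` come from the AppJail README and this thread, while the Director subcommands shown here (`up`/`down`) are assumptions of mine to double-check against the Director README:
Code:
# Baked into the AMI:
pkg install -y appjail

# One-off jail without Director (example taken from the AppJail README):
appjail quick myjail virtualnet=":<random> default" nat start

# With Director: describe the jails in a Director (YAML) file in the
# project directory, then manage the whole group from there
# (subcommand names are assumptions; check the Director README):
appjail-director up
appjail-director info      # shows, among other things, the latest log directory
appjail-director down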

The problems in theory:
1) Terraform does not keep track of what "remote-exec" does (95% sure that is the case, though I was not able to find much info about it),
but you can look into https://spacelift.io/blog/terraform-provisioners#why-provisioners-should-be-a-last-resort
2) We would need to build some kind of monitoring solution that gathers info from the AppJail "healthcheckers" and draws some kind of pretty graphs.

From my brief experience with AppJail:
+ Very fast to get your first jail and its image.
+ There is community interest in AppJail: https://github.com/DtxdF/AppJail (100 stars).
- There is little community interest in AppJail Director (for now?): https://github.com/DtxdF/director (8 stars).
- Documentation is overall good, but only for the `appjail` part, and it feels more like documentation for those who took part in developing the tool (it did not feel very user-friendly).
Documentation for AppJail Director is, uh, non-existent (except the README on GitHub)?... (15.02.2024)
* Where can I read about what each parameter of each command does?
* The guide https://forums.freebsd.org/threads/install-wordpress-with-appjail-director.90076/ does not work out of the box for some reason.
- Veeery young project; first release Oct 6, 2022: https://github.com/DtxdF/AppJail/releases?page=3
- Error handling is very bad (for appjail-director).
* Why would you print out something like "Creating web (<somekindofid>) ... FAIL!" but not the reason for the "FAIL" in the same place? I can of course go to `~/.director/logs` and find the reason for the "FAIL" myself... but why not print it in the first place... yes, I'm too lazy to check the logs myself.

In the end, I would say that the project has potential, but:
1) For some applications, companies, and individuals, orchestration is a must-have: they might have multiple environments/customers/applications and therefore a lot of different instances to manage, and doing so manually, even with the help of AppJail Director, will be hard.
CRI support, or its own orchestration solution, is needed to do this in a sane way.
2) Better documentation, with more **working** examples, would make it easier to understand how to use all the options that appjail and appjail-director provide.
 
Joann

Hi, thanks for this feedback, I needed it to improve some aspects of the project.

1) Create an application "bundle" (a bunch of AppJail images).
Remember that an image is optional. I think such a feature requires an article to explain why AppJail uses images in some of its Makejails, but you can use a Makejail as a provisioning script with instructions that some users find similar to a Dockerfile. Of course, a Makejail is also optional; you can use AppJail just like in the great days when ezjail was used to run jailed apps, i.e. create a jail using `appjail quick` and keep it as a pet.

- Documentation is overall good, but only for the `appjail` part, and it feels more like documentation for those who took part in developing the tool (it did not feel very user-friendly).
Documentation for AppJail Director is, uh, non-existent (except the README on GitHub)?... (15.02.2024)
Thanks for this, I will try to improve them.

- Error handling is very bad (for appjail-director).
* Why would you print out something like "Creating web (<somekindofid>) ... FAIL!" but not the reason for the "FAIL" in the same place? I can of course go to `~/.director/logs` and find the reason for the "FAIL" myself... but why not print it in the first place... yes, I'm too lazy to check the logs myself.

Yes, this is the intention. Since its purpose is not to create a single jail, but many, the console might not be the best place to put the output, so I use a log directory. You can see the latest log directory using `appjail-director info` for example.

* Where can I read about what each parameter of each command does?

Using `appjail help` and `appjail help <cmd>`.


Look at the versions. The current version of Director is `0.8.0`, while the one in the ports tree is `0.6.1`. For AppJail, the current version is `3.2.0` and the one in the ports tree is `3.1.0`. AppJail and Director must be synchronized to work correctly. I submitted patches for both [1][2] but they have not been reviewed yet.

Although, if you need help or want to give more feedback, I think AppJail's Telegram community is the best place; or, if you don't have Telegram, you can email me.

[1] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=276578
[2] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=275862

1) For some applications, companies, and individuals, orchestration is a must-have: they might have multiple environments/customers/applications and therefore a lot of different instances to manage, and doing so manually, even with the help of AppJail Director, will be hard.
CRI support, or its own orchestration solution, is needed to do this in a sane way.
2) Better documentation, with more **working** examples, would make it easier to understand how to use all the options that appjail and appjail-director provide.
Thank you very much for this, I will take it into account to improve the project.
 
Thank you very much for taking part in the development of free software and in this discussion. Now that you have explained your reasoning on error handling, it makes sense, as does why the guide mentioned above did not work out of the box (for some reason I totally forgot about checking the versions ;P). Thank you once again.
 