Console-only safe machine: jail vs. VM?

What's the better option to separate operating system from service with FreeBSD, jails or VMs?
Relevant criteria for my use case:
  • While I am a Linux veteran, I have fairly little experience with FreeBSD.
  • All software components need to be upgradable.
  • Upgrades must not risk data loss.
  • The machine must be safe against attacks from a malware-infested LAN, and from directed attacks by anybody who does not have a zero-day exploit.
    To facilitate this, I plan to use a two-layer approach:
    1. The base layer can only be managed from a physical console and never accepts any network connections.
    2. The services layer contains services that may accept network connections (but usually do not).
I see two possibilities:
  1. Use FreeBSD and bhyve for the base layer, run the services in VMs.
  2. Use FreeBSD and ezjail for the base layer, run the services in jails.
Advantages and disadvantages? Any third option (if it offers substantial advantages)?
 
I'm not sure that security is the best way to pick which type of virtualization to use, as each could potentially have exploits that the other lacks, or they could both share a common exploit. With the ever increasing number of hardware exploits, I'd be far more concerned about what hardware I'm running on than what I'm using for virtualization.

Anyway, I would instead use these criteria to pick between jails and bhyve:
  1. Do you have to run a different kernel or OS? If so, your only option is bhyve.
  2. Are you constrained by resources, or want the best performance possible? If so, use jails.
  3. Do you just want to isolate processes from one another? While it's possible to use bhyve, you'll likely do much better (resource and convenience-wise) with thin jails.
  4. Still not sure? You can try both, and see which fits your needs.
Basically, I recommend using jails unless you have a need that mandates using bhyve. If memory serves, the Linux equivalent of jails is LXC, while bhyve is comparable to Linux's KVM, if that helps you.
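If you go the jail route, a minimal ezjail sketch could look like this (the jail name and address are just examples):

  # one-time setup: fetch and populate the shared base jail
  ezjail-admin install
  # create a thin jail and start it
  ezjail-admin create backup 192.168.1.50
  ezjail-admin start backup
  # get a root console inside the jail for configuration
  ezjail-admin console backup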
 
I like the bhyve approach.
Not sure how you would handle updates/pkg upgrades to the bhyve host, though, with no Ethernet interfaces assigned.
You could temporarily assign an interface to set it all up and then divert the interfaces to the bhyve guests.
But what about maintenance? It is also convenient to ssh into my VM host to check hypervisor stats.
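For the temporary-interface idea, a rough sketch of such a maintenance window (em0 and the addresses are assumptions, adjust to your NIC and LAN):

  # bring the host NIC up only while updating
  ifconfig em0 inet 192.168.1.10/24 up
  route add default 192.168.1.1
  pkg update && pkg upgrade
  # return the host to its offline state
  ifconfig em0 down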

I start my VMs using /etc/rc.local scripting and fire up each VM inside tmux so I can shift between VMs.
That way I can tmux attach and manage my VMs; I used to use nmdm devices to reach VMs locally.
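Roughly like this, using the vmrun.sh example script that ships with FreeBSD (the VM name, sizes, and devices are placeholders):

  # /etc/rc.local: start a guest inside a detached tmux session
  tmux new-session -d -s vm0 \
      'sh /usr/share/examples/bhyve/vmrun.sh -c 2 -m 2048M -t tap0 -d /vm/vm0.img vm0'
  # later: 'tmux attach -t vm0' drops me onto the guest console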

I like the convenience of VMs, and I also use an i386 VM for my 32-bit needs.
For that one I don't use the bhyve UEFI firmware, so it takes two lines in my script to fire it up.
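For a FreeBSD guest booted without UEFI, the two lines are essentially a bhyveload/bhyve pair along these lines (paths, sizes, and the VM name are examples):

  # load the FreeBSD/i386 guest kernel, then run the guest
  bhyveload -m 1024M -d /vm/i386.img i386vm
  bhyve -c 1 -m 1024M -A -H -s 0,hostbridge -s 2,virtio-blk,/vm/i386.img \
      -s 31,lpc -l com1,stdio i386vm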

My question is: how many 'containers' do you need? Above 4-5 you might want to use a bhyve helper program.
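For example, the sysutils/vm-bhyve port is one such helper (the names and sizes below are placeholders, and vm_enable/vm_dir need to be set in rc.conf first):

  pkg install vm-bhyve
  vm init                     # set up the datastore and switches
  vm create -s 20G svc0       # create a guest with a 20 GB disk
  vm install svc0 FreeBSD-13.2-RELEASE-amd64-disc1.iso
  vm start svc0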

Hardware is another issue. It helps to have an LGA2011 platform if you need many VMs and a high core count.
I use a dual-CPU Supermicro box with two Xeon E5-2650L v3s on my VM box. That gives me 48 cores, and I have 64 GB RAM installed.
So I have a VM with a lot of cores dedicated to compiling FreeBSD.
Others include a poudriere VM for building packages and a Monit VM for system monitoring.

Storage passed through to a VM does take a pretty big hit. I host my VMs on a pair of gmirror-ed Samsung PM951 NVMe drives.
On bare metal they deliver 1000 MB/s, but in a VM they show up as emulated bhyve SATA at half the speed of the host.
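If the guests support it, virtio-blk instead of the SATA emulation usually narrows that gap; in a raw bhyve invocation it is just a different device string:

  -s 3,ahci-hd,/vm/disk.img      # emulated SATA, as described above
  -s 3,virtio-blk,/vm/disk.img   # paravirtualized block device, usually faster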
 
Not sure how you would handle updates/pkg upgrades to the bhyve host, though, with no Ethernet interfaces assigned.

Ah sorry, I wasn't clear enough here.
The base layer does not accept incoming connections, but it is supposed to open outgoing ones.
It also needs to accept incoming connections on behalf of the services, just not act on that data. (I am undecided whether it should also do packet filtering: on one hand, that adds an extra layer of protection; on the other hand, the packet filter itself could be vulnerable. On the third hand, the base layer needs to look at the packets anyway to route them to the correct service. I haven't decided anything here yet.)
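If I do end up filtering on the base layer, a pf redirect for one service could look roughly like this (the interface and addresses are made up):

  # /etc/pf.conf sketch; translation rules must precede filter rules
  ext_if = "em0"
  # hand incoming HTTPS straight to a service guest
  rdr pass on $ext_if proto tcp from any to ($ext_if) port 443 -> 10.0.0.2 port 443
  # the host itself accepts nothing from the network
  block in on $ext_if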

But what about maintenance? It is also convenient to ssh into my VM host to check hypervisor stats.

Yes, it would be highly convenient, but it doesn't cover the security scenario:
if the machine I use to ssh into the box is compromised, the attacker will install a keylogger on the ssh command. I'd expect that to be part of a standard attack script nowadays, since ssh is in such widespread use and attackers have begun targeting non-Windows machines.

My question is: how many 'containers' do you need? Above 4-5 you might want to use a bhyve helper program.

Not many:
  1. Backup. One service per data group (we currently have two), all set up in roughly the same fashion:
    1. Linux backup: rsyncs to the Linux machines on the LAN and pulls in the deltas (a sketch follows this list).
    2. Windows backup, if I can't set up rsync daemons on the Windows machines.
    3. A cron job that cleans out old backups.
  2. Something that serves the backups back to the LAN machines. Either Samba or a small HTTPS server. (Probably the latter; Samba has an awful security track record. OTOH, HTTPS cannot easily serve entire directories in a restore.) Anyway, this service does not get write access to the backups; that is reserved exclusively for the backup process.
  3. An HTTPS page for monitoring.
I believe that's it.
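A rough sketch of how I imagine one such pull and the cleanup job (host names, paths, and retention are placeholders):

  # pull-style rsync from a LAN machine into a dated tree,
  # hard-linking unchanged files against the previous run
  rsync -a --link-dest=/backups/host1/last \
      backup@host1.lan:/home/ /backups/host1/2024-01-01/

  # crontab entry (one line): drop dated trees older than 90 days
  0 3 * * * find /backups/host1 -maxdepth 1 -type d -mtime +90 -exec rm -rf {} +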

Hardware is another issue.

It's some amd64 CPU with RAM and HDDs to match.
There aren't many changes per day, so the hardware is likely overpowered anyway :)

Others include a poudriere VM for building packages and a Monit VM for system monitoring.

I understand that poudriere is a jail-based package build system.
How would this factor into my situation? Does it make sense to pursue it (apart from generally getting more knowledgeable, which is always good, of course)?
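From what I've read so far, the basic workflow seems to be roughly this (the jail name, release, and ports are examples):

  poudriere jail -c -j 132amd64 -v 13.2-RELEASE   # create a build jail
  poudriere ports -c                              # fetch a ports tree
  poudriere bulk -j 132amd64 www/nginx net/rsync  # build packages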

Storage passed through to a VM does take a pretty big hit. I host my VMs on a pair of gmirror-ed Samsung PM951 NVMe drives.
On bare metal they deliver 1000 MB/s, but in a VM they show up as emulated bhyve SATA at half the speed of the host.

Hm. I guess your machine would be idling in my network, then: my use case has data essentially coming in over Gigabit Ethernet, which translates to a maximum data rate of roughly 100 MB/s. I believe the box should handle that (and if it doesn't, ah well, long-term security and reliability trump performance in this case).
Full disclosure: I have a Gen8 ProLiant server. I believe it's widely considered reasonable.
 