Anybody running FreeBSD as guest on their laptop? Which minimal Linux hypervisor setup would you recommend?

Hi,

I am considering running FreeBSD as my "main" OS on my laptop.
I understand hardware support will be the big pain point (as I am not ready to buy a new laptop yet), and I was thinking of avoiding it for now in this way:

- I would run a minimal Linux hypervisor (minimal in the sense that I wouldn't modify or interact with it much)
- I would run FreeBSD as a guest

I am aware these are more Linux questions, but ironically I might get the best insights here :sssh:

My questions are:
- Do any of you do that? If not, what are the blocking points?
- What Linux distribution would you run for the hypervisor? (I am thinking Alpine or NixOS, as I can even do an immutable, minimal, run-from-RAM system)
- Regarding the file system, I would of course like to leverage ZFS; what would you recommend for disk partitioning and management?
- For networking and wifi, since I would like to keep hypervisor interaction minimal, I am thinking of just running an OpenWRT VM, since it is lightweight (I have already done that on some servers)

Now maybe the main blocking point:
Ideally I would like to keep an acceptable experience/performance for the graphical interface, meaning comfortable web browsing and the occasional FHD or 2K video (no gaming).
FYI I do not have any fancy GPU, just a basic Intel UHD card.
- What would you recommend there? (SPICE, VNC, Xorg, Wayland, etc.)

Thank you so much for your insights!
 
I have a FreeBSD VM in kvm/qemu on Debian. Works fine, except network performance is less than that of a bhyve VM on a FreeBSD host.

I don't do graphics in there.

What laptop do you have? Maybe it won't be full of pain.
 
Recently, I started to move around a lot, and frequently rely on my Framework 16 notebook which runs a FreeBSD VM under kvm on Ubuntu. I prefer Debian, but chose Ubuntu because it was known to work well on the hardware. I let Linux run the desktop. It's gnome, but could be anything.

I want reliable, recoverable, maintainable systems. I use rsync to keep copies of my home directory on multiple systems. I also use a large USB thumb drive to back up my home directory when I'm travelling with the notebook.

There are two WD Black NVMe SSDs in the notebook. They are partitioned identically -- a 500MB EFI partition, and a 2TB Linux RAID partition. The 2TB partitions are mirror'd with md. The EFI partitions are not mirror'd (it's contrary to the fundamental design of EFI), but both are mounted at boot and Ubuntu knows how to keep them in sync (so they are both exact copies of each other and either can be used to boot). The 2TB md mirror is under control of lvm, on which are built XFS file systems (a 200GB root and the rest for VMs). So an XFS file system underlies ZFS used on the FreeBSD VM.
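For anyone wanting to reproduce that layout, it might be sketched roughly as below. Device names (nvme0n1/nvme1n1) and the volume group name are assumptions, and these commands are destructive, so treat this as an illustration rather than a recipe:

```shell
# Hypothetical sketch of the layout described above -- device names are
# assumptions, and these commands wipe data; adapt carefully before use.

# Identical partitioning on both NVMe drives: 500MB EFI + Linux RAID member
sgdisk -n1:0:+500M -t1:ef00 -n2:0:0 -t2:fd00 /dev/nvme0n1
sgdisk -n1:0:+500M -t1:ef00 -n2:0:0 -t2:fd00 /dev/nvme1n1

# Mirror the big partitions with md (the EFI partitions stay unmirrored)
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/nvme0n1p2 /dev/nvme1n1p2

# Put lvm on top of the mirror, then carve out the XFS file systems
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 200G -n root vg0     # 200GB root
lvcreate -l 100%FREE -n vms vg0  # the rest for VM images
mkfs.xfs /dev/vg0/root
mkfs.xfs /dev/vg0/vms
```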

The only gripe I have with kvm is with the networking. The network runs on WiFi. Bridging WiFi is a challenge. It used to work seamlessly on my old Windows 8 notebook where I used VirtualBox to manage both FreeBSD and Linux VMs. But kvm under Linux uses iptables and NAT to manage the network for kvm clients when the hypervisor's network is WiFi. VM clients run on a private network. They can connect outwards (e.g. to the Internet) using NAT without issues. But I have to configure the kvm server as an ssh jump server to access the VM clients from anywhere else in the network. That works for me, but it's certainly not seamless.
 
The only gripe I have with kvm is with the networking. The network runs on WiFi. Bridging WiFi is a challenge. It used to work seamlessly on my old Windows 8 notebook where I used VirtualBox to manage both FreeBSD and Linux VMs. But kvm under Linux uses iptables and NAT to manage the network for kvm clients when the hypervisor's network is WiFi. VM clients run on a private network. They can connect outwards (e.g. to the Internet) using NAT without issues. But I have to configure the kvm server as an ssh jump server to access the VM clients from anywhere else in the network. That works for me, but it's certainly not seamless.

I dealt with that on FreeBSD just last week. Another NAT gap in my network :mad:

It's a pretty big disadvantage of WiFi. You'd think that virtualization was far enough along that multiple-MAC address use should have been considered.
 
I have a FreeBSD 14 VM running on my work laptop as a VBox guest on an HP 840G9 running W11. It works well enough. Though, I do prefer FreeBSD running on bare metal, on my own HP 840G5.
 
What laptop do you have? Maybe it won't be full of pain.
Unfortunately I looked it up on bsd-hardware.info and it is no good ...

I did the whole "trying to buy the right laptop for Linux" thing, hacking it to make everything work, 20+ years ago, but looking back I wish I had spent my time on other parts of the system.
Looking at the amazing progress the industry has made in virtualisation on consumer-grade hardware, I feel blessed to be able to switch OSes and hopefully avoid the hardware support problems.

The only gripe I have with kvm is with the networking. The network runs on WiFi. Bridging WiFi is a challenge. It used to work seamlessly on my old Windows 8 notebook where I used VirtualBox to manage both FreeBSD and Linux VMs. But kvm under Linux uses iptables and NAT to manage the network for kvm clients when the hypervisor's network is WiFi. VM clients run on a private network. They can connect outwards (e.g. to the Internet) using NAT without issues. But I have to configure the kvm server as an ssh jump server to access the VM clients from anywhere else in the network. That works for me, but it's certainly not seamless.

But isn't this a reason to just spin up an OpenWrt VM (or anything like it that supports the wifi adapter), pass through all network components, and just let it manage them?
I already did this on a server without investigating the performance much (but I didn't feel any drop either).
 
But isn't this a reason to just spin up an OpenWrt VM (or anything like it that supports the wifi adapter), pass through all network components, and just let it manage them?
I already did this on a server without investigating the performance much (but I didn't feel any drop either).
I have no experience of that approach, but I would be quite interested in hearing what might be achieved with an OpenWRT VM in this context.
 
- What Linux distribution would you run for the hypervisor? (I am thinking Alpine or NixOS as I can even do an immutable, minimal run-from-RAM system)

I'd use Fedora Workstation, but I'd recommend whatever mainstream desktop-oriented distro and tech you're interested in. Alpine or NixOS look like they'd be fine though!

I like both of those Linux distros for having a unique way of doing things (notably different from mainstream distros) -- see Alpine used in postmarketOS -- but another distro I'd like to mention is Void; I like it for featuring musl. I have never tried any of those three distros personally, though.
 
But isn't this a reason to just spin up an OpenWrt VM (or anything like it that supports the wifi adapter), pass through all network components, and just let it manage them?
I basically did this by having an old NETGEAR router with OpenWRT, bridging it wirelessly to my main AX router, and connecting it to my FreeBSD laptop via short Ethernet cable; although I guess it wasn't a VM and a whole separate bare-metal device :p
 
I have no experience of that approach, but I would be quite interested in hearing what might be achieved with an OpenWRT VM in this context.
I have done that a few times since:
- It usually has all the wifi drivers you might need (useful on a FreeBSD host),
- On a system where I am not fully confident in how to configure the network well, it kind of prevents me from creating a big security hole (as I pass through all network adapters to OpenWRT),
- It is very lightweight; I believe you can give it 128 or 256MB of RAM and it will be fine.

But I guess it is far from ideal from a performance point of view...
 
What about wifibox? I used that with an Intel AC 9560 card on 14.1.

If I understand right, it's basically the same concept (it runs a light distro inside a bhyve VM, passes the wifi card to it, uses Linux wifi drivers, and presents the network interface to FreeBSD). wifibox is a pre-packaged, easy-to-set-up solution for FreeBSD that handles the bhyve VM configuration/dependency/service stuff.
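For anyone curious, the FreeBSD-side setup is, from memory, roughly the following. The exact steps and service names may differ between wifibox versions, so take this as an assumption-laden sketch and check the port's documentation:

```shell
# Rough wifibox setup sketch on FreeBSD -- details may vary by version;
# consult the net/wifibox port documentation before relying on this.

pkg install wifibox            # or build from net/wifibox in ports

# bhyve needs the vmm kernel module loaded at boot
sysrc kld_list+=" vmm"

# Enable and start the service that boots the embedded Linux guest
sysrc wifibox_enable="YES"
service wifibox start

# The guest drives the wifi card with Linux drivers and presents a
# virtual interface to FreeBSD, which you configure like any other NIC.
```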
 
Personally, I have used VirtualBox for years, and had only minor issues with any OS. I've been seriously eyeing the Framework laptops, and I'd be interested in knowing more about them (than just the web site).

Here's something you can try on your FW laptop. Get a 64 or preferably 128 GB USB thumb drive, install FreeBSD on that, and boot from it. Does your wifi work correctly? All your other periphs? If nothing else, you have created a bootable maintenance disk for FreeBSD. That keeps you from having to wipe your Linux and start over, and it will give you some more knowledge about your hardware.

I'm curious: did you accept the default hardware from them, ask for something specific, or order your own? Not trying to be nosy; as I said, I've been eyeing that machine for a while. (My suspicion is that it will run all the BSDs with little to no hardware issues, if you make sure you have a compatible wifi NIC.)
 
Just to re-emphasise my original point regarding kvm, VMs, and WiFi.

I'm going to use IPV4 examples because it's what I (mostly) use.

There are no issues with bridging VM clients to the kvm server's NIC if you have a copper Ethernet cable connecting your kvm server to the network. Assume it's 192.168.1.0/24. Everything can live on 192.168.1.0/24. You can configure the IP address of your kvm server and VMs in the usual way. Static IP or DHCP both work as they would for any other host on 192.168.1.0/24.

You can not bridge a WiFi adapter to anything. So kvm places the VMs on a separate private network. Your VM goes onto a virtual adapter on, for example, the 192.168.8.0/24 network. That network is accessible from your kvm server and one or more VM(s) local to the physical kvm server. For your VM client(s) to escape onto 192.168.1.0/24, the kvm server uses iptables to implement NAT for the VM(s). I resent this because I want iptables on the kvm server for other things (like a proper firewall).
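The NAT that kvm sets up boils down to a masquerade rule plus forwarding rules like the following. This is a simplified sketch: the real libvirt rules live in dedicated chains, and the interface names (wlan0, virbr0) and subnet here are assumptions:

```shell
# Simplified sketch of what kvm/libvirt's NAT amounts to.
# wlan0, virbr0, and 192.168.8.0/24 are assumed names -- adjust to taste.

# Masquerade VM traffic leaving through the WiFi adapter
iptables -t nat -A POSTROUTING -s 192.168.8.0/24 -o wlan0 -j MASQUERADE

# Allow forwarding out from the virtual bridge, and replies back in
iptables -A FORWARD -i virbr0 -o wlan0 -s 192.168.8.0/24 -j ACCEPT
iptables -A FORWARD -i wlan0 -o virbr0 -d 192.168.8.0/24 \
         -m state --state RELATED,ESTABLISHED -j ACCEPT
```

This is also why VMs can reach out but nothing on 192.168.1.0/24 can initiate a connection in: the masquerade hides them behind the kvm server's own address.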

Hosts on 192.168.1.0/24, other than the kvm server, can not see 192.168.8.0/24. So you have to make it visible. You could have the kvm server advertise a route to 192.168.8.0/24. I chose to configure ssh (via /etc/ssh/ssh_config) to jump to hosts on 192.168.8.0/24 via the kvm server. This only works for ssh connections (which is all I need).
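That jump setup can be expressed with a stanza along these lines in /etc/ssh/ssh_config (or a user's ~/.ssh/config). The host names and addresses here are made up for illustration:

```
# Reach VMs on the private 192.168.8.0/24 network by jumping through
# the kvm server. Hostnames and addresses are illustrative.
Host kvmserver
    HostName 192.168.1.10

Host 192.168.8.*
    ProxyJump kvmserver
```

With that in place, `ssh 192.168.8.5` from any host on the LAN transparently tunnels through the kvm server.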

My point is that a WiFi adapter on a kvm server does NOT behave like a wired Ethernet adapter. A VM will never be on the same external network as the kvm server.

None of this precludes the use of an OpenWRT VM to handle networking for your kvm server, but your experience with wired networks is not necessarily going to be a complete guide. I guess that, at a minimum, some extra routing would be required.

Notes:
  1. I really like the concept of letting OpenWRT handle the network connection to the real world. Though I wonder about chicken-or-egg problems with the network as you boot the physical server without any network access before the OpenWRT VM starts.
  2. VirtualBox under Windows can bridge WiFi adapters, and VMs can live on the same network as the hypervisor (well, they did on Windows 8.1). If VirtualBox can do it, so should kvm! It pains me to admit that Windows is better than Linux in this regard.
 