Solved: FreeBSD as (Xen?) hypervisor on a USB thumb drive

Hi Forum. This thread was prompted by this initial discussion in #general-hardware. Having failed to get any reasonable performance running FreeBSD off a USB thumb drive, I've pondered moving root (/ - I guess via vfs.root.mountfrom) to a fast NVMe. At which point it dawned on me what people had been talking about when they'd mention something like "you could run a hypervisor from a USB drive with little if any writes and have all your guest VMs on a fast SSD". I had no idea what they meant until I tried that trick of having root on a separate disk. This, after all, is kind of like that, but with a single guest FreeBSD "VM" that takes up an entire partition (maybe more than one).
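
For the curious, the "trick" itself is just a loader tunable. A minimal sketch, assuming a UFS root on a GPT-labelled NVMe partition (the label "nvroot" is made up; substitute your own device or label):

    # /boot/loader.conf on the USB stick
    # "nvroot" is a placeholder GPT label for the NVMe root partition
    vfs.root.mountfrom="ufs:/dev/gpt/nvroot"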

I have no immediate need to run multiple VMs, but the idea is no longer whimsical and I can see some use. A cursory look at FreeBSD as a host hypervisor kinda suggests that it may turn out to be its own can of worms at best, or a gorilla that comes with its own jungle at worst. Do people run such setups?

Essentially (at least atm) I only need to run one FreeBSD guest that would simply consume all resources.

It seems to me (please forgive and correct me if that impression is false) that bhyve isn't ready for prime time, and I'm not sure what the benefit there would be.

Failing that, FreeBSD Xen dom0. That seems to be lacking in info/docs, but I guess it should be possible.

Any experience, pointers, or opinions on any of the above? Should I not bother with hypervisors at all, or should I be checking out the competition, e.g. KVM or whatever the hypervisor du jour is?

Thank you
 
It seems to me (please forgive and correct me if that impression is false) that bhyve isn't ready for prime time, and I'm not sure what the benefit there would be.
I think that impression is false. There are already lots of people using it successfully in production environments. Sure, it doesn't have a lot of the fancy features of VMware's vSphere or Citrix's XenServer, but you don't always need them, and there are ways around some of those limitations. Besides that, development is continuously ongoing and new features are added all the time. You should definitely give it a try.

Failing that, FreeBSD Xen dom0. That seems to be lacking in info/docs, but I guess it should be possible.
Yes, and no. Yes, there's been considerable effort to get this working, but there are a lot of caveats, and if you're having problems with bhyve(8) then you should definitely skip this.
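
For reference, this is roughly what the dom0 side involves, as a sketch based on the Handbook's Xen chapter (memory and vCPU numbers are placeholders, and the exact xen_cmdline tokens depend on the Xen version, so double-check the current instructions):

    # install the dom0 bits
    pkg install xen-kernel xen-tools
    # /boot/loader.conf (values are placeholders)
    xen_kernel="/boot/xen"
    xen_cmdline="dom0_mem=8192M dom0_max_vcpus=4 console=vga,com1 com1=115200,8n1"
    # /etc/rc.conf
    xencommons_enable="YES"

And that's before you get to the guest configs, which is where most of the caveats live.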

 
I am not quite sure of your requirements; your mention of Xen dom0 suggests they are substantial.

However, for user-friendly GUI / workstation-class virtualization, remember we also have VirtualBox in ports.
 
I am not quite sure of your requirements; your mention of Xen dom0 suggests they are substantial.

However, for user-friendly GUI / workstation-class virtualization, remember we also have VirtualBox in ports.
This runs on a Dell PowerEdge R720 server that I only ever access via ssh. All tasks are computationally intensive. I'd only ever use VirtualBox on my laptop or something, tbh.
 
All tasks are computationally intensive.
Then you probably don't want to use VMs at all and should simply run it on the "bare" OS, taking full advantage of all available cores. VMs are useful to "compartmentalize" various different systems on a single hardware host. But this assumes that most of the time an application is idling (a typical "office" server doesn't do much 90% of the time and only has a few "peak" usage patterns). The host has to schedule cores and assign them to each individual VM. Using a lot of cores in a VM is typically bad for overall performance because the "wait time" for all those cores to become available increases. The more VMs you run, the more problematic this scheduling becomes. So you often don't want to assign more than 4 cores to a VM.
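
If you do end up trying bhyve via sysutils/vm-bhyve, that core limit is just one line in the guest's config. A sketch (the VM name "worker" and the sizes are made up):

    # $vm_dir/worker/worker.conf (sketch)
    cpu=4
    memory=8G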

Just stick a couple of reasonably affordable 10K or 15K drives in it, install FreeBSD on the disks and work from there. I'm pretty sure you're not going to need the extra IOPS an NVMe will provide.
 
SirDice You are probably right. However, I need those 8 SAS drives as "scratch" space, so I've been trying to avoid putting the OS there.

The point re scheduling cores for VMs is well received. Thank you. This is the kind of insight from the trenches I'd hoped for when I posted the question. I've no immediate need to "separate" things, but for that I may want to learn a bit about jails. A bit overwhelmed as it is :)
 
However, I need those 8 SAS drives as "scratch" space, so I've been trying to avoid putting the OS there.
A good reason to use ZFS here. That way the OS and the data can share the storage capacity while keeping them separated on different datasets (filesystems). The OS itself isn't going to take up much space so there's plenty left over for the "scratch" data.
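
For example, a sketch (pool and dataset names are made up):

    # one pool, OS and scratch data separated into datasets
    zfs create -o mountpoint=/scratch zroot/scratch
    # optionally stop the scratch data from eating the whole pool
    zfs set quota=6T zroot/scratch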
 
A good reason to use ZFS here. That way the OS and the data can share the storage capacity while keeping them separated on different datasets (filesystems). The OS itself isn't going to take up much space so there's plenty left over for the "scratch" data.
Yeah, I mostly understand the idea of "stretching multiple partitions" with ZFS now. My use case, however, assumes those drives can be hot-swapped at any moment, so I can't put an OS there, even on a separate partition. Initially I did have the OS on part of the first drive, and your proposed ZFS solution would've done the trick, but the hot-swap requirement changes that. Why do you think I started that dance with USB sticks, etc. :)
 
My use case, however, assumes those drives can be hot-swapped at any moment, so I can't put an OS there, even on a separate partition.
Unless the data is mirrored? But I think you mentioned RAID0 in another thread. Or at least you need the ability to remove the drives with the data on them. That would be problematic if the OS is on there, yes.
 
Briefly: I have "worker" drives and "target" drives. The workers may benefit from being striped, like a RAID0 setup or something similar with ZFS. I want to run them at their highest speed, and since I also use SSDs, having them as one big drive may actually help me reclaim some of the space lost to write chunks not quite matching SSD capacity.

The target drives just keep the data the workers produce, and these drives may be removed and placed in "cold" storage like a JBOD, etc.
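
In ZFS terms I picture it roughly like this, just a sketch (device names are made up, and a striped pool has no redundancy at all):

    # worker SSDs striped together for speed
    zpool create workers da0 da1 da2 da3
    # one pool per removable target drive, exported before it's pulled
    zpool create target0 da4
    zpool export target0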

This particular setup isn't really related to the hypervisor stuff, tbh. My curiosity was piqued by the similarity between moving "mountroot" around while booting off a USB thumb drive and booting into a hypervisor with the guest VMs sitting on fast drives. But for my case I admit I don't see much benefit in going full-on hypervisor + guest VMs. That's why I asked the original question.
 
Just an FYI: Xen requires devel/libvirt, and libvirt's default options are set for bhyve.
So for Xen you must compile libvirt from ports and enable the Xen option for the port.
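
In other words, something like this (a sketch; the option label is whatever the config dialog shows for Xen):

    cd /usr/ports/devel/libvirt
    make config            # enable the Xen backend here
    make install clean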

I like to use a 'Disk on Module' for the OS and NVMe in RAID1 for hosting the VMs.
Currently using a 64GB Innodisk that plugs right into the SATA connector plus a small 5V power connector.
That's what I use on my storage servers: a GEOM RAID1 of DOMs, then ZFS for the 24 bays of storage.
I like to decouple the storage from the OS.
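
The DOM mirror itself is just gmirror, roughly like this (a sketch; ada0/ada1 stand in for whatever the two DOMs show up as, done from a live/installer shell before installing onto /dev/mirror/gm0):

    gmirror load
    gmirror label -v -b round-robin gm0 /dev/ada0 /dev/ada1
    echo 'geom_mirror_load="YES"' >> /boot/loader.conf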
 
Just an FYI: Xen requires devel/libvirt, and libvirt's default options are set for bhyve.
So for Xen you must compile libvirt from ports and enable the Xen option for the port.
Noted. Thank you!

Re DOMs: I learnt about them earlier, and they would've been a great solution with those two empty SATA ports idling on the MB. Sadly, they're ridiculously overpriced for what they are: (SATA 2 speed + expensive) vs (a lost PCIe slot + a cheap yet fast PCIe M.2 SSD). For now I've settled on the latter. Had I a cheap DOM on hand, I would've gone with that.

Anything of interest, or a summary of your experience running Xen there?
 
Anything of interest, or a summary of your experience running Xen there?
Never tried it, and I'd always suggest first identifying which features of Xen you really need that bhyve doesn't offer. If the answer is none, go with bhyve. I've been using it for years for a variety of VMs (running FreeBSD, Linux and Windows), with storage provided by sparse zvol datasets and sysutils/vm-bhyve for simple management on the console.
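
For the OP, the whole setup is roughly this, as a sketch (pool name, NIC, VM name and the ISO filename are placeholders; the sparse-zvol backing is configured per guest in its .conf, see the vm-bhyve docs):

    pkg install vm-bhyve
    sysrc vm_enable="YES"
    sysrc vm_dir="zfs:zroot/vm"
    zfs create zroot/vm
    vm init
    vm switch create public
    vm switch add public em0
    # fetch an install image first, e.g. with: vm iso <url-or-local-path>
    vm create -s 50G guest0        # set disk0_dev="sparse-zvol" in guest0.conf for zvol backing
    vm install guest0 FreeBSD-14.0-RELEASE-amd64-disc1.iso
    vm console guest0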
 