Can the latest FreeBSD work as a Xen Dom0?

Dear all, at the moment I use NetBSD as a Xen Dom0, but NetBSD does not support some popular hardware. Does anyone know if the latest release of FreeBSD supports Xen (meaning, can it work as a Xen Dom0)? If not, when...? Thanks.
 
Dom0 support has never been a part of FreeBSD, and I do not believe there are any plans to add it.

For the best Xen support, you need to be running a Linux-based Dom0.

For any kind of recent hardware, though, I'd recommend Linux-KVM over Xen.
 
Use Virtualbox instead of Xen anyway. It's faster, and the new Virtualbox 4 will have more features than are available on Xen or KVM. VBox coupled with ZFS makes a great VM host.
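For what it's worth, a minimal sketch of what that can look like on a FreeBSD host, backing a VBox disk with a ZFS zvol (the pool, VM name, and paths below are made up, adjust to taste):

zfs create -V 20G tank/vbox/guest1                        # zvol to back the VM disk
VBoxManage internalcommands createrawvmdk \
    -filename /vm/guest1.vmdk -rawdisk /dev/zvol/tank/vbox/guest1
VBoxManage createvm --name guest1 --register
VBoxManage storagectl guest1 --name SATA --add sata
VBoxManage storageattach guest1 --storagectl SATA --port 0 \
    --type hdd --medium /vm/guest1.vmdk

You also get snapshots and send/receive of the zvol for free that way, instead of relying on VBox's own snapshot format.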
 
Thanks, both. I asked this question here because I noticed that http://wiki.freebsd.org/FreeBSD/Xen has been updated. The old page did say "The port will only run as a guest (ie. domU) right now, on i386/PAE platforms.", but the new one does not mention this limitation.

Even though KVM has become part of the Linux kernel, most people say Xen has better performance than KVM. I'd like to try a Linux-based Xen Dom0.

Virtualbox is a good solution for the enterprise; is it free for personal research?
 
In theory, Xen should be faster, especially with a fully para-virtualised setup.

However, Xen is many, *many*, *MANY* times harder to configure than KVM, and many times harder to manage than KVM, and trying to use different OSes (or even different versions of Linux) will drive you to drinking on the job.

Xen is just annoying, and should be avoided. It's going the way of the dodo, anyway, with every major Linux distro abandoning Dom0 support.

KVM on the host is the way to go. Just use a slimmed down Linux with the latest KVM, and install whatever you want into the VMs. Any OS that can be installed onto a P3 system can be installed in a KVM virtual machine.
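To give a rough idea of how little is involved, here is a hedged sketch of booting a guest with plain qemu-kvm on such a slimmed-down host (the disk image, ISO path, and sizes are placeholders):

qemu-img create -f qcow2 /vm/guest1.qcow2 20G
qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 \
    -drive file=/vm/guest1.qcow2,if=virtio \
    -cdrom /iso/install.iso -boot d \
    -net nic,model=virtio -net tap \
    -vnc :1

Tools like libvirt and virt-manager essentially just wrap this kind of invocation for you.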

If you want to use FreeBSD technologies, then set up a separate FreeBSD+ZFS box, exporting ZFS filesystems via NFS, and use root-on-NFS in your VMs (network booting).
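As a rough sketch of that setup (pool name, dataset layout, and addresses are examples only, and it assumes the NFS server bits, nfs_server_enable and friends, are already enabled in rc.conf on the storage box):

zfs create tank/vmroots
zfs create tank/vmroots/guest1
zfs set sharenfs="-maproot=root -network 10.0.0.0/24" tank/vmroots/guest1
service mountd reload
# in the guest, mount 10.0.0.1:/tank/vmroots/guest1 as the root filesystem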
 
I'm in total agreement that Xen is an absolute nightmare to work with, as its documentation is abysmal. If you want an example of the pain, try finding, in the Xen documentation, all the different valid models of virtual NICs you can use in the cfg file, e.g. vif = ['model=???']. Or timer_mode=2, which I've had to use. Xen is full of wild goose chases like that.
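For anyone hitting the same wall, this is roughly where those options live in a domU cfg file (the names and values here are just illustrative):

name       = "guest1"
memory     = 1024
vif        = [ 'bridge=xenbr0, model=e1000' ]   # 'model=' only applies to emulated (HVM) NICs
timer_mode = 2                                  # the HVM timer workaround mentioned above
disk       = [ 'phy:/dev/vg0/guest1,xvda,w' ]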

I'd really be interested in your theory about why a paravirtualized Xen guest would be faster than a similarly configured KVM or Virtualbox guest. First, the performance of a Xen paravirt guest and a full-virt guest is actually very similar, so that won't likely make much difference. I'm guessing you're subscribing to the myth of the type 1 hypervisor versus the type 2. If that's true, I encourage you to re-evaluate your assumptions. Modern hypervisors of either type have essentially the same level of abstraction for their guests. The type 1 hypervisor in Xen is a microkernel design which passes a large amount of control off to Dom0, including critical performance factors like IO and network cards. A type 2 hypervisor like KVM or Virtualbox loads into an OS's standard macro-kernel, and the host-level hypervisor uses it to control all the traffic between guest and host hardware. In either scenario, the guests are allowed to execute code directly on the physical CPU, with the exception of ring 0 operations. You can find more information about this:

http://blog.codemonkey.ws/2007/10/myth-of-type-i-and-type-ii-hypervisors.html
http://mrpointy.wordpress.com/2009/05/12/is-kvm-a-type-1-or-a-type-2/
http://twit.tv/floss130

Finally, it's really disappointing to view this site and see a moderator of the official FreeBSD forums advocating a Linux solution for something that FreeBSD can accomplish quite well. I've pointed out these technical and other reasons to you before on your KVM advocacy posts. Please note I'm not disputing the validity or usefulness of KVM; it's a great solution. The problem I have is that FreeBSD also offers a good one, and when people come looking for one you tell them to use Linux. Please help the FreeBSD community gain and maintain users so that our virtualization options can grow and strengthen.
 
phoenix said:
Note how I said "in theory". :)
Yeah, I wasn't trying to insult you or anything; Xen might even be faster in certain areas. My condensed point is that across the hypervisors I've used (vbox, xen, kvm), performance is roughly equal. Each has things it does better. I think in general someone evaluating a hypervisor shouldn't put a huge amount of weight on performance. The line between type 1 and type 2 is much blurrier than it was 4 years ago.

I will also add that the one area where I think Virtualbox is still behind the others is networking speed. Virtio on KVM outperforms Virtualbox's implementation by something around 30%, although Virtualbox's latency is a bit better. Maybe that's because it isn't handling the volume KVM is, though.

Virtualbox 4 has added a ton of new features. One of my favorites makes CPU and IO bandwidth partitioning much easier than the methods traditionally used on Xen/KVM systems. Hopefully it stabilizes and hits the ports tree soon. I can't wait to run some tests with it. Supposedly the GUI is more than marginally useful now too. It's not turned on yet, but something like VBox 4.1 should have the ability to do PCI pass-through as well.
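From what I've seen so far the partitioning is driven through VBoxManage, something along these lines, although the exact option names have been shifting between 4.x releases, so treat this as a sketch rather than gospel (VM, group, and medium names are made up):

VBoxManage bandwidthctl guest1 add diskio --type disk --limit 20M
VBoxManage storageattach guest1 --storagectl SATA --port 0 \
    --type hdd --medium /vm/guest1.vdi --bandwidthgroup diskio
VBoxManage modifyvm guest1 --cpuexecutioncap 50    # cap the VM at roughly 50% of a host CPU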
 
FreeBSD ZFS over NFS

This is a late reply, but this post comes up fairly high in Google when searching for FreeBSD + Xen so I'll add:

ZFS over NFS is not so great without a fairly decent ZFS setup. In my experience you need SSDs for the ZIL, or you have to turn the ZIL off, which is exactly what you don't want to do for VMs accessed via NFS, as it means no data integrity guarantees for the VMs. Your ZFS pools will not get corrupted, but your VMs certainly can, as you run the risk of losing write syncs, which doesn't work well for filesystems that expect sync to only return when the data is safe on stable storage.

For instance, my original low-end setup of 4 GB of RAM, raidz with SATA disks, no SSDs, and the ZIL active is painfully slow and can, depending on the load, turn a virtualized Windows boot into an hour-long process over NFS if you try to start two at the same time. Write performance is absolutely unbearable.

Add SSDs for the ZIL, or turn it off, and things get pretty close to normal. You'll want more than 4 GB of RAM, though.
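Concretely, the two options look something like this (device, pool, and dataset names are examples only, and sync=disabled needs pool v28 or newer; disabling it gives up exactly the integrity guarantees discussed above):

zpool add tank log gpt/slog0          # dedicate an SSD partition as a separate ZIL (SLOG)
# or, trading guest data integrity for speed:
zfs set sync=disabled tank/vmstore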

Do not attempt this on anything less than 9.0-RELEASE (more specifically, the v28 patches if you must use something older), as in my experience the performance will be unbearable regardless of setup, even with the ZIL off.
 