
FreeBSD as host for virtual machines

pez

Member


#1
I'd like to use FreeBSD as the host operating system for my virtual environment.

I intend to host up to 10 virtual Windows and FreeBSD machines, and the host machine has more than 4 GB of RAM.

After a bit of searching, I thought I'd try amd64 and VirtualBox, but before I jump in I'd like to see what others have tried and been satisfied with.

Does anyone have any recommendations for the virtualisation software and which version of FreeBSD to use?
 

phoenix

Administrator
Staff member
Moderator


#2
For pure performance, you'd be better off using a Linux distro with KVM, especially if you have AMD CPUs, as all AMD CPUs come with hardware virtualisation support. With KVM and virtio drivers for Windows and Linux, you'll get much better performance than running VirtualBox. Plus, you get nicer management tools and lighter resource usage on headless setups (i.e. use rdesktop to access Windows VMs).

It's certainly possible to use VirtualBox for this. However, I consider VB to be more of a desktop VM setup, and not a headless VM server setup.
 

Galactic_Dominator

Active Member


#3
VirtualBox is really the only practical choice: qemu is too slow, and jails won't run Windows or give you the control over the VMs that you need.

AMD64 = yes, given your RAM requirements.

VirtualBox is great provided your VMs aren't heavy on network traffic. They are fine under moderate load but, like most virtualization solutions, do poorly under heavy network traffic. Under ideal circumstances I get around 28.6 Mb/s on a Gb link. CPU load in the VM is also relatively high, since the NIC is emulated as well.

If you have 10 VMs with moderate sustained network traffic, I'm not sure even a modern fast quad-core CPU like an i7 would be adequate. If all or some are only accessed sporadically, it would probably be okay.
 

Galactic_Dominator

Active Member


#4
phoenix said:
For pure performance, you'd be better off using a Linux distro with KVM, especially if you have AMD CPUs, as all AMD CPUs come with hardware virtualisation support. With KVM and virtio drivers for Windows and Linux, you'll get much better performance than running VirtualBox.
Speaking from experience, this is not correct. VBox blows the pants off of KVM, especially IO-wise, and it's able to use the same virtio drivers as well, at least for network IO. Not too long ago I specifically migrated a company's VM host to VirtualBox due to KVM's slowness. This was demonstrated in live testing: average system load on the host went from around 3 to just above 1. VirtualBox uses VT extensions, as well as guest additions, which provide important functionality like page fusion for Windows guests.

http://www.virtualbox.org/manual/ch10.html

VBox also has nice web management tools similar to KVM/Proxmox.

The one area where KVM has an edge over VirtualBox is its ability to present a reliable timer to guests. I had a heck of a time getting Asterisk to run properly under VBox. Asterisk under KVM didn't work with default settings either, but it was easier to get working.

Both KVM and VirtualBox are type-2 hypervisors, and given everything involved, wherever you would consider using KVM, VBox is a viable alternative. Of course, FreeBSD isn't appropriate for a type-1 hypervisor, but VBox is certainly good enough for use beyond the desktop. It's stable, fast, and feature-full, so in my book that's production-ready. The opinion of your management may vary.
 

vermaden

Son of Beastie


#6
phoenix said:
For pure performance, you'd be better off using a Linux distro with KVM, especially if you have AMD CPUs, as all AMD CPUs come with hardware virtualisation support. With KVM and virtio drivers for Windows and Linux, you'll get much better performance than running VirtualBox. Plus, you get nicer management tools and lighter resource usage on headless setups (i.e. use rdesktop to access Windows VMs).

It's certainly possible to use VirtualBox for this. However, I consider VB to be more of a desktop VM setup, and not a headless VM server setup.
Do you have any benchmarks for those?

VBox also uses AMD-V (the hardware extension). VBoxManage with VBoxHeadless will launch machines without X11, probably the same as KVM, and with VBoxManage -nologo list runningvms you can list the running machines.
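As a quick sketch of the headless workflow just described (the VM name "winxp" is made up):

```shell
# Launch the VM without X11; it runs in the background, console only
VBoxHeadless --startvm winxp &

# List the machines that are currently running
VBoxManage -nologo list runningvms
```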

There is even a web interface to manage all VBox machines remotely: http://code.google.com/p/phpvirtualbox/

Also, the guest additions do the same job as the paravirtual drivers for KVM.

Do you recommend any specific Linux distribution for that job (a KVM server)? RHEL/CentOS, for example, or something less conservative?
 

phoenix

Administrator
Staff member
Moderator


#7
Galactic_Dominator said:
Speaking from experience, this is not correct. VBox blows the pants off of KVM, especially IO-wise, and it's able to use the same virtio drivers as well, at least for network IO. Not too long ago I specifically migrated a company's VM host to VirtualBox due to KVM's slowness. This was demonstrated in live testing: average system load on the host went from around 3 to just above 1. VirtualBox uses VT extensions, as well as guest additions, which provide important functionality like page fusion for Windows guests.
Which version of VirtualBox against which version of KVM?

Considering how horribly VBox 2.x ran on anything, and how poorly VBox 3.x runs on Windows and Linux, I have a hard time believing that it outperforms any recent versions of KVM (since the switch to 0.x.x versioning).

Yes, KVM in the KVM-72 to KVM-88 days wasn't all that great and had memory leaks in the virtio drivers. But 0.12.x runs a whole lot smoother, especially with virtio. And the vhost stuff coming in 0.13 will be even smoother.

VBox also has nice web management tools similar to KVM/Promox.
I haven't seen any of those. In fact, I haven't seen anything in the way of management tools for VBox other than the client GUI. Pointers much appreciated.

Both KVM and VirtualBox are type-2 hypervisors, and given everything involved, wherever you would consider using KVM, VBox is a viable alternative. Of course, FreeBSD isn't appropriate for a type-1 hypervisor, but VBox is certainly good enough for use beyond the desktop. It's stable, fast, and feature-full, so in my book that's production-ready. The opinion of your management may vary.
I just can't picture a VBox deployment where we have KVM. Considering the resource usage of running a pair of FreeBSD 8.0 VBox VMs on Windows (the Linux version wasn't much better), I'd hate to run it in place of the 12 KVM or Xen VMs we have on each of our VM hosts.

Maybe the latest release is better (3.0.something was the last I tried).

But, when 2100 Zimbra users can hit our Zimbra server for 8 hours a day, without impacting the school district website VM, or the various Windows XP/2003 VMs running on the same host, with only 4 2GHz Opteron cores (2000-series from several years ago) in use, I just can't see VBox being an option.
 

Ralph_Ellis

Member


#8
I can understand using VirtualBox or KVM for the Windows installation but if you are running several FreeBSD virtual machines, would a better option be to use a series of jails to run the programs that you want to isolate?
The jails should allow you to keep the various environments separated.
I know from using OpenSolaris and Solaris that zones and containers have a lot less overhead than installing a virtual machine.
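For what it's worth, a jail of that sort can be brought up with the classic jail(8) invocation; the path, hostname, and address here are made up:

```shell
# Start a jail rooted at /usr/jails/www with its own hostname and IP,
# running the normal rc startup sequence inside it
jail /usr/jails/www www.example.org 192.168.1.10 /bin/sh /etc/rc

# List the jails that are currently running
jls
```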
 

Galactic_Dominator

Active Member


#9
phoenix said:
Which version of VirtualBox against which version of KVM?
3.1.4 vs. 0.12 from Debian Lenny. VBox was the binary version.
phoenix said:
Considering how horribly VBox 2.x ran on anything, and how poorly VBox 3.x runs on Windows and Linux, I have a hard time believing that it outperforms any recent versions of KVM (since the switch to 0.x.x versioning).
I wouldn't classify VBox 2.x's performance as horrible, just subpar in certain areas. In other areas, like IO, VBox has always greatly outperformed KVM/qemu IME.
phoenix said:
I haven't see any of those. In fact, I haven't seen anything in the way of management tools for VBox other than the client GUI. Pointers muchly appreciated.
http://code.google.com/p/phpvirtualbox/
http://code.google.com/p/vboxremote/
There are others as well, all still early in development. phpvirtualbox is the one I've used, and it's a fine solution provided your needs aren't too great. I set it up to listen only on localhost, then SSH/port-forward in for security ;) The normal VBox GUI is not even remotely adequate for remote management. I normally use the VBoxManage CLI for managing things and don't usually have much trouble. There are other frameworks out there for building more complex setups as well, and given the chance I would like to use them more. However, for my purposes the CLI plus some simple scripting is more than adequate.
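The localhost-only setup plus SSH port forwarding amounts to a one-liner; the hostname, user, and ports here are hypothetical:

```shell
# Forward local port 8080 to the phpvirtualbox web UI bound to
# 127.0.0.1:80 on the VM host, then browse to http://localhost:8080/
ssh -L 8080:127.0.0.1:80 admin@vmhost.example.com
```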
phoenix said:
I just can't picture a VBox deployment where we have KVM. Considering the resource usage of running a pair of FreeBSD 8.0 VBox VMs on Windows (the Linux version wasn't much better), I'd hate to run it in place of the 12 KVM or Xen VMs we have on each of our VM hosts.
I think VBox may be slightly heavier than KVM in terms of overhead, but not by much. I also generally like the additional features you get with a VBox VM, like the Samba share, VRDP, guest additions, etc. To me, it's easily worth the trade even if your VM density/memory allocation isn't quite as high. It also depends on what your VMs are, because if you are running Windows guests, page fusion is available. That will save you some RAM and performs better than Linux KSM (VBox 3.1.4 has a bug with page fusion and Win64 guests).
phoenix said:
Maybe the latest release is better (3.0.something was the last I tried).
Maybe; 3.0 was when SMP support was introduced, and various performance issues have been addressed since then. Currently VBox CPU speed is advertised as near-native (given VT-x is present), and I wouldn't disagree.
phoenix said:
But, when 2100 Zimbra users can hit our Zimbra server for 8 hours a day, without impacting the school district website VM, or the various Windows XP/2003 VMs running on the same host, with only 4 2GHz Opteron cores (2000-series from several years ago) in use, I just can't see VBox being an option.
That's pretty impressive; I probably wouldn't have even attempted KVM or VBox on a Zimbra install of that size. I would have assumed there was a great deal of network traffic, and I would have used Xen with PCI passthrough. As for the virtio stuff, disk IO under KVM seems to benefit greatly from it. For net IO, though, there isn't much difference between it and the emulated Intel NIC: nearly the same throughput and CPU load. Plus the virtio drivers are a PITA to install on newer versions of Windows, Server 2008 specifically.
 

phoenix

Administrator
Staff member
Moderator


#10
Thanks for the links, I'll have to check them out. The one downside of all the OSS VM solutions is the lack of good, usable VM management frameworks that don't require Linux on every piece of hardware (why run Linux on the storage boxes when there's ZFS available?), or X installed everywhere, or some arcane XML configuration schema. It seems like everyone with a large VM setup is still using home-grown shell scripts. :(

I also generally like the additional features you get with a VBox VM, like the Samba share, VRDP, guest additions, etc.
Guess it depends on what you are virtualising. None of that is useful for Linux/FreeBSD VMs running headless on a server, but are very useful for running VMs locally on client machines.

That's pretty impressive, I probably wouldn't have even attempted KVM or VBox on a Zimbra install of that size.
I know. It boggles the mind reading through the Zimbra forums about all the massively huge hardware setups people have for Zimbra installs with 1000-1500 accounts (2-3 separate mailbox servers, a separate LDAP server, proxy servers, etc., or massive VMware cluster setups). Either we're doing something *very* wrong in our setup, or we've figured out how to make Zimbra fly. :) Granted, we don't use Zimbra as an SMTP server, nor as an A/V or A/S server (a separate Postfix box does all that), but everything else is enabled in one VM, alongside 8 (sometimes 10) other VMs.

Especially when you consider how horrible our current storage setup is (12x 400 GB SATA harddrives in a single RAID6 array, auto-carved into 2 TB LUNs, then stitched back together on the VM host using LVM, then carved up into logical volumes for each VM). :)

I would have assumed there was a great deal of network traffic
Nope, surprisingly little network traffic. Rarely above 10 Mbps, generally under 1 Mbps. [shrug] Either that, or my SNMP monitoring of network traffic is very wonky. :)

As for the virtio stuff, disk IO under KVM seems to benefit greatly from it.
Only if you're using KVM versions newer than KVM-72. KVM-72 has a nasty memory leak that causes the VM host to run into swap and bog right down. Unfortunately, KVM-72 is what ships with Debian Lenny, and getting anything newer requires a lot of "upgrades" from backports. :) There are also a few other versions along the way to 0.12.x with nasty regressions in virtio. 0.12.x is running nicely, though.

For net IO, though, there isn't much difference between it and the emulated Intel NIC: nearly the same throughput and CPU load. Plus the virtio drivers are a PITA to install on newer versions of Windows, Server 2008 specifically.
For our setup, using virtio net for the Linux VMs has reduced the CPU usage on the host, especially for the 2 Windows XP VMs. I haven't tried the virtio block drivers in XP, nor the virtio net driver in Windows Server 2003. It would be interesting to try PCI passthrough, though, to see what difference it makes.

One thing I do miss from VBox, though, is the private VM-to-VM network setup. That's something I haven't found an easy way to replicate in KVM without pulling in lots of other not-always-faster software networking stuff.
 

Galactic_Dominator

Active Member


#11
phoenix said:
Guess it depends on what you are virtualising. None of that is useful for Linux/FreeBSD VMs running headless on a server, but are very useful for running VMs locally on client machines.
IME it's mostly the opposite. CIFS is just the most universal network FS, and using the shared-folder functionality has made it much easier to create centralized backups, since it rsyncs to the host (that part is kind of redundant), and from there tarsnap operates on the data and creates off-site incremental backups. In some circumstances configuration settings are rolled out via shared folders as well. The guest additions are also a more efficient method of keeping the guest's clock in sync (ntpd doesn't always work well in VMs, plus it has overhead), which is a critical component of most of my setups. I also work on VMs that will have 20k+ open connections, and once in a blue moon the networking stack will crash. It's nice to be able to VRDP in and find out what's going on, what the IP is, etc.
phoenix said:
I know. It boggles the mind reading through the Zimbra forums about all the massively huge hardware setups people have for Zimbra installs with 1000-1500 accounts (2-3 separate mailbox servers, separate ldap server, proxy servers, etc; or massive VMWare cluster setups, etc).
The other forum users' experience with Zimbra is more in line with my own, but then perhaps I'm biased. I've never cared for it, and I think it makes running a mail server as complex and heavy as running MS Exchange. Horde is actually my preference; however, I inherited the current Zimbra install.
phoenix said:
Especially when you consider how horrible our current storage setup is (12x 400 GB SATA harddrives in a single RAID6 array, auto-carved into 2 TB LUNs, then stitched back together on the VM host using LVM, then carved up into logical volumes for each VM). :)
Throw some multipath in there for a good time :e I'm assuming GPT wasn't an option during the setup, so you have to do what you have to do.
phoenix said:
Nope, surprisingly little network traffic. Rarely above 10 Mbps, generally under 1 Mbps. [shrug] Either that, or my SNMP monitoring of network traffic is very wonky. :)
Yeah, that is much smaller than I would have anticipated. My Zimbra install has a much smaller user base(200) and we might average more traffic than that. I know our Windows Servers do quite a bit more, at least one of them.
phoenix said:
Unfortunately, KVM-72 is what ships with Debian Lenny.
The current one in the Lenny repository is 0.12; no backports here. On a side note, I can never understand why so many people like the repository style of packages. It gets stale so quickly that working within those limitations sucks. I understand the principle of using known-good binaries, but it's reasonably easy to create and maintain your own packages out of the FreeBSD ports tree. You can script the whole thing and get the best of both worlds. Argh! I use Zabbix for my monitoring/SNMP/alerting, and you have to use backports for that if you want a supported version. Many other useful utils like pigz aren't available and never will be, in all likelihood.
phoenix said:
For our setup, using virtio net for Linux VMs has reduced the CPU usage on the host, especially for the 2 Windows XP VMs. I haven't tried to use the virtio block drivers in XP, nor the virtio net driver in Windows Server 2003. Would be interesting to try PCI passthrough, though, to see what difference it would make.
Xen 4.0 on Squeeze is what I'm migrating to, since neither KVM nor VBox is adequate network-wise for my VMs. Initial tests are extremely promising.
phoenix said:
One thing I do miss from VBox, though, is the private VM-to-VM network setup. That's something I haven't found an easy way to replicate in KVM without pulling in lots of other not-always-faster software networking stuff.
Maybe you're thinking of the VDE stuff? I haven't used it in a while, but yes, VBox is nice for that too.

One other nice feature I failed to mention with a FreeBSD/VBox setup is the ability to use ZVOLs as the VM backing store. Couple that with iSCSI, HAST, and VBox's teleportation and you have a big-boy VM infrastructure. Sure, there are limitations, but which method doesn't have flaws?
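A minimal sketch of the ZVOL-backed approach (the pool, dataset, and file names are made up): create a volume, then wrap it in a raw-disk VMDK that VirtualBox can attach like any other disk image:

```shell
# Create a 20 GB ZFS volume to back the guest's disk
zfs create -V 20G tank/vm/win2008-disk0

# Wrap the zvol in a raw-disk VMDK so VirtualBox can use it
VBoxManage internalcommands createrawvmdk \
    -filename /vm/win2008.vmdk -rawdisk /dev/zvol/tank/vm/win2008-disk0
```

From there the usual ZFS tooling (snapshots, send/receive) applies to the VM disk for free.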
 

aragon

Daemon


#12
Ralph_Ellis said:
if you are running several FreeBSD virtual machines, would a better option be to use a series of jails to run the programs that you want to isolate?
Probably. Keep both options at your disposal, but I for one would prefer to use jails in most cases. Just wish this could attract some talented dev to finish it. :)
 

Galactic_Dominator

Active Member


#13
aragon said:
Probably. Keep both options at your disposal, but I for one would prefer to use jails in most cases. Just wish this could attract some talented dev to finish it. :)
Actually, this is being developed, and it's getting better:

http://wiki.freebsd.org/Hierarchical_Resource_Limits -- Funded by the FreeBSD Foundation

Anyway, jails are great and I use them when I can, but they are not a replacement for a real hypervisor.
 

phoenix

Administrator
Staff member
Moderator


#14
<snip some excellent discussion on VBox, KVM, and Xen. Thanks!>

Galactic_Dominator said:
One other nice feature I failed to mention with a FreeBSD/VBox setup is the ability to use ZVOLs as the VM backing store. Couple that with iSCSI, HAST, and VBox's teleportation and you have a big-boy VM infrastructure. Sure, there are limitations, but which method doesn't have flaws?
Yeah, that's something (ZFS, HAST, iSCSI) we're looking into for our next build-out: FreeBSD + ZFS + CARP + HAST + iSCSI for the storage layer; Debian, or possibly Ubuntu Server, for the management server, which will also be the diskless-boot host; and Debian Linux for the VM host nodes, using PXE to boot, NFS for /, then running the VMs off iSCSI block devices (no local storage, so the VM hosts become appliances).

The biggest pain is trying to find a VM management setup that doesn't cost a bundle, works with heterogeneous OS layers, and allows us to mix and match KVM, Xen, and VServer/OpenVZ as needed.
 

Galactic_Dominator

Active Member


#15
phoenix said:
The biggest pain is trying to find a VM management setup that doesn't cost a bundle, works with heterogeneous OS layers, and allows us to mix and match KVM, Xen, and VServer/OpenVZ as needed.
Yes, it is a good discussion. I don't have much left to add, except that I used Ganeti for VM/cluster management. It doesn't have a GUI, nor container-virtualization management capabilities. As you mentioned earlier, a lot of this stuff is so Linux-centric it's ridiculous. Ten minutes into the libvirt source, I gave up any thoughts of porting it. This was back when kqemu was still semi-viable, KVM appeared to be coming, and there was talk of Xen Dom0 support as well. Anyway, you probably already know it, but if you're running Linux, something based on libvirt should do what you want.
 

vermaden

Son of Beastie


#16
...about network performance with VirtualBox: you can use VirtIO networking with VirtualBox as well, with drivers from the KVM project site [1], which will lead us to [2]. Here is what the VirtualBox manual says:

The "Paravirtualized network adapter (virtio-net)" is special. If you select this, then VirtualBox does not virtualize common networking hardware (that is supported by common guest operating systems out of the box). Instead, VirtualBox then expects a special software interface for virtualized environments to be provided by the guest, thus avoiding the complexity of emulating networking hardware and improving network performance. Starting with version 3.1, VirtualBox provides support for the industry-standard "virtio" networking drivers, which are part of the open-source KVM project.
[1] http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers
[2] http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/
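Once the virtio drivers from [1]/[2] are installed in the guest, switching the adapter type is a single VBoxManage call (the VM name here is hypothetical; virtio-net requires VirtualBox 3.1 or later):

```shell
# Use the paravirtualized virtio-net adapter for the VM's first NIC
VBoxManage modifyvm winxp --nictype1 virtio
```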
 

bsdgooch

New Member


#17
VPS on FreeBSD - Another Virtualization Option, Coming Soon

Also, as a heads-up (quoting the website):

VPS - Virtual Private Systems for FreeBSD
VPS is a new os-level based virtualization implementation for FreeBSD. It is highly experimental.
Check out the following site for more detail and a video demonstration:

http://www.7he.at/freebsd/vps/

The author, Klaus P. Ohrhallinger, will be presenting this work at EuroBSDCon 2010. View the presentation information and download the paper:

http://2010.eurobsdcon.org/presenta...tx_ptconfmgm_controller_detail_paper[pid]=299
 

S3TH76

Member



#18
Hi, I have some questions to ask about virtualization on FreeBSD, if there is anyone who could answer me.

I have a dual-core server @ 3.4 GHz, 8 GB RAM, 256 GB HDD, OS: FreeBSD 10.1

...and I want to install several virtual machines with Win7, Win Server 2008 R2, and Win Server 2012 R2, each about 20-30 GB, on my FreeBSD server to give users access to services hosted on those virtual machines. How can I do that? What solution for virtualization do you recommend?

I read every posted article I could find about virtualization until now: http://www.virtualbox.org/manual/ch10.html - but didn't find any references to FreeBSD as a host OS with Windows as a guest OS, nor any steps to do that.

On https://www.freebsd.org/doc/handbook/virtualization.html it says more about how to create a VM under FreeBSD (I chose the bhyve solution as a short exercise), but not about how to create a VM with a Windows guest OS.
For example: for a Windows guest OS, what kind of VM image does bhyve accept, and what extension must it have? Does it accept ISOs? How must it be loaded, and how can it be loaded?

Any help and suggestions are appreciated.
 

PacketMan

Aspiring Daemon


#20
Hi, I have some questions to ask about virtualization on FreeBSD, if there is anyone who could answer me.

.......

I read every posted article I could find about virtualization until now: http://www.virtualbox.org/manual/ch10.html - but didn't find any references to FreeBSD as a host OS with Windows as a guest OS, nor any steps to do that.

Any help and suggestions are appreciated.
I haven't tried bhyve(8) yet. I did follow this guide and got VirtualBox up and running:
https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/virtualization-host-virtualbox.html
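The host-side setup in that guide boils down to a few commands after installing the port (the username below is a placeholder):

```shell
# Load the VirtualBox kernel module now, and at every boot
kldload vboxdrv
echo 'vboxdrv_load="YES"' >> /boot/loader.conf

# Allow your user to run VirtualBox
pw groupmod vboxusers -m youruser
```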

I've since installed FreeBSD and Ubuntu Linux server in a few VM instances. It was all a pretty easy experience. Whether I did it right or not is yet to be determined. :p

I haven't had much time to do much more since, but my next steps are to build a virtual Juniper network inside the Linux server instances. Assuming that all goes well, I will then try to build an overlay SDN deployment on top of it.