Galactic_Dominator said:
Thanks for the links, I'll have to check them out. The one downside of all the OSS VM solutions is the lack of good, usable VM management frameworks that don't require Linux on every piece of hardware (why run Linux on the storage boxes when there's ZFS available?) or X installed everywhere or some arcane XML configuration schema. Seems like everyone with large VM setups is still using home-grown shell scripts.
I generally also like the additional features you get from VBox, like the Samba share, VRDP, guest additions, etc.
Guess it depends on what you are virtualising. None of that is useful for Linux/FreeBSD VMs running headless on a server, but it's all very useful for running VMs locally on client machines.
That's pretty impressive, I probably wouldn't have even attempted KVM or VBox on a Zimbra install of that size.
I know. It boggles the mind reading through the Zimbra forums about all the massively huge hardware setups people have for Zimbra installs with 1000-1500 accounts (2-3 separate mailbox servers, separate ldap server, proxy servers, etc; or massive VMWare cluster setups, etc). Either we're doing something *very* wrong on our setup, or we've figured out how to make Zimbra fly.
Granted, we don't use Zimbra as an SMTP server, nor as an A/V or A/S server (separate Postfix box for all that), but everything else is enabled in one VM, alongside 8 (sometimes 10) other VMs.
Especially when you consider how horrible our current storage setup is (12x 400 GB SATA hard drives in a single RAID6 array, auto-carved into 2 TB LUNs, then stitched back together on the VM host using LVM, then carved up into logical volumes for each VM).
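For anyone curious what that "stitched back together" step looks like, it's just standard LVM: each LUN becomes a physical volume, the PVs go into one volume group, and per-VM logical volumes get carved out of that. A minimal sketch; the device names, VG name, and sizes here are made up, not our actual layout:

```shell
# The 2 TB LUNs show up on the host as block devices (hypothetical names):
pvcreate /dev/sdb /dev/sdc /dev/sdd        # initialize each LUN as an LVM physical volume
vgcreate vmstore /dev/sdb /dev/sdc /dev/sdd # stitch them back into one big volume group
lvcreate -L 100G -n zimbra-disk vmstore     # carve out one logical volume per VM
lvcreate -L 20G  -n winxp1-disk vmstore
```

Each LV is then handed to its VM as a raw disk. Not pretty, but it works.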
I would have assumed there was a great deal of network traffic.
Nope, surprisingly little network traffic. Rarely above 10 Mbps, generally under 1 Mbps. [shrug] Either that, or my SNMP monitoring of network traffic is very wonky.
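One way to sanity-check wonky SNMP traffic graphs is to poll the raw IF-MIB octet counters by hand and do the math yourself. Sketch below; the hostname, community string, and interface index are made up:

```shell
# ifInOctets is a cumulative byte counter; throughput is the delta between two polls.
snmpget -v2c -c public vmhost1 IF-MIB::ifInOctets.2
sleep 60
snmpget -v2c -c public vmhost1 IF-MIB::ifInOctets.2
# Mbit/s = (second - first) * 8 / 60 / 1000000
# Note: the 32-bit counter wraps in ~35 s on a saturated gigabit link,
# so the 64-bit IF-MIB::ifHCInOctets is safer if the agent supports it.
```

If the hand calculation agrees with the graphs, the monitoring is probably fine and the traffic really is that low.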
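And the counter arithmetic itself, in case anyone wants to script the check instead of doing it by hand. The function name and the sample numbers are made up for illustration; the only real subtlety is handling a counter wrap between polls:

```python
def mbps(octets_then, octets_now, interval_s, counter_bits=32):
    """Average throughput in Mbit/s between two SNMP octet-counter samples,
    tolerating a single wrap of the (32-bit by default) counter."""
    delta = octets_now - octets_then
    if delta < 0:                      # counter wrapped between the two polls
        delta += 2 ** counter_bits
    return delta * 8 / interval_s / 1_000_000

# Two polls 60 s apart, 7.5 MB transferred in between -> 1 Mbit/s:
print(round(mbps(1_000_000, 8_500_000, 60), 2))  # -> 1.0
```

If the interval is long enough for the counter to wrap more than once, no amount of arithmetic saves you; poll faster or use the 64-bit counters.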
As far as the virtio stuff goes, disk IO under KVM seems to benefit greatly from it.
Only if using KVM versions newer than KVM-72. KVM-72 has a nasty memory leak that causes the VM host to run into swap and bog right down. Unfortunately, KVM-72 is what ships with Debian Lenny. And getting anything newer requires a lot of "upgrades" from backports.
And there are a few other versions along the way to 0.12.x that have other nasty regressions in virtio. 0.12.x is running nicely, though.
Net IO though, there isn't too much difference between virtio and the emulated Intel NIC. Nearly the same throughput and CPU load. Plus the virtio drivers are a PITA to install on newer versions of Windows, Server 2008 specifically.
For our setup, using virtio net has reduced the CPU usage on the host for the Linux VMs, and especially for the 2 Windows XP VMs. I haven't tried the virtio block drivers in XP, nor the virtio net driver in Windows Server 2003. Would be interesting to try PCI passthrough, though, to see what difference it would make.
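For reference, the virtio-vs-emulated comparison above comes down to a single flag on the KVM command line (0.12.x-era syntax; the file paths and tap names are made up):

```shell
# Same VM, emulated Intel NIC (works out of the box in most guests):
kvm -m 1024 -drive file=/vm/lin01.img,if=virtio \
    -net nic,model=e1000 -net tap,ifname=tap0,script=no

# Same VM, virtio NIC instead -- guest needs the virtio_net driver
# (built into any recent Linux kernel; a separate install on Windows):
kvm -m 1024 -drive file=/vm/lin01.img,if=virtio \
    -net nic,model=virtio -net tap,ifname=tap0,script=no
```

So it's cheap to A/B test on a single guest and watch host CPU with top while pushing traffic through it.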
One thing I do miss from VBox, though, is the private VM-to-VM network setup. That's something I haven't found an easy way to replicate in KVM without pulling in lots of other not-always-faster software networking stuff.
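The closest thing I've found to VBox's internal network is a host-only bridge with no physical NIC attached: only the VMs' tap devices are added to it, so traffic never leaves the host. A sketch using brctl/tunctl; the bridge, tap, and user names are made up:

```shell
# Private bridge -- note we never addif a physical interface like eth0,
# so this network is VM-to-VM only.
brctl addbr vmnet0
ip link set vmnet0 up

# One tap per VM, owned by the user the kvm processes run as:
tunctl -t tap-vm1 -u kvmuser
tunctl -t tap-vm2 -u kvmuser
brctl addif vmnet0 tap-vm1
brctl addif vmnet0 tap-vm2
ip link set tap-vm1 up
ip link set tap-vm2 up

# Then each VM gets "-net nic -net tap,ifname=tap-vmN,script=no".
```

It's more legwork than VBox's one checkbox, but it's all standard kernel bridging with no extra software switch in the path.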