FreeBSD 9.0-RELEASE on KVM: horrible IO perf, VirtualBox OK

Hi folks,

I am a super-novice FreeBSD user; in fact, I started reading the Handbook just yesterday, so please bear with me. I want to use FreeBSD 9 as a guest OS in our RHEL-based KVM cluster as one of the build slaves, but first I would like to test how it generally works on KVM and get more familiar with it.

I have a laptop running Ubuntu Maverick with qemu/kvm 0.12.5 and a SATA drive; the VM has 2 x 8 GB raw disk images and 4096 MB of RAM. I installed a FreeBSD guest following the ZFS mirror root tutorial, and the full installation took about 2 hours.

I found this worrying and ran simple IO tests with dd. iostat inside the VM shows 0.5 MB/s write performance (and so does zpool iostat -v). So I installed the virtio-kmod drivers inside the VM and replaced the devices; they did show up as vtbd0/1, but the performance didn't improve.
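The test was essentially just a linear write inside the guest, watched from a second terminal, along these lines (exact file name and sizes are from memory):

Code:
# inside the FreeBSD guest: sequential write test
dd if=/dev/zero of=/root/ddtest bs=1m count=1024
# in a second terminal, watch the pool throughput
zpool iostat -v 5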

For comparison, I installed the latest VirtualBox 4, converted the very same images to VDI, created an otherwise comparable VM and started it. The disks came up as ada0/1, and under the same conditions the write performance was at least 50 MB/s (!).

Linux guests show similarly good write performance under KVM, so there is definitely some problem specific to FreeBSD. I can't use VirtualBox in production, so I'd really like to know what's up with KVM...

Could anybody please share their experiences of running the latest FreeBSD under KVM, or otherwise help me track down the performance problem?

Thanks!
 
Hi,

The only thing that I can tell you for certain is that the copy-on-write (COW) nature of ZFS really requires you to allocate the full disk space to the guest OS; otherwise your performance will be very poor.
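For example, you can fully allocate a raw image on the Linux host up front, something along these lines (guest-disk.raw is just a placeholder name, and 8G matches the images mentioned above):

Code:
# create a fully allocated 8G raw image instead of a sparse one (overwrites the target!)
fallocate -l 8G /srv/kvm/guest-disk.raw
# or, slower but guaranteed to write every block:
dd if=/dev/zero of=/srv/kvm/guest-disk.raw bs=1M count=8192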

ZFS would also not be my first choice for a FreeBSD guest under any virtualization platform. You might want to try UFS2+J, which is the default in FreeBSD 9.0-RELEASE.

I don't have experience with KVM production servers, but I do use virtualization in my lab, mainly for testing upgrades before going to production. I have seen FreeBSD 9 perform better under Linux KVM (Ubuntu 11.10) than under VMware ESXi.

Best Regards,
George
 
Hi George,

Thank you for your speedy reply; by the way, if I'm not mistaken by the nick, I used your tutorial, so thanks for that too!

You are right, the ZFS setup is certainly suboptimal. What I wanted to achieve was not only to have a FreeBSD system to play with and see how I can set it up as a build slave, but also to try the bootable ZFS root feature, in order to evaluate FreeBSD for our next storage server outside of the virtualization context (since thanks to Oracle we can no longer afford Solaris even on Sun hardware).

That said, I first tried QCOW2 images with KVM, and then converted them to raw images, which I assumed would be pre-allocated; this didn't make any difference:

Code:
sudo qemu-img convert -O raw /srv/kvm/freebsd-zfs-test1.img /srv/kvm/freebsd-zfs-test1.raw
sudo qemu-img convert -O raw /srv/kvm/freebsd-zfs-test2.img /srv/kvm/freebsd-zfs-test2.raw

At the same time, I get literally 100 times better performance with VirtualBox VDI (differential) images, which, if I am not mistaken, are also based on a COW technique:

Code:
sudo VBoxManage convertdd /srv/kvm/freebsd-zfs-test1.raw /srv/kvm/freebsd-zfs-test1.vdi
sudo VBoxManage modifyhd /srv/kvm/freebsd-zfs-test1.vdi --compact

So I am quite sure there is more to it than COW issues with ZFS. Of course for the final build slave installation I will follow your advice to use UFS2+J.

Could you please tell me what kind of IO performance you got on linear dd writes under Ubuntu? Did you use virtio-kmod or just regular IDE disks?

I wonder if this has to do with my Ubuntu being 2 years old, but then again, RHEL 6 ships KVM 0.12 if I am not mistaken, and who knows whether they have backported whatever FreeBSD IO performance patches exist...
 
Thanks, you probably did use my guide!

I am under the impression that converting with -O raw doesn't actually preallocate the whole disk, even though it looks like it does; qemu-img info will show you that. It just makes disk access faster for the guest later on.
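For instance, you can compare the virtual size with what is actually allocated on the host (using the file names from your convert commands):

Code:
qemu-img info /srv/kvm/freebsd-zfs-test1.raw
# compare the apparent size with the blocks actually used on the host filesystem
ls -lh /srv/kvm/freebsd-zfs-test1.raw
du -h /srv/kvm/freebsd-zfs-test1.raw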

On my Ubuntu server I get around 35-40 MB/s on dd writes (full ZFS install) with plain IDE drives.

However, since I am just a newbie Linux KVM user, I prefer to use tools like virt-manager to create and manage the virtual machines!

There is a nice article here that you might find interesting regarding RHEL and KVM.

Regarding VirtualBox, I am afraid my experience is very limited. I just use it on my desktops (FreeBSD & Mac) for quick tests.
 
VirtualBox defaults to an AHCI-based SATA controller, which will be much faster than the IDE/non-AHCI SATA controller that KVM defaults to. AFAIK, there's no support for AHCI in KVM.

But, you can use SCSI-based disk controllers in KVM, which will be much faster than the IDE-based disk controllers. That's what we used with our FreeBSD guests and got good performance out of them.

Same for networking. VirtualBox will default to e1000 (em(4)) while KVM will default to <forget the name> (lnc(4)). If you configure the KVM VM to use e1000, things will be much faster.
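Roughly speaking, on the qemu-kvm command line that corresponds to something like the following (the disk path is a placeholder; with libvirt/virt-manager you'd pick the SCSI bus and the e1000 model in the VM's hardware settings instead):

Code:
# emulated SCSI disk controller and e1000 NIC instead of the IDE + PCnet defaults
qemu-kvm -m 2048 \
    -drive file=/path/to/guest-disk.raw,if=scsi \
    -net nic,model=e1000 -net user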

I don't know anything about the current state of the virtio drivers for FreeBSD. Last time I checked (a while ago now) they were barely usable, and the emulated e1000 and SCSI drivers were much faster.
 
gkontos said:
On my Ubuntu server I get around 35-40 MB/s on dd writes (full ZFS install) with plain IDE drives.

Yes, this is what I would expect too, but 0.5 MB/s is plain shocking.

There is a nice article here that you might find interesting regarding RHEL and KVM.

The article is very interesting. I did some tests setting cache='none' on the disk driver, and with virtio-blk I now get ~2 MB/s, which is indeed an improvement. Still, Linux guests are 20 times faster on linear writes.
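For reference, I set it in the libvirt domain XML (the cache attribute on the disk <driver> element); the rough qemu-kvm command-line equivalent would be something like:

Code:
# virtio disks with host caching disabled (paths as in my earlier posts)
qemu-kvm -m 4096 \
    -drive file=/srv/kvm/freebsd-zfs-test1.raw,if=virtio,cache=none \
    -drive file=/srv/kvm/freebsd-zfs-test2.raw,if=virtio,cache=none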

phoenix said:
But, you can use SCSI-based disk controllers in KVM, which will be much faster than the IDE-based disk controllers.

This didn't seem to make any difference, unfortunately; the performance still sucked. What I also did was try putting the images on tmpfs and booting from there. In this scenario the system performed amazingly well, showing hundreds of megabytes of throughput per second, irrespective of whether I was using IDE, SCSI or virtio. So apparently the problem is in how the host software interacts with real storage...
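The tmpfs experiment was essentially just this on the host (mount point and size are arbitrary; the copies stay sparse, so they don't need the full 2 x 8 GB of RAM):

Code:
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=16G tmpfs /mnt/ramdisk
sudo cp --sparse=always /srv/kvm/freebsd-zfs-test*.raw /mnt/ramdisk/
# then point the VM's disk definitions at the copies under /mnt/ramdisk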

Ok, this all seems to be quite strange to me. If you are saying that you were able to achieve decent throughput with KVM, my next bet is that there is something wrong with the version 0.12 of KVM shipped with Ubuntu.

Next week I will try the same virtual machine on another machine that runs the absolute latest stable version of Ubuntu and see if it works any better. Then I can also try to run this image without network on the production RHEL machines and see if its version of KVM 0.12 performs any better.
 
zaytsev said:
Ok, this all seems to be quite strange to me. If you are saying that you were able to achieve decent throughput with KVM, my next bet is that there is something wrong with the version 0.12 of KVM shipped with Ubuntu.

KVM 0.12 is ancient. You want at least 0.14 or newer, preferably 1.0 if you can install it on your version of Ubuntu. There were many performance issues in 0.12 and earlier.
 
phoenix said:
I don't know anything about the current state of the virtio drivers for FreeBSD. Last time I checked (a while ago now) they were barely usable, and the emulated e1000 and SCSI drivers were much faster.

I haven't tested them myself but there's a port for them now: emulators/virtio-kmod
 
Sorry for the slight necro; this was the top result in all my googling for this problem. My workaround for using FreeBSD 9 on KVM is to get the install done to my satisfaction on VirtualBox first. Convert the VDI to RAW and then to QCOW2, import the QCOW2 into a KVM guest, and then use that base install to try to get virtio running. (This part might be best done in VirtualBox as well.)
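The conversions themselves are just the following (file names are placeholders):

Code:
# VirtualBox VDI -> raw
VBoxManage clonehd freebsd9.vdi freebsd9.raw --format RAW
# raw -> qcow2 for KVM
qemu-img convert -f raw -O qcow2 freebsd9.raw freebsd9.qcow2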

Nothing I tried got around the bad IO on KVM; even AHCI only sometimes worked relatively quickly. So this is my workaround until virtio is available at install time or AHCI support matures (if it already has, it doesn't work for FreeBSD :S).

Anyway, this is just for those who find this page on Google. Again, sorry for the necro :).

Maq
 
A further update to my last post: it seems the combination of KVM, FreeBSD and an image file is the issue with disk IO. Both qcow2 and raw files are slow regardless of which of the 4 controller types you use; even with virtio I can't write more than 1 MB/s on average.

However, I use FreeBSD 9 to access several physical drives in a raidz, and performance on those isn't bad at all. I'm getting 30 MB/s over SCP using virtio as the controller; I don't have figures for native access, though, so I'm not sure how much better it could be.

So, if possible, use LVM on the Linux host and let KVM talk to the bare metal for write performance; avoid image files for now, especially qcow2. On a side note, I'm using UFS on the image file.
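Concretely, that means creating a logical volume for the guest and handing KVM the block device instead of an image file, something like this (volume group name and size are placeholders; with libvirt you'd define a block-type disk pointing at the same device):

Code:
# on the Linux host: carve out a logical volume for the guest
sudo lvcreate -L 20G -n freebsd9 vg0
# then use the block device directly as the guest's disk, e.g.
sudo qemu-kvm -m 2048 -drive file=/dev/vg0/freebsd9,if=virtio,cache=none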
 
I haven't done any benchmarking, but I have noticed that native IDE drives on raw images work much faster than any other combination I have tried.
 
I also have this problem. I tried all of the above with no luck, including LVM, and I also tried it under CentOS 6 with the latest KVM and got the same behavior. I had the same problem with FreeBSD 7 and 8 and with pfSense. Has anyone solved this problem?
 
I'm sure the virtio driver is still classed as beta in FreeBSD, but I'm not happy with its write performance.

At the moment I've settled on SCSI with caching disabled for the raidz I have. Write performance is consistent at about 30 MB/s, and the host's load doesn't change much. With AHCI/SATA/IDE/virtio the load average would increase gradually with all the I/O waiting.

Just in case it helps, my host is running Gentoo with qemu-kvm version 1. The CentOS 6 KVM is version 0.12, I believe? Very out of date, as mentioned.
 
Thank you maquis196, you are the man. I finally got steady write I/O of 96 MB/s to an LVM device. Funny that it isn't written in any documentation.
 