bhyve I/O performance

I'm running bhyve VMs that show very poor I/O performance, and I'm wondering what factors might be causing this?

A couple of details:
  1. I'm testing with a simple dd if=/dev/random of=test bs=1M status=progress - yes, that's not very scientific, but since I'm seeing I/O performance many factors worse than expected, I assume bonnie or other tools won't show any better speeds either.
  2. host is 12.3-RELEASE-p5 with ample RAM; ZFS raidz2 with around 250MB/s I/O; disks are 4k native and aligned correctly (ashift=12); I/O on the host is consistently fine, with none of the lags that appear on the VM side
  3. guest is 13.1-RELEASE, limited to 1G RAM to simplify testing so I'm looking at "disk" activity rather than cache performance; the guest has two disks, one OS disk and one 8GB test disk; both are backed by ZFS zvols and formatted with UFS in the guest. The OS disk obviously uses GPT, the test disk is plain UFS.
  4. I've tried the different storage device emulations: virtio-blk, ahci-hd and nvme - I'm getting between 35MB/s (ahci) and up to 60MB/s for nvme; the slot configurations are sketched after this list
  5. I've tried UEFI vs. non-UEFI boot, and tried the /usr/share/examples/bhyve/vmrun.sh script instead of my own, but saw no improvement there
  6. Watching I/O performance on the host via zpool iostat 1, I'm seeing "breaks" where the host kind of idles out while the guest is doing its writes - but I'm not sure what to make of it.
  7. I attempted to change the sectorsize with virtio-blk, i.e. I set it to 4096 to reflect the 4k block size. The result was that the VM became more responsive, but I/O got slower
  8. when running dd in the guest, I'm seeing "waiting" times, i.e. dd shows its timer at 2s, then sits there, and about 30s later it jumps to 32s, sits again, and updates again after, say, another 6 seconds. It's as if I/O is going out to the host in "bursts"?
I'm willing to post a full bhyve command as a sample; since I've tried different variations and ways to start the test VM, I figured it probably wouldn't add much beyond overloading the post...
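For reference, the storage slots I've been cycling through look roughly like this - just a stripped-down sketch, where the zvol paths, slot numbers and VM name are placeholders:
Code:
# zvol paths, slot numbers and the VM name below are placeholders
bhyve -c 2 -m 1G -A -H -P \
  -s 0,hostbridge \
  -s 3,virtio-blk,/dev/zvol/tank/vm/os \
  -s 4,virtio-blk,/dev/zvol/tank/vm/test,sectorsize=4096 \
  -s 31,lpc -l com1,stdio \
  testvm

For the other runs, slot 4 uses ahci-hd or nvme instead of virtio-blk, with and without the sectorsize option.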

I'd expected to get at least 100MB/s, but I'm hitting a ceiling at about 25% of host performance. What's a reference value I should be able to expect?

Anyone got any ideas what I can tweak to improve the I/O performance? Short of ripping out the backing ZFS storage, obviously.
 
Hi,
Code:
6. Watching I/O performance on the host via zpool iostat 1, I'm seeing "breaks" where the host kind of idles out while the guest is doing its writes - but I'm not sure what to make of it.

I think these "breaks" are normal; when I run the same test on my bare-metal servers and machines, they do the same.

Code:
8. when running dd in the guest, I'm seeing "waiting" times, i.e. dd shows its timer at 2s, then sits there, and about 30s later it jumps to 32s, sits again, and updates again after, say, another 6 seconds. It's as if I/O is going out to the host in "bursts"?

Yes, this looks like an I/O bottleneck.
 
Did you try switching from virtio to nvme? In my case, performance improved with nvme. An additional advantage is that you do not need the virtio drivers in the VM.
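It's a one-line change in the bhyve invocation, roughly like this (the slot number and zvol path are just placeholders):
Code:
# slot number and zvol path are placeholders
-s 4,nvme,/dev/zvol/tank/vm/test

In a FreeBSD guest the disk then shows up as nvd0 instead of vtbd0, so no virtio block driver is needed.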
 