bhyve Windows 10 is unacceptably slow after 13.1 → 14.0 upgrade

My Windows 10 guest had been running perfectly for years, but now it's damn slow. E.g. starting Outlook takes several minutes, while from Windows' perspective CPU usage doesn't exceed 4-5%.
The FreeBSD host itself doesn't exhibit any issues (Xeon E5-1650 with 32GB of RAM).

To rule out a coincidental Windows failure I tried a different Windows 10 image and got the same behavior. A Debian guest runs fine, for what it's worth.

Thanks for any ideas!
 
The release notes for FreeBSD 14 list a few changes for bhyve(8). I don't know much about bhyve, but did any of its configuration files change after the upgrade? Have you used any tools from within the FreeBSD host to monitor bhyve, or done any monitoring at the network layer? Outlook probably makes a lot of requests over the network when it first starts up; is it fine once it's started? Are applications that don't use the network (if there are any!) equally affected?
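For example, something like this from the host (just a sketch; the tap0 name is a guess for whatever interface your VM script attaches the guest to):
Code:
# watch the bhyve vCPU threads from the host
top -SH

# watch guest traffic at the network layer
tcpdump -ni tap0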
 
...a few changes for bhyve(8)
...did any of its configuration files change after the upgrade?
...Outlook probably makes a lot of requests over the network when it first starts up
Well, I don't use the features affected by those changes. And bhyve(8) doesn't have configuration files.
Outlook was just an obvious example. Everything is unbearably slow: opening This PC or Control Panel, launching cmd.
 
So you're thinking that the version of bhyve that comes with 14.0-RELEASE doesn't like your win10 image for some reason?

Yeah, and bhyve has been part of base since 10.0-RELEASE... and the bhyve Handbook chapter mentions that it likes ZFS... Did bhyve not like ZFS before?
 
This box has always been running on ZFS, starting from 11.0. I've always had a Windows guest in bhyve(8): first Win7, then Win10. They performed perfectly, including the USB PCI controller I pass through. I tried disabling the PPT device; it didn't help. Now I'm going to install Windows 11 from scratch.
 
I recently upgraded from 13.2 to 14.0 (root on ZFS) and didn't see any slowdown for a Windows 10 VM. Can you share the command line you use to launch it? I'll compare it to mine if that can help...
 
What storage type are you using with those VMs? virtio or nvme?
I've had flaky and heavily fluctuating performance with Windows on bhyve with virtio storage in the past (on FreeBSD and illumos/SmartOS), with some VMs to the point where it varied with every reboot whether the VM performed normally or was unbearably slow. This wasn't an issue with any other guest OS, but on Windows this was completely gone when switching to nvme.
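For reference, an NVMe-backed disk slot in the bhyve command line looks roughly like this (the slot number and image path are placeholders):
Code:
-s 3,nvme,/path/to/disk.img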

(disclaimer: I'm not running 14.0-R in production, and there's no Windows VM on the single host that does run 14.0-R, so YMMV)
 
Can you share the command line you use to launch it?
Code:
bhyve \
      -c 4,sockets=2,cores=2 -S \
      -s 0,hostbridge \
      -s 3,ahci-hd,$HD0,sectorsize=512 \
      -s 4,ahci-hd,$HD1,sectorsize=512 \
      -s 5,fbuf,tcp=0.0.0.0:5903,$DPY \
      -s 6,xhci,tablet \
      -s 10,virtio-net,$IF,$MAC \
      -s 31,lpc \
      -l bootrom,$UEFI \
      -m $MEM -H -w \
      $VM
What storage type are you using with those VMs? virtio or nvme?
...on windows this was completely gone when switching to nvme.
As you can see above, I'm using ahci (I haven't changed that script in many years).
I'll try nvme, good idea, thanks!
 
Note that your Windows might fail to boot if you change the disk controller. Windows only enables the right driver at install time. Changing the disk controller afterwards would mean it has to use a driver that might be disabled, and you'll end up with a big blue STOP error. So enable the correct driver in the Windows registry before changing the controller.
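One commonly cited way to do that for the in-box NVMe driver (stornvme) is to mark it as boot-start from an elevated prompt inside the guest before switching the controller, roughly:
Code:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\stornvme" /v Start /t REG_DWORD /d 0 /f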
 
Meanwhile I installed Windows 11 from scratch and am using the same script to launch it: it seems to be working perfectly.

Changing the disk controller afterwards would mean it has to use a driver that might be disabled.
Is it possible that I'm already in this situation due to some implementation changes?
 
I'm also using ahci-hd, IIRC. Options -A and maybe -P are set. I'll look at that this evening. Have you tried adding the -A option?
 
You could also attach another (dummy) disk via nvme, boot the VM, and let Windows install/enable the appropriate driver. After that you should be able to switch to nvme for the other disk(s) without Windows blowing up at the next boot.
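Roughly like this (the image path and slot number are arbitrary):
Code:
# on the host: create a small throwaway disk image
truncate -s 1G /vm/dummy-nvme.img
# then add an extra slot to the bhyve command line for one boot, e.g.
#   -s 7,nvme,/vm/dummy-nvme.img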
 
Based on SirDice's comments, I'm beginning to think that bhyve in 14.0-RELEASE has some different defaults than it had in 13.1-RELEASE, and the existing Win10 image did not like that when it got moved into its updated digs. Kind of like discovering that the bathroom light switch is on the outside of the bathroom when I'm used to it being inside...
 
You could also attach another (dummy) disk via nvme, boot the VM, and let Windows install/enable the appropriate driver.
Not sure if this would also enable the driver at boot, but I guess it's worth a shot.

Just keep in mind that you might run into a STOP INACCESSIBLE_BOOT_DEVICE error if you change the controller type after the initial installation.
 
I can confirm that the -P option is enabled in my Windows 10 VM. But if I believe my tests, that's not relevant to your problem.

I have to say I had a funny moment when I tried to remove the -P option: it corrupted the BHYVE_UEFI_VARS.fd file, and after that the VM was unable to start (error 134). That said, it doesn't seem to be reproducible.
 
Changing ahci to nvme doesn't help. Both Windows 10 and 11 booted without issue after the change.
Also, I reported earlier that a fresh Windows 11 runs fast; maybe that was just a first impression, because now it exhibits exactly the same behavior as Windows 10. E.g. it takes ≈10 seconds to display the output of ipconfig after hitting Enter...
 
E.g. it takes ≈10 seconds to display the output of ipconfig after hitting Enter...
That might point to the network card driver, not the storage controller. Does this happen with other network-based commands too?
 
Address Space Layout Randomization was enabled by default in 13.2, I believe. If you're going from 13.1 to 14.0, then perhaps ASLR is problematic with bhyve/Windows?
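If you want to rule it out, the knob can be checked and toggled on the host (64-bit knob shown; it only takes effect for newly started processes, e.g. a restarted bhyve):
Code:
sysctl kern.elf64.aslr.enable      # 1 = enabled
sysctl kern.elf64.aslr.enable=0    # disable until reboot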
 
perhaps ASLR is problematic with bhyve/Windows?
I disabled ASLR, but it didn't improve anything...

That might point to the network card driver, not the storage controller. Does this happen with other network-based commands too?
I ran iperf3: it's 5 Gb/s in one direction and 4 Gb/s in the other. Also, it's not just ipconfig; sometimes even dir takes 10-20 seconds to display output.

I removed the -H flag from bhyve's command line (man page: "Yield the virtual CPU thread when a HLT instruction is detected"), and got bhyve consuming 400% of CPU (with 4 cores for the VM), while Windows showed only 2-4% CPU utilization.

I just installed a fresh 14.0 on a USB SSD in the same box, and it has no issues running the same Win10 and Win11 images in bhyve(8)!
So something must have happened during the upgrade... I'm not sure how to find the root cause, though.
 
I just installed a fresh 14.0 on a USB SSD in the same box, and it has no issues running the same Win10 and Win11 images in bhyve(8)!
So something must have happened during the upgrade... I'm not sure how to find the root cause, though.

Have you tried running freebsd-update IDS on the system you upgraded to see what files from base differ from their release versions?
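For example (the output path is arbitrary):
Code:
freebsd-update IDS > /tmp/ids.out 2>&1
less /tmp/ids.out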
 