bhyve Windows Server with virtio disk in bhyve does not work

Hi

As of FreeBSD 13 I have (at last :D) moved my personal virtualisation server from CentOS/KVM over to FreeBSD/bhyve. Works like a charm for all VMs, no matter if they use grub, UEFI or bhyveload. Performance, VLANs and everything else work perfectly as well. FreeBSD 13 performance is *r e a l l y* great. Good job with FreeBSD 13, bhyve etc. Thank you developers!!!

I have however run into a problem that others may have noted too.

I have a Windows Server 2019 VM that caused a problem during the migration over to bhyve. It is OK and usable now, but not optimal. I can live with it, but want to ask the community...

When I used KVM on CentOS 8:
I used the virtio network driver and the virtio disk driver (for the boot disk as well).

I had to add a second disk to W2019 and then install the virtio disk driver. After that I could shut down the VM, switch the boot disk over to virtio, boot, and it worked permanently.
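
For reference, the dummy-disk trick on the KVM side looked roughly like this (domain name and image path are only examples, not my exact setup):

qemu-img create -f raw /var/lib/libvirt/images/dummy.img 1G
virsh attach-disk W2019 /var/lib/libvirt/images/dummy.img vdb --targetbus virtio --persistent
# boot the guest, install the virtio storage driver from the virtio-win ISO,
# shut down, switch the boot disk's bus to virtio and detach the dummy disk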



After going over to bhyve on FreeBSD 13.0-RELEASE-p2:
The W2019 VM did not boot, but crashed immediately. I had to switch from the originally configured disk0_type="virtio-blk" to disk0_type="ahci-hd" and go through a recovery process with Windows 2019. After that, W2019 worked OK with the virtio network driver but with no disk on virtio.
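
In vm-bhyve terms the workaround was just flipping the disk type in the guest config, roughly like this (file names are only examples):

# in the guest's vm-bhyve .conf
#disk0_type="virtio-blk"   # guest crashes at boot with this
disk0_type="ahci-hd"       # works after the Windows recovery process
disk0_name="disk0.img"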

Now… I have tried to add a second, separate non-boot disk with virtio and kept the boot disk as non-virtio. I have also tried the latest and a few older virtio disk driver releases. It does not matter what I do, Windows 2019 crashes during boot. Immediately...




I have seen complaints that Microsoft is to blame for this. But it does work flawlessly in KVM… I do have super fast SSDs for the OS disk, so it works OK on non-virtio disk drivers. But I wonder if there is anybody who has a solution for how to make a virtio disk work on Windows Server, boot disk or not. It does not work at all today. At least not in FreeBSD 13.0-RELEASE-p2.



Tnx
Peo
 
I have seen complaints that Microsoft is to blame for this.
The Windows installation will disable any and all other disk controller drivers except for the ones it found during installation. So if you change the underlying controller, Windows can't do anything else but STOP because it just can't find the disks to boot from. It's the same when you switch from IDE to AHCI in the BIOS. You'll need to enable the AHCI driver before doing that or Windows will just boot into a blue screen.
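
On real hardware the usual trick (the generic Windows procedure, nothing bhyve-specific, and untested in this thread) is to force one boot into Safe Mode so Windows brings up all of its storage drivers:

rem elevated command prompt inside Windows, before changing the controller
bcdedit /set {current} safeboot minimal
rem shut down, switch the controller type, boot once into Safe Mode, then:
bcdedit /deletevalue {current} safeboot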

And I'm honestly not sure if Windows even has drivers for virtio-blk devices. It certainly doesn't have drivers for virtio-net because you need to install those separately.
 
I hear you... But I can see for sure that it works in KVM. And the disk performance is better with virtio. I can live with not using virtio on the boot disk as I have super fast SSDs and a lot of RAM for the VM, with ZFS on a host that has over 100GB RAM to use :)

But if you are right about this, I of course wonder why it works under KVM.


It is simple....

W2019 server on KVM
- Virtio network works
- Virtio boot disk works (but only if you first add a second virtio disk so the driver loads.)
- Virtio second disk works

W2019 server on bhyve, FreeBSD 13.0-RELEASE-p2

- Virtio network works
- Virtio boot disk does not work
- Virtio second disk does not work
i.e. a virtio disk does not work at all here


Note that it doesn't work in KVM either on the boot disk unless you first add a second virtio disk. That is the key to using virtio on the boot disk. But do note that in bhyve it doesn't matter even if you skip virtio on the boot disk and just try to use it on a second, unused disk. So I am not entirely sure I agree with you :)

As said, I can live with it. But I of course want to note it so it maybe will work later on in FreeBSD.... And maybe someone here on the forum has done a deeper analysis than me.
 
Addition...

--snip--
And I'm honestly not sure if Windows even has drivers for virtio-blk devices. It certainly doesn't have drivers for virtio-net because you need to install those separately.
--snip--

Yes. As I noted, I have installed the needed virtio disk and network drivers separately. And also tried different versions.
 
Ideally you'd hope it would *just boot* if moving from KVM that was using virtio-blk devices to a bhyve host using the same devices.

Obviously changing drivers is commonly a problem with Windows (god knows why they've never done something about it). Just out of interest, have you tried the nvme option? It's the current preferred choice for newer Windows versions as it is usually the highest performing and is supported without third-party drivers.
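
With vm-bhyve it should just be a one-line change in the guest config, along these lines (disk name is only an example):

disk0_type="nvme"
disk0_name="disk0.img"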
 

usdmatt You seem to have the same opinion as SirDice, that it probably is a Windows issue even though it works in KVM :) Could be, but the tests I have done don't line up with that. But I could have missed something or drawn the wrong conclusions based on the facts I have, of course. That is why I post :)

Regarding nvme.... I have no NVMe disks in the host server. But maybe that does not matter? And how could nvme boost disk performance the way virtio does? It works stably now that I have backed out from virtio-blk to ahci-hd. And I have reasonable performance (well.. really good actually..) with it, as the host SSDs are the best and the host disk cache is h u g e :)

This leads to two questions...

Where can I read more about the nvme option for bhyve? Interesting, but I have only seen the text in the bhyve config.example which does not say that much.

Shall I consider Windows Server virtio disk drivers as dead if using bhyve?

Tnx in advance
 
Check the driver version in the Windows Device Manager for the storage controller. I experienced trouble with newer drivers; for me the latest working storage driver was in virtio-win-0.1.187.
 
Hi:

I just wanted to post here as I had the exact same issue as OP, but I managed to resolve it.

Just like OP, I would BSOD when trying to swap to virtio-blk drivers for Windows 10 Enterprise despite following the many guides out there, most of which tell you to create a dummy storage device requiring virtio, install the drivers for it, reboot, and then swap the boot disk over to virtio.

In the end the issue was simply the virtio driver version; it seems that the latest 0.1.208 stable does not work with bhyve on FreeBSD 12.2.
When I uninstalled the driver and repeated the above steps with virtio 0.1.187, the VM successfully booted.
I wanted to let you all know so you don't end up wasting time on the latest virtio drivers.
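
For anyone reproducing this, the dummy-disk step in vm-bhyve terms is roughly the following (names are only examples): keep the boot disk on ahci-hd, add a small second virtio-blk disk, boot and install the 0.1.187 storage driver, shut down, and only then flip disk0_type to virtio-blk.

disk0_type="ahci-hd"
disk0_name="disk0.img"
disk1_type="virtio-blk"
disk1_name="dummy.img"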

Note: although the VM with 0.1.208 did not work with FreeBSD, it did work with Proxmox. This to me suggests a bhyve issue more than anything else.
 
I have been using churchers' vm-bhyve and Windows Server 2019 does work with the following. I have done both a fresh install and a KVM to bhyve migration.

I converted the image to raw with qemu-img, then tried creating a new Windows machine and using dd to copy it, before I realized the install I was migrating was on BIOS/MBR. So I took the copy of the image (leaving the live machine running elsewhere), booted it in Windows recovery mode and converted it to UEFI/GPT. Under KVM I have done the dummy-device swap before (add a second virtio disk, install the driver, switch), but couldn't get it working yesterday, then realized it wasn't needed when using nvme under bhyve. It easily booted under bhyve.
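
Roughly, the conversion steps were something like this (image names are placeholders; the MBR-to-GPT step is Microsoft's standard mbr2gpt tool run from the recovery command prompt, and the disk number may differ):

# on the old host: convert the qcow2 image to a raw image bhyve can use
qemu-img convert -p -f qcow2 -O raw w2019.qcow2 disk0.img

# inside the Windows recovery command prompt
mbr2gpt /validate /disk:0
mbr2gpt /convert /disk:0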

Anyway, here is the vm-bhyve config for the Windows Server 2019 machine, shamelessly taken from other sources.

# https://klarasystems.com/articles/from-0-to-bhyve-on-freebsd-13-1/
# If you want to pull a graphical console, you'll need the UEFI loader,
# no matter what OS you're installing on the guest.
loader="uefi"
graphics="yes"
xhci_mouse="yes"

# If not specified, cpu=n will give the guest n discrete CPU sockets.
# This is generally OK for Linux or BSD guests, but Windows throws a fit
# due to licensing issues, so we specify CPU topology manually here.
cpu=8
cpu_sockets=1
cpu_cores=4
cpu_threads=2

# Remember, a guest doesn’t need extra RAM for filesystem caching--
# the host handles that for it. 4G is ludicrously low for Windows on hardware,
# but it’s generally more than sufficient for a guest.
memory=16G

# put up to 8 disks on a single ahci controller. This avoids the creation of
# a new “controller” on a new “PCIe slot” for each drive added to the guest.
ahci_device_limit="8"

# e1000 works out-of-the-box, but virtio-net performs better. Virtio support
# is built in on FreeBSD and Linux guests, but Windows guests will need
# to have virtio drivers manually installed.
#network0_type="e1000"
network0_type="virtio-net"
network0_switch="public"

# bhyve/nvme storage is considerably faster than bhyve/virtio-blk
# storage in my testing, on Windows, Linux, and FreeBSD guests alike.
disk0_type="nvme"
disk0_name="disk0.img"

# This gives the guest a virtual "optical" drive. Specifying disk1_dev="custom"
# allows us to provide a full path to the ISO.
#disk1_type="ahci-cd"
#disk1_dev="custom"
#disk1_name="/zroot/bhyve/.iso/virtio-win-0.1.240.iso"

# windows expects the host to expose localtime by default, not UTC
utctime="no"

uuid="yourownid here"
network0_mac="yourownmachere"

graphics="yes"
graphics_port="5910"
graphics_listen="0.0.0.0"
#graphics_res="1920x1080"
graphics_res="1600x900"
#graphics_res="1280x720"
#graphics_res="1024x768"
#graphics_res="800x600"
graphics_wait="auto"

My problem now is that I finally got virt-manager working, but I set up networking with vm-bhyve using vm switch create and can't seem to get virt-manager to give me the option to use virtio networking. Any ideas?

I did include these in /boot/loader.conf
# vmm needed for virtualization support
vmm_load="YES"
nmdm_load="YES"
virtio_load="YES"
virtio_pci_load="YES"

Any help appreciated, thanks!
 
Found the solution for the virt-manager networking. I set up my vm-bhyve networking simply enough...

My bge1 interface is on 10.200.10.231 with gateway 10.200.10.1 in rc.conf; no other bridges are defined in rc.conf. The bge0 device is used for an internal network and NAT (a test setup using ipfw to NAT external ports to internal ones with the same IP as the host; works well for a virtual mail server).

vm switch list
# to start over
vm switch destroy public
vm switch destroy vswitch

vm switch create -a 10.200.10.232/24 public
vm switch create -a 10.88.88.1/24 vswitch
vm switch add public bge1
vm switch add vswitch bge0
vm switch list
NAME     TYPE      IFACE       ADDRESS           PRIVATE  MTU  VLAN  PORTS
public   standard  vm-public   10.200.10.232/24  no       -    -     bge1
vswitch  standard  vm-vswitch  10.88.88.1/24     no       -    -     bge0

Just edit the XML, ignore the error, and it will work with the default vm-bhyve network setup using virtio. After creating the switch, ifconfig -a shows the public switch as vm-public, so use vm-public as the bridge "Device". Going back to detail mode, it will display a warning: "Failed to find a suitable default network". Start the VM and everything works as expected despite the error. You can also just define the virtual switch address as the network address, like 10.200.10.0/24.
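
For reference, the relevant part of the guest XML (via virsh edit or virt-manager's XML view) ends up looking roughly like this, pointing at the vm-bhyve bridge:

<interface type='bridge'>
  <source bridge='vm-public'/>
  <model type='virtio'/>
</interface>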

The nice thing about virt-manager is the ability to add connections to other hypervisors. The drawback is that VMs created with virt-manager do not show up with vm list and vice versa. I was able to boot off the same image first with vm start VM, and the VM didn't show as running under virt-manager. Stop that and start it with virt-manager, and it doesn't show up with vm list, so I would guess it's better to choose one or the other. Also, I couldn't get the nvme option to work with virt-manager...

I like using vm-bhyve, but some colleagues want their GUI, and if you're managing more than a few hypervisors virt-manager is pretty slick.
 