A solid HOWTO with bhyve on FreeBSD 14 and Windows 11

I am glad to hear that you got it working!

I forgot to point that out explicitly, but yes: with Microsoft Windows you have to specify the sockets, cores, and threads explicitly.
As I recall, if you only specify the number of CPUs, bhyve creates N sockets with one core each.
This causes all sorts of issues with Windows guests. The regular Windows desktop editions simply do not support multiple sockets (or more than 2?), and the Server editions require per-socket licensing. Either way, you end up in a situation where your Windows guest only has one (or two?) CPUs available.
Or something along those lines... not a Windows expert - just remembering this from past troubleshooting sessions.
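For reference, the topology can be spelled out explicitly with bhyve's -c option. Everything below other than the topology syntax (memory size, zvol path, slot numbers, firmware path, VM name) is an illustrative sketch to adapt, not a known-good config:

```sh
# Sketch only -- adjust memory, disk path, and slots to your setup.
# Explicit topology: 1 socket x 4 cores x 2 threads = 8 vCPUs,
# which desktop Windows accepts (it limits by socket, not by core).
bhyve -c cpus=8,sockets=1,cores=4,threads=2 \
    -m 8G -H -w \
    -s 0,hostbridge \
    -s 3,nvme,/dev/zvol/zroot/win11 \
    -s 30,xhci,tablet \
    -s 31,lpc -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
    win11
```

With only -c 8, you would get the eight-sockets-of-one-core layout described above, and desktop Windows would ignore most of them.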
This all makes sense. Obviously, I'm also not a Windows expert (very far from it, in fact). I appreciate your help!
 
Could you post speed tests of Windows on bare metal (disk speed, CPU benchmark), and then virtualized with the GPU passed through? I see many benefits in ZFS snapshots for rolling back a Windows VM, but I don't want my compile times to suffer when building apps for Windows and Android.

I was thinking of trying the same thing: passing through the GPU and some USB ports for a KVM to switch over the mouse and keyboard.

On my Dell server I have successfully bhyve'd FreeBSD, Ubuntu, Windows 11, Windows Server, OpenBSD, NetBSD, and Home Assistant. I find I have most of them turned off at home, with only Home Assistant always running. I successfully passed through some USB ports there, so it's prepped for Zigbee and Z-Wave if I need them. I guess most people pass through a GPU to work with CUDA.
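For anyone wanting to replicate the USB passthrough part: the usual approach is to pass through a whole USB controller, reserved for the guest at boot via the pptdevs tunable. The bus/slot/function below is a placeholder; find the real numbers for your controller with pciconf -lv:

```sh
# /boot/loader.conf -- reserve a USB controller for bhyve passthrough
# (2/0/0 is a placeholder; match it to your pciconf -lv output)
vmm_load="YES"
pptdevs="2/0/0"
```

Then on the bhyve command line, something like -s 5,passthru,2/0/0 hands that controller (and everything plugged into it) to the guest.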

Currently I use a KVM switch to switch back and forth between a Windows 11 machine and a Mac mini M4, depending on which platform I need to compile for, but I think Windows 11 under FreeBSD would be cool.
 
Are any of you bhyve experts good at getting a GPU-accelerated VNC or RDP session up and running on a Xubuntu guest?

I have a Xubuntu guest that I can connect to using x11vnc bound to the console Xorg instance (DISPLAY=:0). Xorg is forced to start with a fake screen0, as there is no physical display connected to the passed-through GPU; it has to be headless, because I'm using the second D300 in a Mac Pro trash can, and that card has no physical display output. This works fine, but it uses software rendering.
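In case it helps others reproduce the headless setup: a fake-screen Xorg config typically uses the xf86-video-dummy driver, roughly like the sketch below. The identifiers and modeline are illustrative, and note that this driver explicitly gives you software rendering only:

```
# Sketch of a headless xorg.conf using the dummy driver
# (identifiers and the modeline are examples)
Section "Device"
    Identifier "DummyDevice"
    Driver "dummy"
    VideoRam 256000
EndSection

Section "Monitor"
    Identifier "DummyMonitor"
    HorizSync 28.0-80.0
    VertRefresh 48.0-75.0
    Modeline "1920x1080" 172.80 1920 2040 2248 2576 1080 1081 1084 1118
EndSection

Section "Screen"
    Identifier "screen0"
    Device "DummyDevice"
    Monitor "DummyMonitor"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1920x1080"
    EndSubSection
EndSection
```

Getting accelerated rendering out through VNC would instead need the real driver bound to the passed-through GPU, which is exactly the part that fails below.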

Running lspci | grep VGA in the Xubuntu guest returns the GPU correctly; it is recognised as an AMD Curacao XT. However, Linux will bind neither radeon nor amdgpu to it: in Xorg.0.log, the module gets kicked out.
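A few diagnostics that may show why the module gets bounced (nothing here assumes anything about the guest beyond the 00:05.0 address from this thread):

```sh
dmesg | grep -iE 'amdgpu|radeon'            # firmware-load or init errors
sudo modprobe amdgpu                        # try binding the module by hand
lspci -k -s 00:05.0                         # look for a "Kernel driver in use:" line
journalctl -k -b | grep -iE 'vgaarb|BAR'    # ROM/BAR mapping problems are common with passthrough
```

With GPU passthrough, a missing or unreadable video BIOS ROM and failed BAR mappings are frequent causes of the driver probing the device and then giving up.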

The device is attached to the guest at PCI 00:05.0. In the file below, the device named AMDGPU is actually trying to use the radeon driver; I just didn't update the device name for this run.

I'm testing using DISPLAY=:0 glxinfo | grep 'OpenGL', which reports llvmpipe (software rendering), and lspci -k -s 00:05.0, which lists "Kernel modules: radeon, amdgpu" but does not show any "Kernel driver in use".

Any suggestions? Attached is the Xorg.0.log from the Xubuntu guest.

UPDATE: I tried to get it working as pass-through to Windows 10. Same symptoms: it can detect the hardware and install the drivers, but the operating system kicks the drivers off the device due to some problem. I'm going to assume that particular GPU has a hardware fault.
 

I replaced the passed-through GPU with a different, known-working one. Same issue. I'm not convinced that passing the GPU through to the guest can result in a hardware-accelerated X or Wayland session unless you are physically connected to the passed-through device, perhaps? The VNC protocol seems to prevent it, and Linux always bounces the amdgpu module off the passed-through device, despite the card showing up fine in the Linux guest via lspci -v.

Does anyone else have a truly hardware-accelerated Windows or Linux guest running under bhyve that isn't using VESA or the Microsoft Basic Display Adapter?
 