bhyve GPU Passthru gives black screen in VNC - not even a "Tianocore" message

I'm using sysutils/vm-bhyve to manage my VMs. I created a Win11 VM following a few online tutorials without enabling GPU passthru, and it seems to work fine. However, the second I add GPU passthrough, connecting to the VM yields a black screen with no mouse, and it simply sits there.

AMD Threadripper processor, and the GPU being passed is a 7900XT living at pci0:67:0:0 for video and pci0:67:0:1 for audio. I believe I have everything in my BIOS set up correctly. dmesg | grep AMD-V shows:
Code:
AMD-Vi: IVRS Info VAsize = 64 PAsize = 52 GVAsize = 3 flags:0
ivhd0: <AMD-Vi/IOMMU ivhd with EFR> on acpi0
ivhd1: <AMD-Vi/IOMMU ivhd with EFR> on acpi0
ivhd2: <AMD-Vi/IOMMU ivhd with EFR> on acpi0
ivhd3: <AMD-Vi/IOMMU ivhd with EFR> on acpi0

my /boot/loader.conf

Code:
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
cryptodev_load="YES"
zfs_load="YES"
if_re_load="YES"
if_re_name="/boot/modules/if_re.ko"
pptdevs="67/0/0 67/0/1"
vmm_load="YES"
fusefs_load="YES"
hw.vmm.amdvi.enable="1"
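With those pptdevs set, the host's ppt(4) driver should claim both GPU functions at boot. A quick way to sanity-check this from the host is a sketch like the following (the device names ppt0/ppt1 are an assumption; match them against your own pciconf output, and note pciconf(8) is FreeBSD-only):

```shell
# Verify that ppt(4) has claimed the two GPU functions listed in pptdevs.
# ppt0/ppt1 are assumed device names; adjust to match your pciconf -l output.
for dev in ppt0 ppt1; do
  if pciconf -l 2>/dev/null | grep -q "^${dev}@"; then
    echo "${dev}: attached"
  else
    echo "${dev}: NOT attached - recheck pptdevs in /boot/loader.conf"
  fi
done
```

If either function is not attached, bhyve cannot pass it through no matter what the VM config says.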

Relevant lines in /etc/rc.conf
Code:
vm_enable="YES"
vm_dir="/vm"

My config file (minus UUID and MAC):
Code:
loader="uefi"
graphics="yes"

xhci_mouse="yes"
cpu=24
cpu_sockets=1
cpu_cores=12
cpu_threads=2
memory=24G

# put up to 8 disks on a single ahci controller.
# without this, adding a disk pushes the following network devices onto higher slot numbers,
# which causes windows to see them as a new interface
ahci_device_limit="8"

# ideally this should be changed to virtio-net and drivers installed in the guest
# e1000 works out-of-the-box
network0_type="e1000"
network0_switch="public"

disk0_type="nvme"
disk0_name="disk0.img"

# windows expects the host to expose localtime by default, not UTC
utctime="no"

graphics_res="1920x1080"

sound="yes"

passthru0="67/0/0=12:0"
passthru1="67/0/1=12:1"

If I comment out the two passthru lines, the VM boots up normally. Uncommenting them gives me a black VNC screen and no mouse.

EDIT - forgot the bhyve log file. All events up to and including "starting bhyve (run 1)" occur within the same second.

Code:
Aug 04  initialising
Aug 04 [loader: uefi]
Aug 04 [cpu: 24,sockets=1,cores=12,threads=2]
Aug 04 [memory: 24G]
Aug 04 [hostbridge: standard]
Aug 04 [com ports: com1]
Aug 04 [uuid: 5c412196-6697-11f0-b8c2-10ffe0bbc094]
Aug 04 [debug mode: no]
Aug 04 [primary disk: disk0.img]
Aug 04 [primary disk dev: file]
Aug 04 initialising network device tap0
Aug 04 adding tap0 -> vm-public (public addm)
Aug 04 bring up tap0 -> vm-public (public addm)
Aug 04 dynamically allocated port 5900 for vnc connections
Aug 04 booting
Aug 04 [bhyve options: -c 24,sockets=1,cores=12,threads=2 -m 24G -AHPw -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd -U -S]
Aug 04 [bhyve devices: -s 0,hostbridge -s 31,lpc -s 4:0,nvme,/vm/win11/disk0.img -s 5:0,e1000,tap0,mac=:de -s 12:0,passthru,67/0/0 -s 12:1,passthru,67/0/1 -s 7:0,fbuf,tcp=0.0.0.0:5900,w=1920,h=1080 -s 8:0,xhci,tablet -s 9:0,hda,play=/dev/dsp0]
Aug 04 [bhyve console: -l com1,/dev/nmdm-win11.1A]
Aug 04 starting bhyve (run 1)

(Here I let it sit for a few minutes in case the screen came on, but no luck, so I killed the VM.)

Aug 04 bhyve exited with status 143
Aug 04 destroying network device tap0
Aug 04 stopped

Am I missing something somewhere, or does GPU passthru have to be enabled from the very start when creating a VM?
 
If you add debug="yes" to your vm-bhyve config, then something useful might appear in the new bhyve.log (not vm-bhyve.log) file.
Another approach is to connect to your Win11 VM via RDP instead of VNC (prior setup needed with PCI passthru disabled), in case that works and highlights a problem with the GPU inside of Win11.
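Once RDP is enabled in the guest (done while passthru is still disabled), a quick reachability check from the host might look like this sketch (192.0.2.10 is a placeholder, not a real address from this setup; substitute your VM's IP):

```shell
# Check whether the guest's RDP port (3389) answers from the host.
# 192.0.2.10 is a placeholder address - substitute your VM's IP.
nc -z -w 3 192.0.2.10 3389 \
  && echo "RDP port open" \
  || echo "RDP port closed or unreachable"
```

If the port answers while VNC stays black, the guest is actually booting and the problem is confined to the display path.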
 
If you add debug="yes" to your vm-bhyve config, then something useful might appear in the new bhyve.log (not vm-bhyve.log) file.
Another approach is to connect to your Win11 VM via RDP instead of VNC (prior setup needed with PCI passthru disabled), in case that works and highlights a problem with the GPU inside of Win11.

Thanks. I'm sorting out how to do that now.

So it's possible that, with a completely black screen (i.e. not even the "tianocore" message coming up when connecting with VNC), I'll be able to connect via RDP?
 
I wonder if you're running into a similar issue to the one I hit when trying to pass through a PCIe device to a Windows 11 VM:

So it's possible that, with a completely black screen (i.e. not even the "tianocore" message coming up), I'll be able to connect via RDP?
Tianocore should definitely show up, as that happens before the guest OS gets loaded.
 
I wonder if you're running into a similar issue to the one I hit when trying to pass through a PCIe device to a Windows 11 VM:


Tianocore should definitely show up, as that happens before the guest OS gets loaded.
I'm not sure it's the same thing, because I don't even get the "tianocore" message and sending an F8 does nothing. The thread was useful as a sanity check, though, confirming that I put the correct information in my /boot/loader.conf to make IOMMU work (when I first tried passthru I got an error about something not being enabled; adding hw.vmm.amdvi.enable="1" to the file made the error go away).

Going down the rabbit holes spawned by that thread didn't yield anything, as everyone seems to use Intel processors.

I vaguely remember there being a command that would "pause" the VM until I could get VNC open and send an F8, just to check some things in there, but now I can't find it.
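For what it's worth, the AMD-Vi side can also be checked from a shell on the host. This is just a sketch using the sysctl name from the loader.conf tunable above:

```shell
# Confirm the AMD-Vi tunable took effect: prints the current value on a
# FreeBSD host with vmm(4) loaded, or a hint otherwise.
sysctl hw.vmm.amdvi.enable 2>/dev/null \
  || echo "hw.vmm.amdvi.enable not visible - is vmm(4) loaded?"
```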
 
If you add debug="yes" to your vm-bhyve config, then something useful might appear in the new bhyve.log (not vm-bhyve.log) file.
Another approach is to connect to your Win11 VM via RDP instead of VNC (prior setup needed with PCI passthru disabled), in case that works and highlights a problem with the GPU inside of Win11.

Sorry, forgot to mention this. When I add that debug line, bhyve.log is a blank file.
 
I vaguely remember there being a command that would "pause" the VM until I could get VNC open and send an F8, just to check some things in there, but now I can't find it.
graphics_wait="yes"

It's mentioned at the bottom of the page.
 
graphics_wait="yes"

It's mentioned at the bottom of the page.

Thanks. It didn't give me what I was hoping for, though. I still have a locked black screen: no "Tianocore" message, no <del> to go into setup, nothing.

Everything I have found says I've done everything right. I'm at a loss.
 
Is it possible that the 7900XT isn't supported for passthrough?

I'm going to test this later by swapping the 7900XT for an RX580. It's the only thought I have on this.
 
Have you tried using slot 3 instead of slot 12?
Also, does this occur with a Linux VM?

I am able to pass through my RX 7800 XT to an Ubuntu VM with no issue.
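In vm-bhyve config terms, trying slot 3 would just mean changing the mapping on the right-hand side of the passthru lines. A sketch (whether slot 3 actually helps here is exactly what's being tested):
Code:
passthru0="67/0/0=3:0"
passthru1="67/0/1=3:1"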
 
Have you tried using slot 3 instead of slot 12?
Also, does this occur with a Linux VM?

I am able to pass through my RX 7800 XT to an Ubuntu VM with no issue.
With the case I have, I do not have the room to shoehorn it into slot 3. I might be able to if the power supply shield is removable. I might try that.

I'll have to give the Linux VM a try too. I never considered that.
 
With the case I have, I do not have the room to shoehorn it into slot 3. I might be able to if the power supply shield is removable. I might try that.

I'll have to give the Linux VM a try too. I never considered that.
I meant the target slot on the VM, not on the real hardware.
 
monwarez oh... no, I hadn't... I didn't realize the guest slots were specific to certain hardware. None of the resources I found indicated that. I'll give that a go later tonight.
 