bhyve — My NVIDIA GPU can't be passed through from Linux (QEMU+KVM) to the second guest OS (Puppy Linux) that I tried to virtualize inside the first one (FreeBSD 13.1)

Hello.

I've configured Xubuntu 22.04 so that it allows me to pass my NVIDIA GPU through to guest OS VMs. This time I've chosen FreeBSD 13.1 as the guest, because I was curious to see whether bhyve supports passing my NVIDIA GPU through to another guest OS (Puppy Linux) that I have virtualized with bhyve.

This is how I have configured the FreeBSD 13.1 guest VM:

/boot/loader.conf

Code:
vmm_load="YES"
nmdm_load="YES"
if_tap_load="YES"
if_bridge_load="YES"
bridgestp_load="YES"
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
kern.racct.enable=1
aio_load="YES"
cryptodev_load="YES"
zfs_load="YES"
verbose_loading="YES"
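# reserve the GPU's four PCI functions (as they appear inside this FreeBSD guest) for ppt(4) / bhyve passthrough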
pptdevs="8/0/0 9/0/0 10/0/0 11/0/0"


pciconf -vl shows that pptdevs correctly reserved the addresses 8/0/0, 9/0/0, 10/0/0 and 11/0/0, which belong to the NVIDIA GPU:

Code:
ppt0@pci0:8:0:0:    class=0x030000 rev=0xa1 hdr=0x00 vendor=0x10de device=0x1e04 subvendor=0x19da subdevice=0x2503
    vendor     = 'NVIDIA Corporation'
    device     = 'TU102 [GeForce RTX 2080 Ti]'
    class      = display
    subclass   = VGA

ppt1@pci0:9:0:0:    class=0x040300 rev=0xa1 hdr=0x00 vendor=0x10de device=0x10f7 subvendor=0x19da subdevice=0x2503
    vendor     = 'NVIDIA Corporation'
    device     = 'TU102 High Definition Audio Controller'
    class      = multimedia
    subclass   = HDA

ppt2@pci0:10:0:0:    class=0x0c0330 rev=0xa1 hdr=0x00 vendor=0x10de device=0x1ad6 subvendor=0x19da subdevice=0x2503
    vendor     = 'NVIDIA Corporation'
    device     = 'TU102 USB 3.1 Host Controller'
    class      = serial bus
    subclass   = USB

ppt3@pci0:11:0:0:    class=0x0c8000 rev=0xa1 hdr=0x00 vendor=0x10de device=0x1ad7 subvendor=0x19da subdevice=0x2503
    vendor     = 'NVIDIA Corporation'
    device     = 'TU102 USB Type-C UCSI Controller'

At this point, inside the FreeBSD 13.1 guest OS, I tried to virtualize another OS, Puppy Linux:

Code:
bhyve -S -c sockets=1,cores=1,threads=1 -m 2G -w -H -A \
-s 0,hostbridge \
-s 1,ahci-cd,/home/marietto/Desktop/bhyve/Files/fossapup64-9.5.iso,bootindex=1 \
-s 2,virtio-blk,/home/marietto/Desktop/bhyve/Files/puppy.img,bootindex=2 \
-s 8:0,passthru,8/0/0,rom=TU102.rom \
-s 8:1,passthru,9/0/0 \
-s 8:2,passthru,10/0/0 \
-s 8:3,passthru,11/0/0 \
-s 10,virtio-net,tap18 \
-s 11,virtio-9p,sharename=/ \
-s 29,fbuf,tcp=0.0.0.0:5918,w=800,h=600,wait \
-s 30,xhci,tablet \
-s 31,lpc \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_BHF_CODE.fd \
vm0:18 < /dev/null & sleep 2 && vncviewer 0:18

Unfortunately it gives this error:

bhyve: PCI device at 8/0/0 is not using the ppt(4) driver
device emulation initialization error: Device busy

TigerVNC Viewer 64-bit v1.12.0
Built on: 2022-09-20 22:40
Copyright (C) 1999-2021 TigerVNC Team and many others (see README.rst)
See https://www.tigervnc.org for information on TigerVNC.

Tue Sep 27 01:19:56 2022
DecodeManager: Detected 8 CPU core(s)
DecodeManager: Creating 4 decoder thread(s)
CConn: unable to connect to socket: Connection refused (61)
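
For what it's worth, right before launching the nested VM I can re-check which driver owns the GPU inside the FreeBSD guest (just a sketch; the devctl(8) selector syntax is my assumption):

Code:
# inside the FreeBSD 13.1 guest: confirm that ppt(4) still owns the GPU
# function right before bhyve starts (repeat for 9/0/0, 10/0/0 and 11/0/0)
pciconf -l pci0:8:0:0

# if it shows up under another driver, it can be handed back to ppt(4)
# at runtime (assumed selector syntax, see devctl(8))
devctl detach pci0:8:0:0
devctl set driver pci0:8:0:0 ppt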

I tried removing the slots related to the GPU passthrough, and Puppy booted like a charm with this command:

Code:
bhyve -S -c sockets=1,cores=1,threads=1 -m 2G -w -H -A \
-s 0,hostbridge \
-s 1,ahci-cd,/home/marietto/Desktop/bhyve/Files/fossapup64-9.5.iso,bootindex=1 \
-s 2,virtio-blk,/home/marietto/Desktop/bhyve/Files/puppy.img,bootindex=2 \
-s 10,virtio-net,tap18 \
-s 11,virtio-9p,sharename=/ \
-s 29,fbuf,tcp=0.0.0.0:5918,w=800,h=600,wait \
-s 30,xhci,tablet \
-s 31,lpc \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_BHF_CODE.fd \
vm0:18 < /dev/null & sleep 2 && vncviewer 0:18

So, where could the error be in this specific scenario? The nested VM works, but I can't pass through the GPU, even though the host OS (Xubuntu) makes it available to the guest.
 
So which OS is the host, and how deep does the virtualization go? I may have got something wrong, but are those nested VMs?
Sorry if I misunderstood something, but it's not clear to me.
 
Host OS = Xubuntu 22.04: in this OS I can see the four PCI functions of the NVIDIA GPU

Code:
02:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)
02:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)
02:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)
02:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

and I have enabled nested virtualization using the following parameters:

Code:
nano /etc/modprobe.d/vfio.conf

options kvm ignore_msrs=1 report_ignored_msrs=0
options kvm-intel nested=y ept=y
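
For completeness, the rest of the host-side setup follows the usual vfio approach; this is only a rough sketch, not a copy of my exact files (the 10de:... IDs are simply the ones from the pciconf output above):

Code:
# verify that nested virtualization is really active after reloading kvm-intel
cat /sys/module/kvm_intel/parameters/nested     # should print Y (or 1)

# /etc/modprobe.d/vfio.conf (continued): bind the four GPU functions to vfio-pci
options vfio-pci ids=10de:1e04,10de:10f7,10de:1ad6,10de:1ad7
softdep nvidia pre: vfio-pci

# hand them to the FreeBSD guest on the QEMU command line; -cpu host is
# needed anyway so the guest sees VT-x and can run bhyve
-cpu host \
-device vfio-pci,host=02:00.0,multifunction=on \
-device vfio-pci,host=02:00.1 \
-device vfio-pci,host=02:00.2 \
-device vfio-pci,host=02:00.3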

Guest OS 1 = FreeBSD 13.1. Here I have used this /boot/loader.conf:

Code:
vmm_load="YES"
nmdm_load="YES"
if_tap_load="YES"
if_bridge_load="YES"
bridgestp_load="YES"
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
kern.racct.enable=1
aio_load="YES"
cryptodev_load="YES"
zfs_load="YES"
verbose_loading="YES"
pptdevs="8/0/0 9/0/0 10/0/0 11/0/0"

(I have used 8/0/0, 9/0/0, 10/0/0 and 11/0/0 because FreeBSD re-mapped the original addresses to these values.)

Guest OS 2 = Puppy Linux. This OS is invoked by the bhyve installed on guest OS 1 using these parameters:

Code:
bhyve -S -c sockets=1,cores=1,threads=1 -m 2G -w -H -A \
-s 0,hostbridge \
-s 1,ahci-cd,/home/marietto/Desktop/bhyve/Files/fossapup64-9.5.iso,bootindex=1 \
-s 2,virtio-blk,/home/marietto/Desktop/bhyve/Files/puppy.img,bootindex=2 \
-s 8:0,passthru,8/0/0,rom=TU102.rom \
-s 8:1,passthru,9/0/0 \
-s 8:2,passthru,10/0/0 \
-s 8:3,passthru,11/0/0 \
-s 10,virtio-net,tap18 \
-s 11,virtio-9p,sharename=/ \
-s 29,fbuf,tcp=0.0.0.0:5918,w=800,h=600,wait \
-s 30,xhci,tablet \
-s 31,lpc \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_BHF_CODE.fd \
vm0:18 < /dev/null & sleep 2 && vncviewer 0:18
 
If GPU passthrough doesn't work with bhyve(8) running on iron, it's not going to magically work as a nested VM either.
 
GPU passthrough with bhyve runs great on bare metal with both GPUs, NVIDIA and AMD: NVIDIA for Linux guests only, AMD for both Linux and Windows guests.
 
My other question is... what is your use case?
Last time I used GPU passthrough on my i7 Ivy Bridge, the performance loss was more than 15%.
I can't imagine a practical use for nested VMs without huge overhead, and most likely bhyve does not support nested passthrough; maybe KVM does.
 
I have one Intel i9 CPU with 12 cores, and Puppy is one of the lightest Linux distros around, so it can be used. Anyway, it is only an experiment. KVM supports nested VMs; for this reason I've chosen Linux as the host OS.
 
Hi Ziomario, did you blacklist your GPU on the host side and, in the BIOS setup, set the Primary Display to IGFX (CPU graphics / onboard graphics)? As hypervisor you used QEMU-KVM, didn't you? Have you used libvirt-bhyve for your nested guest? As SirDice already explained, GPU/VGA passthrough isn't possible with bhyve under nested virtualization, only on bare metal. Your configuration looks good, and if you want the best performance you can use virtio, VNC and tmux. Maybe, as Tyson said, you could try using a Linux distro as the guest VM and, with qemu-kvm, launch a nested FreeBSD guest ;) trying the passthrough there... I'm also experimenting with different settings inside a type-2 hypervisor and QEMU-KVM, to test Docker inside a nested VM... Only the brave!!! 😅 as the motto says. Good luck 🤞 it's hard work ⚒
Maybe this is helpful for you:
PCI passthrough via OVMF, very well explained
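
For reference, blacklisting the NVIDIA driver on the host side usually looks something like this (a sketch; the file name is arbitrary):

Code:
# /etc/modprobe.d/blacklist-nvidia.conf  (host side)
blacklist nouveau
blacklist nvidia
blacklist nvidia_drm
blacklist nvidia_modeset
blacklist nvidia_uvm

# rebuild the initramfs so the blacklist takes effect at boot (Ubuntu/Xubuntu)
sudo update-initramfs -u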
 
On the Linux side I have configured everything correctly, because I have virtualized FreeBSD as a guest OS, so there cannot be errors on that side. Why are you talking about bhyve nested virtualization? I'm not using FreeBSD as host and Linux as guest, or FreeBSD as host and another FreeBSD as guest; that would be bhyve nested virtualization. In this scenario the nested virtualization comes from Linux, and I have enabled it. So, are you saying that I could try this combination:

Host OS = Xubuntu 22.04
Guest OS 1 = Puppy Linux
Guest OS 2 = FreeBSD 13.1

?

Maybe I will try this, even if it is not what I want to do.
 
Sure, if you like, you could try that and leave bhyve aside, either because it can't pass host hardware through, or simply because FreeBSD can't expose its system to nested virtualization, while a Linux system can with KVM and QEMU as hypervisor. I don't know if it is possible, but for example in my case I could enable HVM inside a nested OS through the guest hypervisor QEMU. So in the "primary" system, the host, I have a hypervisor with a guest system, a Linux distro; a second hypervisor, QEMU with libvirt and OVMF, can launch a nested guest OS based on BSD, and finally inside it I can use Docker Desktop, or just Docker and docker-compose, and through minikube and a Kubernetes cluster launch a Docker virtual appliance.

I talked about bhyve nested virtualization because that's what it's called: you launched FreeBSD as a guest on a host with a Linux kernel, with QEMU-KVM as paravirtualization, and inside FreeBSD you used its hypervisor to launch Puppy Linux as the final system, a nested VM, with bhyve as the nested hypervisor. Isn't this your case, exactly as you explained above?

  1. One more consideration: I'm not sure, but it seems to me that you need at least two GPUs, one to pass through to a guest and another reserved for the host, for example an Intel UHD integrated chipset and an NVIDIA card.
  2. Enable the first one in the BIOS and blacklist the second one on the host side, enable KVM, then create a VM with QEMU and pass through the GPU graphics and audio functions, and the USB 3.1 controller too if they are in the same IOMMU group.
  3. You need a processor with VT-x, VT-d and EPT, and they must be enabled in the BIOS.
  4. Before enabling passthrough you need to launch the QEMU VM without it. After you have configured the guest system you can enable it and reboot.
  5. It is better to use libvirt and Virtual Machine Manager, which simplify the process (see the sketch after this list).
  6. The guest VM should be another Linux distro with QEMU, or Windows with Hyper-V, and finally you create a nested VM with FreeBSD inside it.
  7. It is better to have 16 GB of RAM, and you need to use VNC (TigerVNC) or SPICE the first time, then a Remote Desktop connection to the VM with xrdp inside FreeBSD if you use Windows, or simply the same again with a VNC client on the guest side and a VNC server on the "nested" guest side if you prefer to use a Linux distro again.
  8. Well, I'd suggest you use Arch Linux or Ubuntu, plus FreeBSD, or Ubuntu, Windows and FreeBSD, not Puppy Linux. FreeBSD doesn't need a graphical interface (GUI); a console is more than you need, and there are various VM images, ready to use, on the FreeBSD site.
  9. FreeBSD VM disk images come as vhd, vmdk, raw or qcow2: PAM is enabled, just log in as root.
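
As a concrete example of point 5, detaching the GPU from the host driver and attaching it to a libvirt guest can look like this (just a sketch; the domain name freebsd13 and the file hostdev-gpu.xml are placeholders, and the 02:00.x addresses come from the lspci output above):

Code:
# reserve the GPU and its audio function for passthrough (libvirt/virsh)
virsh nodedev-detach pci_0000_02_00_0
virsh nodedev-detach pci_0000_02_00_1

# attach them to an existing guest; hostdev-gpu.xml is a placeholder file
# containing the corresponding <hostdev type='pci'> definitions
virsh attach-device freebsd13 hostdev-gpu.xml --persistent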
Good job ;)
P.S.: just have a look at the link I put above in my answer, where passthrough, KVM and IOMMU are explained by the ArchWiki.

IOMMU (aka VT-d) needs to be enabled for all this stuff to work properly.
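
A quick way to check that on the Linux host is the usual loop over /sys/kernel/iommu_groups (the ArchWiki-style snippet, nothing specific to this box):

Code:
# list every IOMMU group with the devices it contains; the GPU functions
# (02:00.0 - 02:00.3 here) should ideally sit in a group of their own
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${d##*/}"
done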

I found two other links that explain how to manage VMs with libvirt/VMM:
Configuring Virtual Machines with Virsh, and a tutorial by Red Hat, KVM Paravirtualized (virtio) Drivers (Red Hat is the owner of this software),
and this is a little explanation about how it works.
 
You are right. If I do:

xubuntu --> qemu kvm + nesting enabled --> freebsd guest --> bhyve --> nvidia gpu --> puppy linux

I'm asking bhyve to act as a nested hypervisor. The fact that the nesting option has to be enabled on the host OS means that the nested OS comes into play starting from level 2. If bhyve supported nesting, I would have had to activate the option on FreeBSD as well, and it would have been valid for the guest OS at level 3. In any case bhyve acts as a nested hypervisor here, and I know for sure that bhyve does not support nesting. Anyway, that affects only the passthrough of the devices, because Puppy Linux booted correctly as a nested OS invoked by bhyve as a nested hypervisor. Thanks for your clarifications about how to pass a device through on a Linux OS with QEMU + KVM, but I know very well how to do that. So well that I'm working on the creation of a customized Linux distro that can pass one or more NVIDIA GPUs through out of the box, without configuring anything, using a tool called Cubic.
 