Hello.
I've configured Xubuntu 22.04 so that it can pass my NVIDIA GPU through to the guest-OS VMs. This time I've chosen FreeBSD 13.1 as the guest, because I was curious to see whether bhyve supports passing the GPU through again to a further guest OS (Puppy Linux) that I have virtualized with bhyve, nested inside FreeBSD.
This is how I have configured the FreeBSD 13.1 guest VM:
/boot/loader.conf
Code:
vmm_load="YES"
nmdm_load="YES"
if_tap_load="YES"
if_bridge_load="YES"
bridgestp_load="YES"
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
kern.racct.enable=1
aio_load="YES"
cryptodev_load="YES"
zfs_load="YES"
verbose_loading="YES"
pptdevs="8/0/0 9/0/0 10/0/0 11/0/0"
pciconf -lv confirms that pptdevs correctly reserved the addresses 8/0/0, 9/0/0, 10/0/0 and 11/0/0, which belong to the NVIDIA GPU:
Code:
ppt0@pci0:8:0:0: class=0x030000 rev=0xa1 hdr=0x00 vendor=0x10de device=0x1e04 subvendor=0x19da subdevice=0x2503
    vendor   = 'NVIDIA Corporation'
    device   = 'TU102 [GeForce RTX 2080 Ti]'
    class    = display
    subclass = VGA
ppt1@pci0:9:0:0: class=0x040300 rev=0xa1 hdr=0x00 vendor=0x10de device=0x10f7 subvendor=0x19da subdevice=0x2503
    vendor   = 'NVIDIA Corporation'
    device   = 'TU102 High Definition Audio Controller'
    class    = multimedia
    subclass = HDA
ppt2@pci0:10:0:0: class=0x0c0330 rev=0xa1 hdr=0x00 vendor=0x10de device=0x1ad6 subvendor=0x19da subdevice=0x2503
    vendor   = 'NVIDIA Corporation'
    device   = 'TU102 USB 3.1 Host Controller'
    class    = serial bus
    subclass = USB
ppt3@pci0:11:0:0: class=0x0c8000 rev=0xa1 hdr=0x00 vendor=0x10de device=0x1ad7 subvendor=0x19da subdevice=0x2503
    vendor   = 'NVIDIA Corporation'
    device   = 'TU102 USB Type-C UCSI Controller'
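For completeness, before launching the nested VM the binding can be double-checked from inside the FreeBSD guest with something like the following (a diagnostic sketch; the selectors are the ones from the pciconf output above):

```shell
# Each GPU function should be claimed by the ppt(4) stub:
# pciconf -l prints "pptN@pci0:B:S:F" when passthrough owns the device.
for dev in pci0:8:0:0 pci0:9:0:0 pci0:10:0:0 pci0:11:0:0; do
    pciconf -l "$dev"
done

# vmm(4) must also be loaded before bhyve can use the reserved devices.
kldstat -q -m vmm && echo "vmm loaded"
```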
At this point, inside the FreeBSD 13.1 guest OS, I tried to virtualize another OS, Puppy Linux:
Code:
bhyve -S -c sockets=1,cores=1,threads=1 -m 2G -w -H -A \
-s 0,hostbridge \
-s 1,ahci-cd,/home/marietto/Desktop/bhyve/Files/fossapup64-9.5.iso,bootindex=1 \
-s 2,virtio-blk,/home/marietto/Desktop/bhyve/Files/puppy.img,bootindex=2 \
-s 8:0,passthru,8/0/0,rom=TU102.rom \
-s 8:1,passthru,9/0/0 \
-s 8:2,passthru,10/0/0 \
-s 8:3,passthru,11/0/0 \
-s 10,virtio-net,tap18 \
-s 11,virtio-9p,sharename=/ \
-s 29,fbuf,tcp=0.0.0.0:5918,w=800,h=600,wait \
-s 30,xhci,tablet \
-s 31,lpc \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_BHF_CODE.fd \
vm0:18 < /dev/null & sleep 2 && vncviewer 0:18
Unfortunately it gives this error:
Code:
bhyve: PCI device at 8/0/0 is not using the ppt(4) driver
device emulation initialization error: Device busy
TigerVNC Viewer 64-bit v1.12.0
Built on: 2022-09-20 22:40
Copyright (C) 1999-2021 TigerVNC Team and many others (see README.rst)
See https://www.tigervnc.org for information on TigerVNC.
Tue Sep 27 01:19:56 2022
DecodeManager: Detected 8 CPU core(s)
DecodeManager: Creating 4 decoder thread(s)
CConn: unable to connect to socket: Connection refused (61)
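In case it matters, my understanding from devctl(8) is that ppt(4) can also be attached at runtime instead of via pptdevs in loader.conf; this is only a sketch of what I would try, and I haven't confirmed it changes anything in this scenario:

```shell
# Detach whatever driver currently owns the GPU functions and hand them
# to the ppt(4) passthrough stub (selectors match the pciconf output above).
for dev in pci0:8:0:0 pci0:9:0:0 pci0:10:0:0 pci0:11:0:0; do
    devctl clear driver -f "$dev"   # drop the current driver, if any
    devctl set driver "$dev" ppt    # bind the passthrough stub
done
```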
I tried removing the slots related to the GPU passthrough, and Puppy booted like a charm with this command:
Code:
bhyve -S -c sockets=1,cores=1,threads=1 -m 2G -w -H -A \
-s 0,hostbridge \
-s 1,ahci-cd,/home/marietto/Desktop/bhyve/Files/fossapup64-9.5.iso,bootindex=1 \
-s 2,virtio-blk,/home/marietto/Desktop/bhyve/Files/puppy.img,bootindex=2 \
-s 10,virtio-net,tap18 \
-s 11,virtio-9p,sharename=/ \
-s 29,fbuf,tcp=0.0.0.0:5918,w=800,h=600,wait \
-s 30,xhci,tablet \
-s 31,lpc \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_BHF_CODE.fd \
vm0:18 < /dev/null & sleep 2 && vncviewer 0:18
So, where could the error be in this specific scenario? The nested VM works, but I can't pass through the GPU, even though the host OS (Xubuntu) makes it available to the guest.