Current state of bhyve Nvidia passthrough?

T-Aoki, I have a Tesla P4 graphics card. Do you know if I need these firmware image files with this card?
I cannot find the Tesla P4 listed for anything other than the 575 series of drivers, even though the product name is "Tesla". The Tesla P40 can be found in older versions that still didn't have the GSP firmware kmod (it first appeared in the 560 series for FreeBSD).

This would mean that the Tesla P4 could be a recent generation of GPU (not the Tesla generation). And (not 100% sure, as there are too few reports) cutting-edge generations of GPUs like the RTX 5070 seem to mandate GSP firmware in order to work, at least currently.

So if you want to use it on FreeBSD, use the -devel versions of the drivers (x11/nvidia-driver-devel and, if needed, one of graphics/nvidia-drm-*-kmod-devel) and enable GSP firmware by adding hw.nvidia.registry.EnableGpuFirmware=1 to your /boot/loader.conf as described in pkg-message.

Note that the 2025Q2 quarterly branch of the ports tree still doesn't have the -devel versions of the drivers, so you need to use the main (aka latest) branch.
The next quarterly, 2025Q3, which should be branched in early July, ought to have them.
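For reference, a minimal sketch of those two steps (package name taken from the port name above; adjust to your setup):

Code:
# from the latest ports/pkg branch (not 2025Q2 quarterly, see the note above)
pkg install nvidia-driver-devel

# /boot/loader.conf (tunable named in the pkg-message)
hw.nvidia.registry.EnableGpuFirmware=1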
 

In your opinion, could adding that parameter also be useful for fixing the CUDA error described in this post?


I mean, this one:

but checking with cuda_check.cu failed.

Found 1 device(s).
Device: 0
Name: NVIDIA GeForce GTX 1650
Compute Capability: 7.5
Multiprocessors: 14
CUDA Cores: 896
Concurrent threads: 14336
GPU clock: 1665 MHz
Memory clock: 4001 MHz

cMemGetInfo failed with error code 201: invalid device context

1. I also checked with pytorch, the symptoms are similar to KVM vfio pci passthrough (see No process using GPU, but `CUDA error: all CUDA-capable devices are busy or unavailable`).
2. In TrueNAS Scale (probably using KVM), GPU passthrough failed until "CPU Mode" was changed to "Host Mode", indicating it's related to the CPU information provided by bhyve to the guest.
 
Not sure. I just know that the latest Nvidia GPUs like the RTX 5070 require GSP firmware to be loaded in order to work for "graphics" use cases like X11, judging from several reports/complaints. But the GTX 1650 is a Turing-generation GPU, so it should have a GSP in it. So it "possibly" has an effect.
OTOH, it was supported by driver versions from before the GSP firmware kmods were introduced, so it "possibly" does NOT have an effect, either.
But it would be worth testing; that way you can determine whether it makes a difference or not.
Unfortunately, I don't have any computer with a GTX 1650 in it, so I cannot even try it myself.
 

What about my Nvidia card? I have the GeForce RTX 2080 Ti.
 
Check its GPU generation (architecture). GSP is incorporated in Turing and later.

But even for Turing and later, the native FreeBSD driver shouldn't forcibly require GSP firmware if the GPU is already supported by the pre-560 series of drivers (the 550 series, 535 series and so on), as the GSP firmware kmods only started to be provided with the native FreeBSD drivers from the 560 series.
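If you're not sure which generation a card is, the chip name that pciconf reports on the host is usually enough to tell (a quick sketch; GP = Pascal, TU = Turing, GA = Ampere, AD = Ada):

Code:
# the device string shows the chip, e.g. 'TU102 [GeForce RTX 2080 Ti]'
pciconf -lv | grep -B3 display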
 

Anyway, something got broken after Nvidia driver 535. I don't understand whether the broken part is in the Nvidia driver (so starting from 550) and/or in the CUDA libraries. From the post above:

1. I also checked with pytorch, the symptoms are similar to KVM vfio pci passthrough (see No process using GPU, but `CUDA error: all CUDA-capable devices are busy or unavailable`).

the error happens even on Linux with KVM, so it should belong more to Nvidia than to FreeBSD. The doubt remains whether the error is in the driver, in the CUDA libraries, or both.
Anyway, in this post:

Hi nkla,
Thanks for your reply, my kernel is 5.4.0-52-generic;
And according to the UserWarning:

So, I added "export CUDA_VISIBLE_DEVICES=0" to ~/.bashrc (edited with gedit and sourced it);
Now it works fine;

import torch
torch.cuda.is_available()
True

he found a solution, but it does not work on FreeBSD. So FreeBSD is also involved, even if not directly.

What relation is there between the Linux kernel (5.4.0-52-generic?) and FreeBSD? I mean, he says he fixed the error by downgrading the Linux kernel version. But we use FreeBSD, not the Linux kernel, and yet we still get the same error that he fixed by downgrading the Linux kernel...

So, if the Linux kernel is not involved on FreeBSD at all, some mechanism related to it must be inside the Nvidia driver, starting from 550. Has this mechanism / bug been introduced starting from 550?
 
The GT 1030 is, IIUC, a Pascal-generation GPU, which doesn't have a GSP (GPU System Processor) in it. And the firmware image files are only for the GSP, so they shouldn't matter in your case (there is nowhere for them to be transferred into the GPU itself; they are just loaded into OS-side memory as a dummy kernel module).

GSPs are incorporated in the Tesla generation and later only.
Hi T-Aoki,

From my notes:
# GP104
# device id product
# 0x1b80 GP104 [GeForce GTX 1080]
# 0x1b81 GP104 [GeForce GTX 1070]
# 0x1b82 GP104 [GeForce GTX 1070 Ti]
# 0x1b83 GP104 [GeForce GTX 1060 6GB]
# 0x1b84 GP104 [GeForce GTX 1060 3GB]
# 0x1ba0 GP104 [GeForce GTX 1080 Mobile]
# 0x1ba1 GP104 [GeForce GTX 1070 Mobile]
# 0x1ba2 GP104 [GeForce GTX 1070 Mobile]
# 0x1bb0 GP104 [Quadro P5000]
# 0x1bb3 GP104 [Tesla P4]
# 0x1bb6 GP104 [Quadro P5000 Mobile]
# 0x1bb7 GP104 [Quadro P4000 Mobile]
# 0x1bb8 GP104 [Quadro P3000 Mobile]
# 0x1be0 GP104 [GeForce GTX 1080 Mobile]
# 0x1be1 GP104 [GeForce GTX 1070 Mobile]

# I found the documentation for the latest driver 550.127.05 here:
# https://us.download.nvidia.com/XFree86/FreeBSD-x86_64/550.127.05/README/index.html
# After searching through compatibility it appears the Tesla P4 isn't included there.
# Tesla P4 is indeed device ID 0x1BB3

# I found the 470.256.02 driver does list the Tesla P4 there. I will try to find the latest 470 series driver.
# Latest 470 driver doing a package search is 470.161.03.

If Tesla P4 is Pascal generation, then it also shouldn't be affected.
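For what it's worth, the driver series that are currently packaged can be listed straight from pkg (version numbers will of course drift over time):

Code:
# list the nvidia-driver packages available in the configured repo
pkg search -x '^nvidia-driver'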
 
You should change the string inside the file x86.c, otherwise Windows does not accept the Nvidia GPU, reporting error 43.

You can back up some important bhyve files (vmm.ko; bhyve*) that you have compiled for bhyve on 14.2, so that when you upgrade 14.2 to 14.3 you can swap the newly installed bhyve files for the older ones. I don't think Corvin will rebase his patches for 14.3.
Hi ZioMario, do you have a comprehensive list of all the files that need to be swapped? If possible I would create a FreeBSD virtual machine on my current system simply for doing this process and then transfer the required files to the host system.
 
I've upgraded one of my FreeBSD installations from 14.2 to 14.3, so now I want to check whether bhyve and the passthru work again. Stay tuned.
 

Any news on this attempt?

I hope you can provide some guidance on how to proceed if you find a working solution for 14.3-RELEASE.

Otherwise, I may have to go for 15.0-CURRENT at some point before October.

Also, any insight as to why you don't think the patches will be ported to 14.3-RELEASE?
I would hope it's because the focus is on bringing the patch into the main bhyve/FreeBSD source before 15.0 goes into code slush.

All the best.
 
Hello bro.

First of all, let's say you are on FreeBSD 14.2: apply all of Corvin's patches and modify the KVM string. Then make a backup of the bhyve executable stored in /usr/sbin and of vmm.ko stored in /boot/modules. At this point, upgrade 14.2 to 14.3. The upgrade will probably install new bhyve and vmm.ko files, overwriting the older ones, so you would lose the old files that were able to pass your GPU through. So you should rename the new files and copy the "cracked / older" files back to the same places.
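A rough sketch of that backup/restore dance, using the paths mentioned above (the backup file names are just examples, and whether a 14.2 vmm.ko still loads cleanly on a 14.3 kernel is a separate question):

Code:
# before the upgrade: keep the patched binaries somewhere safe
cp /usr/sbin/bhyve /root/bhyve.142-patched
cp /boot/modules/vmm.ko /root/vmm.ko.142-patched

# after upgrading to 14.3: park the stock files and restore the patched ones
mv /usr/sbin/bhyve /usr/sbin/bhyve.143-stock
cp /root/bhyve.142-patched /usr/sbin/bhyve
mv /boot/modules/vmm.ko /boot/modules/vmm.ko.143-stock
cp /root/vmm.ko.142-patched /boot/modules/vmm.ko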
 
Ok, it seems that, at least for me, it's also necessary to patch `sys/amd64/vmm/x86.c` with the usual 'kvmkvm' stuff. Once the kernel was rebuilt, I was able to access the GPU with the Nvidia 570 server drivers (I want to reduce that to headless, as I need the VM for CUDA only):

Code:
root@baraddur:/home/sysadmin# nvidia-smi
Sun Mar 16 17:16:03 2025      
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.86.15              Driver Version: 570.86.15      CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4080 ...    Off |   00000000:00:02.0 Off |                  N/A |
| 32%   31C    P0             37W /  320W |       1MiB /  16376MiB |      3%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                        
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

A simple CUDA test is working too, from `https://gist.github.com/f0k/0d6431e3faa60bffc788f8b4daa029b1`:

Code:
root@baraddur:/home/sysadmin# vim cuda_check.c
root@baraddur:/home/sysadmin# nvcc -o cuda_check cuda_check.c -lcuda
root@baraddur:/home/sysadmin# ./cuda_check
Found 1 device(s).
Device: 0
  Name: NVIDIA GeForce RTX 4080 SUPER
  Compute Capability: 8.9
  Multiprocessors: 80
  CUDA Cores: 10240
  Concurrent threads: 122880
  GPU clock: 2550 MHz
  Memory clock: 11501 MHz
  Total Memory: 15954 MiB
  Free Memory: 15700 MiB

Some other tests are working too, so the next step is to create a properly shaped machine and try out Python frameworks.

Anyway, at this point I'm not really sure whether the above info about using the Nvidia drivers without the 'kvmkvm' change is accurate.
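For anyone wanting to go the same route, a rough sketch of the rebuild after editing sys/amd64/vmm/x86.c (assuming the patched source tree is already in /usr/src):

Code:
cd /usr/src
make -j$(sysctl -n hw.ncpu) buildkernel
make installkernel
# rebuild the patched bhyve userland tool as well
cd usr.sbin/bhyve && make && make install
shutdown -r now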
Thank you

I am having the exact same error you described in your previous post:
[ 6.316807] nvidia 0000:00:06.0: can't derive routing for PCI INT A
[ 6.317769] nvidia 0000:00:06.0: PCI INT A: no GSI - using ISA IRQ 10
[ 6.648282] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 570.133.07 Fri Mar 14 12:42:57 UTC 2025
[ 6.679465] [drm] [nvidia-drm] [GPU ID 0x00000006] Loading driver
[ 13.947002] [drm:nv_drm_load [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000006] Failed to allocate NvKmsKapiDevice
[ 13.947785] [drm:nv_drm_register_drm_device [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000006] Failed to register device

I have used the code from https://github.com/Beckhoff/freebsd-src/tree/phab/corvink/14.2/nvidia-wip both unchanged and with the patch file applied (after removing all code changes for usr.sbin/bhyve/pci_passthru.c, since those changes are already included in the GitHub code above). I get this error in both cases and cannot get it to work. Could you possibly share exactly how you got it working?
 
Hey group. I have been following this thread for some time. Is it possible that it is the way GPU drivers interface with - 'talk to' - the base system kernel? If it makes the community feel any better: on Linux under WSL, and I am pretty sure on a bare Linux base system too, GPU passthrough is not robust. Stdout logging is subpar. Setting up CUDA to run GPU compute isn't a walk in the park either, unfortunately. I need to clean up this notebook 👀 - ML/DL in the cloud
 
on Linux under WSL, and I am pretty sure on a bare Linux base system too, GPU passthrough is not robust.
Historically, passing through devices (not limited to GPUs) has been fragile.
If any of the "resources" such as interrupts, physical memory ranges, I/O ports, ... are NOT properly masked when the base OS initializes, it easily fails.

I think this is because 3rd-party VMs like VirtualBox, QEMU, ... emulate a specific device and then call into the underlying base OS, IMHO.

What I think is mandatory for sanely allocating specific devices to a specific OS is to have all hardware resources fully virtualized by the UEFI firmware (including the additional firmware on video cards and so on), to force everything through the UEFI runtime services, and to let admins configure which resources are shared and which are occupied in the UEFI firmware configuration.
 
Could someone provide a working bhyve command with the patches applied? I would like to rule out whether I am doing something wrong in my bhyve command. This is the bhyve command I have below:

bhyve -c 4 -m 8192M -H -A -P -S -W -w \
-s 0,hostbridge \
-s 4,ahci-hd,/dev/zvol/zroot/vm/ubuntu/disk0 \
-s 5,virtio-net,tap0 \
-s 6,passthru,66/0/0 \
-s 31,lpc -l com1,stdio \
-l bootrom,./linuxguest_VARS.fd \
ubuntu-24-04-2
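
One quick sanity check before tweaking the command itself: most Nvidia cards expose a companion audio function next to the GPU, and both usually have to be passed through. A sketch of how to look (bus 66 taken from the command above):

Code:
# on the host: list every PCI function at bus 66, device 0
pciconf -l | grep 'pci0:66:0'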
 
I have a Tesla T4 and I followed the instructions in https://dflund.se/~getz/Notes/2024/freebsd-gpu/. I was able to run nvidia-smi in my VM,
Code:
root@debian12:~# nvidia-smi
Mon Jul 21 03:42:08 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.05              Driver Version: 560.35.05      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla T4                       On  |   00000000:00:08.0 Off |                    0 |
| N/A   46C    P0             27W /   70W |       1MiB /  15360MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  Tesla T4                       On  |   00000000:00:09.0 Off |                    0 |
| N/A   47C    P0             28W /   70W |       1MiB /  15360MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  Tesla T4                       On  |   00000000:00:0A.0 Off |                    0 |
| N/A   45C    P0             26W /   70W |       1MiB /  15360MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
but cuda_check failed with invalid device context,
Code:
root@debian12:~# ./cuda_check
Found 3 device(s).
Device: 0
  Name: Tesla T4
  Compute Capability: 7.5
  Multiprocessors: 40
  CUDA Cores: 2560
  Concurrent threads: 40960
  GPU clock: 1590 MHz
  Memory clock: 5001 MHz
  cMemGetInfo failed with error code 201: invalid device context
Device: 1
  Name: Tesla T4
  Compute Capability: 7.5
  Multiprocessors: 40
  CUDA Cores: 2560
  Concurrent threads: 40960
  GPU clock: 1590 MHz
  Memory clock: 5001 MHz
  cMemGetInfo failed with error code 201: invalid device context
Device: 2
  Name: Tesla T4
  Compute Capability: 7.5
  Multiprocessors: 40
  CUDA Cores: 2560
  Concurrent threads: 40960
  GPU clock: 1590 MHz
  Memory clock: 5001 MHz
  cMemGetInfo failed with error code 201: invalid device context
It is the same with or without Corvin's 14.2 patch. For those who have had better luck, could you please tell me which Linux distro and version and which Nvidia packages you installed in your bhyve VM? I scoured for clues and tried many different things but really hit a brick wall; any help would be greatly appreciated.
 
Still works on 14.3 with my 3090; the patch for 14.2 applied cleanly. I've heard about issues with Nvidia professional cards where a second PCI device isn't there on boot. I run Debian in my VM and the upgrade was smooth.
 
Could you run any CUDA program, e.g., cuda_check?
 
Assuming this is the cuda_check program https://gist.github.com/f0k/0d6431e3faa60bffc788f8b4daa029b1

Code:
Found 1 device(s).
Device: 0
  Name: NVIDIA GeForce RTX 3090
  Compute Capability: 8.6
  Multiprocessors: 82
  CUDA Cores: 10496
  Concurrent threads: 125952
  GPU clock: 1695 MHz
  Memory clock: 9751 MHz
  Total Memory: 24259 MiB
  Free Memory: 4104 MiB

I just checked out the 14.3 branch, applied the patch and compiled/installed.

Are you sure you're passing through the entire card? I've heard about this issue before for the pro cards.
 
Thanks for confirming! cuda_check passed for me after switching the loader from "grub" to "uefi" (14.3-RELEASE with the patch applied). I didn't realize the loader matters.

EDIT: Info about the Linux guest: OS: Ubuntu 22.04.4, nvidia-driver version: 575.64.03, GPU: NVIDIA GeForce GTX 1650.
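If you're using vm-bhyve, that is just the loader line in the guest config (sketch; drop any grub_run_* lines when switching):

Code:
# was: loader="grub"
loader="uefi"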
 
Just popping in with my experience.

I'm using FreeBSD 14.2 to run an Ubuntu virtual machine and have had success with the following:

I used the patch:
Code:
# cd /usr/
# rm -rf /usr/src
# git clone https://github.com/beckhoff/freebsd-src /usr/src
# cd /usr/src
# git checkout -f origin/phab/corvink/14.2/nvidia-wip
# cd /usr/src/usr.sbin/bhyve
# make && make install

I installed the following. I don't know if they're all needed, but I didn't get any conflicts:

Code:
# pkg install bhyve-firmware edk2-bhyve grub2-bhyve vm-bhyve-devel

The devices I wanted to pass through were the GPU and Mellanox card with the following pciconf info:
Code:
ppt0@pci0:7:0:0:        class=0x030000 rev=0xa1 hdr=0x00 vendor=0x10de device=0x1bb1 subvendor=0x10de subdevice=0x11a3
    vendor     = 'NVIDIA Corporation'
    device     = 'GP104GL [Quadro P4000]'
    class      = display
    subclass   = VGA
ppt1@pci0:7:0:1:        class=0x040300 rev=0xa1 hdr=0x00 vendor=0x10de device=0x10f0 subvendor=0x10de subdevice=0x11a3
    vendor     = 'NVIDIA Corporation'


ppt2@pci0:145:0:0:      class=0x020700 rev=0x00 hdr=0x00 vendor=0x15b3 device=0x1011 subvendor=0x15b3 subdevice=0x0179
    vendor     = 'Mellanox Technologies'
    device     = 'MT27600 [Connect-IB]'
    class      = network
    subclass   = InfiniBand


I set them to pptdevs for passthru using /boot/loader.conf:
Code:
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
cryptodev_load="YES"
zfs_load="YES"
vmm_load="YES"
hw.vmm.enable_vtd=1

pptdevs="145/0/0 7/0/0 7/0/1"

and for the VM I have the following config file:
Code:
loader="grub"
grub_run_partition="gpt2"
grub_run_dir="/grub"

cpu=8
custom_args="-p 4 -p 6 -p 8 -p 10 -p 12 -p 14 -p 16 -p 18"

memory=8192M
wired_memory=yes

network0_type="virtio-net"
network0_switch="public"

disk0_dev="custom"
disk0_type="ahci-hd"
disk0_name="/dev/zvol/zroot/ubuntu_vm_disk"

passthru0="7/0/0"
passthru1="7/0/1"
passthru2="145/0/0"

pptdevs="msi=on"

uuid="38c6aa07-12c7-11f0-8e5c-0894ef4d85e6"
network0_mac="58:9c:fc:0d:bb:8a"


Inside the Ubuntu virtual machine I installed the nvidia-535 drivers, rebooted, and got the following:

Code:
jholloway@ubuntuvm:~$ nvidia-smi
Mon Apr  7 16:16:51 2025      
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01             Driver Version: 535.183.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Quadro P4000                   Off | 00000000:00:06.0 Off |                  N/A |
| 46%   35C    P8               5W / 105W |      4MiB /  8192MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                        
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

So far it is working well. Plex running on the VM detects the card and is performing hardware transcoding as expected. It will even show the transcoding in nvidia-smi if I'm watching something on Plex at the time!

I have encountered one apparent bug where the VM didn't shut down properly or something and the Nvidia driver was unable to find the card. Even turning the VM off and on via the host didn't solve the problem. Fortunately, when I rebooted the host and started the VM up again the problem had solved itself. I'm not sure what caused this bug as I have been unable to reproduce it, but I suspect it has something to do with the Ubuntu VM being reset from inside the VM, or being shut off by the command vm restart ubuntu_vm.

But aside from that hiccup, both the GPU passthrough and the Mellanox passthrough are working well.
I use a few different configs - once one worked, now it doesn't, very strange.
I used yours now - it worked fine, but nvidia-smi was showing no devices found, so I had to change "bhyve bhyve" to "KVMKVMKVM\0\0\0" - now nvidia-smi works.
P.S. I use Debian 12.
P.P.S. I need to try ollama now to see if everything is fine.
 

Why do Quadro P4000 users seem to need to set:

Code:
hw.vmm.enable_vtd=1

to activate passthru of the GPU inside a Linux VM (maybe even inside a Windows VM?), while GeForce RTX * cards do not need that parameter? I have an RTX 2080 Ti, have never used it, and everything works like a charm.
 
Not only that, but where in the man pages are these loader tunables listed? I searched the vmm man page, the loader man page and others. The only tunable listed in the vmm man page is the one below:
hw.vmm.maxcpu
Maximum number of virtual CPUs. The default is the number of
physical CPUs in the system.
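One way to at least enumerate what the loaded vmm.ko exposes, descriptions included, is sysctl itself (not a substitute for proper man pages, but it shows the knobs):

Code:
# requires vmm.ko to be loaded; -d prints descriptions instead of values
sysctl -d hw.vmm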
 
Could someone provide a working bhyve command with the patches applied? I would like to rule out whether I am doing something wrong in my bhyve command. This is the bhyve command I have below:

bhyve -c 4 -m 8192M -H -A -P -S -W -w \
-s 0,hostbridge \
-s 4,ahci-hd,/dev/zvol/zroot/vm/ubuntu/disk0 \
-s 5,virtio-net,tap0 \
-s 6,passthru,66/0/0 \
-s 31,lpc -l com1,stdio \
-l bootrom,./linuxguest_VARS.fd \
ubuntu-24-04-2
Hello!
The GPUs that I have, AMD and Nvidia, both have 2 PCI devices: one is the GPU and the other is the related audio device.
a) make sure you have 2 ppt devices
b) you may want to include your GPU's UEFI BIOS ROM
Bash:
# pciconf -lbcevV
ppt0@pci0:2:0:0:        class=0x030000 rev=0xa1 hdr=0x00 vendor=0x10de device=0x1d01 subvendor=0x1462 subdevice=0x8c98
    vendor     = 'NVIDIA Corporation'
    device     = 'GP108 [GeForce GT 1030]'
    class      = display
<--snip>

ppt1@pci0:2:0:1:        class=0x040300 rev=0xa1 hdr=0x00 vendor=0x10de device=0x0fb8 subvendor=0x1462 subdevice=0x8c98
    vendor     = 'NVIDIA Corporation'
    device     = 'GP108 High Definition Audio Controller'
    class      = multimedia
<--snip>

Here's my bhyve command

Bash:
/usr/sbin/bhyve -A -H \
-c 8 -m 16G -w \
-s 31:0,lpc -l bootrom,/usr/share/bhyve/firmware/BHYVE.fd,fwcfg=qemu \
-s 0,hostbridge,model=i440fx  \
-s 4:0,nvme,/dev/zvol/dsk/rpool1/zones/ubuntu \
-s 6:0,virtio-net,tap0 \
-s 30:0,fbuf,tcp=0.0.0.0:5905,w=1920,h=1080 \
-s 30:1,xhci,tablet \
-S \
-s 9:0,passthru,2/0/0,rom=/home2/vm/GP108.rom \
-s 9:1,passthru,2/0/1 \
testvm
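
Inside the guest you can then confirm that both functions actually arrived (Debian/Ubuntu example):

Code:
# should list the VGA controller and the HDA audio function
lspci -nn | grep -i nvidia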
 