Solved Trying to get GTX 980 working with Xorg

Oh sorry, somewhere in the back of my mind I had read VirtualBox, so that does not apply.
Make sure the GPU is not used by the host.

For the future:

Furthermore:
The GPU is not in use by the host: I have two graphics cards, and the one in the virtual machine is not accessible by the host.
However, this is not a vGPU either. I am not splitting up a GPU; I'm passing one full GPU into the VM, as I have two graphics cards.
 
The GPU passthrough isn't the issue, so there's not much point going into it; the GPU is passed into the virtual machine correctly, as I used the same procedure when adding it to my Windows virtual machine, which works fine with the GPU.
The issue, at least I think, is that there is now both the VM's virtual GPU driver and the passthrough GPU in the VM, and FreeBSD doesn't know which one to choose. I'm hesitant to say that's the actual issue, though, as it may be something else.
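For reference, both devices should show up if you list the PCI display devices inside the guest (a sketch; the exact output will differ per setup):

```sh
# Inside the FreeBSD guest: list PCI devices with vendor info.
# Both the emulated display adapter and the passed-through
# GTX 980 (vendor NVIDIA) should appear as class "display".
pciconf -lv | grep -B 3 -i display
```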

I did use a similar method to the ones shown in those links to set up GPU passthrough, although I did it specifically for Arch Linux rather than Ubuntu or CentOS. The GPU passthrough works, however, as the 980 is recognised in the VM.
 
The GPU passthrough isn't the issue, so there's not much point going into it; the GPU is passed into the virtual machine correctly, as I used the same procedure when adding it to my Windows virtual machine, which works fine with the GPU.
The issue, at least I think, is that there is now both the VM's virtual GPU driver and the passthrough GPU in the VM, and FreeBSD doesn't know which one to choose. I'm hesitant to say that's the actual issue, though, as it may be something else.
From your log in message #24, the first error that I see:
Code:
[    23.394] (II) NVIDIA dlloader X Driver  535.104.05  Sat Aug 19 00:40:33 UTC 2023
[    23.394] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
[    23.394] (--) Using syscons driver with X support (version 2.0)
[    23.394] (--) using VT number 9

[    23.403] (EE) No devices detected.

Comparing with another NVidia Xorg log (running on bare metal):
Code:
[    84.082] (II) NVIDIA dlloader X Driver  460.56  Tue Feb 23 23:25:01 UTC 2021
[    84.082] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
[    84.083] (--) Using syscons driver with X support (version 2.0)
[    84.084] (--) using VT number 9

[    84.101] (II) Loading sub module "fb"
[    84.101] (II) LoadModule: "fb"
[    84.101] (II) Loading /usr/local/lib/xorg/modules/libfb.so
[    84.103] (II) Module fb: vendor="X.Org Foundation"
[    84.103] 	compiled for 1.20.9, module version = 1.0.0
[    84.103] 	ABI class: X.Org ANSI C Emulation, version 0.4
[    84.103] (II) Loading sub module "wfb"
[    84.103] (II) LoadModule: "wfb"
It seems that the next step for Xorg should have been loading a "framebuffer" module, if I'm not mistaken. Perhaps something is missing inside your virtualized environment.

You could experiment with a higher level of logging; see startx(1) and Xorg(1). For example:
startx -- -verbose 5 -logverbose 5
and termbin paste your Xorg log file afterwards.
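A possible sequence for that, assuming nc(1) is available for the termbin paste (note the log may land in ~/.local/share/xorg/Xorg.0.log instead of /var/log when startx runs as a regular user):

```sh
# Start X with maximum core and driver logging.
startx -- -verbose 5 -logverbose 5

# After X exits, paste the log to termbin (prints a URL to share).
nc termbin.com 9999 < /var/log/Xorg.0.log
```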

[...] I did use a similar method to the ones shown in those links to set up GPU passthrough, although I did it specifically for Arch Linux rather than Ubuntu or CentOS. The GPU passthrough works, however, as the 980 is recognised in the VM.
As this is similar (and working!), try Xorg logging there and compare (diff) that Xorg log output with the one from FreeBSD.

Further, you could try asking this on an appropriate FreeBSD mailing list; see Mailing Lists & FreeBSD mailing lists - lists overview (e.g. "freebsd-x11"). My guess is that the choice between cards isn't the core of the problem, but it would be interesting to know what device Xorg (specifically in a FreeBSD VM) expects at [ 23.403] as shown in my quote.
 
Thank you for the detailed reply:

Logging with those commands at the specified higher verbosity leads to similar output:

From it, I don't see a difference, apart from it now saying it's using a higher verbosity, as expected.
I don't think I'll be able to compare Xorg logs, as the only other virtual machine I have with a graphics card in it is my Windows 10 VM, which doesn't have Xorg.

I'm not sure what would be missing from the virtualised environment, since it replicates an actual PC. But it's odd that my GTX 980 shows no display on the secondary monitor, and output only appears through the SPICE display.
 
I've got good news to report: I've made a lot of progress.

Continuing from your idea that there may be something wrong with the virtual machine setup, I destroyed the previous one and made a new one! I set the chipset to Q35, changed from BIOS to UEFI, and started the install again. I haven't gotten through the installation, but the FreeBSD installer output now appears on my secondary monitor, which is a great sign.

The installer doesn't seem to be working yet; it fails and drops me to a mountroot prompt, but I'm hopeful I'll figure that one out.
I'll try swapping from the disc1 ISO to the dvd1 ISO and see if that changes anything.
 
Thank you for the link, but sadly it doesn't help, and pressing Enter just causes the VM to restart.

1698322898028.png


Here's the point at which it breaks; booting into safe mode via another option doesn't seem to help either.

Switching the CD-ROM from SATA to USB seems to solve that issue! I'm now able to see the option in the serial console and can continue.
 
And... we're back here.
1698328836297.png



However, I think I know why, but I'm unsure how to solve it.
When I press Enter at the boot menu, it seems to favour the integrated graphics over the 980. The screen connected to the 980 shows:
(It gets stuck after the boot menu.)
1698329053251.png


Whereas the SPICE / default display shows:

1698329089369.png


Is there a way to change which GPU is used by the boot menu? I tried pressing 5, but that just led me back to the boot menu.
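(One untested idea, purely an assumption on my part: the loader's console selection in /boot/loader.conf may influence where the boot-time output goes.)

```
# /boot/loader.conf -- hypothetical sketch, not verified on this VM
# Prefer the EFI framebuffer console for loader/boot output:
console="efi"
# or mirror it to the serial console as well:
# console="efi,comconsole"
```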
 
Edit: corrections.
I think the problem might be in your VM setup.
If you had set up graphics card passthrough for the Nvidia card and made the dedicated card the only graphics card for the VM, it would behave differently: the VM would not know about the iGPU and could by no means address it.
 
The VM has built-in integrated graphics; that is how it gets the SPICE output. I don't think it's actually an iGPU, rather just a basic emulated GPU, but I've been referring to it as that.
I'm unsure how I can remove those built-in integrated graphics.
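If I understand the libvirt domain XML correctly (this is a guess on my part), the emulated adapter is the <video> device in the domain definition, and setting its model to none should remove it. Roughly, via virsh edit:

```xml
<!-- Domain XML fragment: replace the emulated adapter ... -->
<video>
  <model type='qxl'/>
</video>

<!-- ... with no emulated video at all, leaving only the
     passed-through GPU: -->
<video>
  <model type='none'/>
</video>
```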
 
pikadevs You did not specify which your host is or which version of virt-manager you are using. Anyway, can you create an Ubuntu or Debian VM and test whether the VGA passthrough works?

If you can, perhaps the issue is that virt-manager and libvirt are tailored to work with Windows and Linux rather than FreeBSD; this is pretty common, unfortunately.

 
pikadevs You did not specify which your host is or which version of virt-manager you are using. Anyway, can you create an Ubuntu or Debian VM and test whether the VGA passthrough works?

If you can, perhaps the issue is that virt-manager and libvirt are tailored to work with Windows and Linux rather than FreeBSD; this is pretty common, unfortunately.

My host is Arch Linux, and I'm using the latest version of virt-manager, since I updated today and Arch gets the latest stable packages, although I'm unsure what the actual version number is.
I do not think the KVM/QEMU VM is failing to pass the graphics card into the FreeBSD VM, as FreeBSD sees and recognises it. People are also able to install FreeBSD within the VM without a GPU, relying only on the built-in graphics adapter.

1698340589599.png


I decided to test it with Pop!_OS, as it has an option to easily install Nvidia drivers, and the results were surprising, but good?
There was originally a startup display on my second monitor, but that froze as soon as the kernel was actually loaded, similar to FreeBSD.
However, just like FreeBSD, the graphics card is recognised within the virtual machine, and the display only appears on the SPICE display (apart from the frozen screen on the secondary monitor).

This further leads me to believe that the issue stems from the VM's built-in graphics adapter, and I'll need to find a way to remove it.
 
It's just as I believed!

I've gone to the Video QXL device and set it to None, which disables the built-in video adapter. The GPU is no longer cut off and the SPICE display no longer shows, so now I know it's actually running on the graphics card!

startx still fails with the "no screens found" error, but I know it's all on the GPU now, as I'm getting all the display output from it and SPICE is no longer taking it.
The Linux VM had the same issue, and setting that video model to None actually let the GPU output a display and work fully. Hopefully the same will be true for FreeBSD, and the "no screens" error is a different, unrelated problem.
 
More progress!

Perhaps you noticed already, but the graphics cards are "moving":
Code:
>> from your message #31
[   610.043] (!!) More than one possible primary device found
[   610.043] (--) PCI: (0@0:2:0) 1b36:0100:1af4:1100 rev 5, Mem @ 0xf4000000/67108864, 0xf8000000/67108864, 0xfd0d8000/8192, I/O @ 0x0000c180/32, BIOS @ 0x????????/65536
[   610.043] (--) PCI: (0@0:9:0) 10de:13c0:10de:13c0 rev 161, Mem @ 0xfc000000/16777216, 0xe0000000/268435456, 0xf0000000/33554432, I/O @ 0x0000c080/128, BIOS @ 0x????????/65536
versus
Code:
>> from your message #35
[    31.902] (!!) More than one possible primary device found
[    31.902] (--) PCI: (0@0:1:0) 1b36:0100:1af4:1100 rev 5, Mem @ 0x84000000/67108864, 0x80000000/67108864, 0x89f80000/8192, I/O @ 0x000070c0/32, BIOS @ 0x????????/65536
[    31.902] (--) PCI: (6@0:0:0) 10de:13c0:10de:13c0 rev 161, Mem @ 0x88000000/16777216, 0x382000000000/268435456, 0x382010000000/33554432, I/O @ 0x00006000/128, BIOS @ 0x????????/65536
If you still have that (old) hardcoded BusID, have a look at it. If one card is your target Nvidia card and the other a non-Nvidia card (use pciconf -lv), then you could perhaps get by without specifying the BusID.
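If the BusID does stay, it has to follow the card. Reading the second log's (6@0:0:0) as bus 6, device 0, function 0 (my interpretation of Xorg's bus@domain:device:function notation), the Device section would look roughly like this sketch (file name and Identifier are arbitrary):

```
# e.g. /usr/local/etc/X11/xorg.conf.d/driver-nvidia.conf
Section "Device"
    Identifier "NVIDIA Card"
    Driver     "nvidia"
    # Matches (6@0:0:0) in the log: bus 6, device 0, function 0
    BusID      "PCI:6:0:0"
EndSection
```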
 
Yep!

I did notice that, but only from reading other FreeBSD forums, so it's good to have it confirmed that this actually happens.

I am no longer specifying the BusID, since there's only one card now, so hopefully that goes well.


The new startx log now says nothing about two primary devices, which is good. Now I just need to find what's causing the "no screens" error.
 
startx is now working! I'm now seeing the window manager :D

What I did was remove the config file (driver-nvidia.conf) and try again, which produced a framebuffer error. Then I recreated the exact same file, and it seems to work!
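The file's exact contents aren't shown in this thread; for reference, a minimal nvidia Device section on FreeBSD typically looks like this (a sketch, not necessarily the exact file used here):

```
# /usr/local/etc/X11/xorg.conf.d/driver-nvidia.conf (sketch)
Section "Device"
    Identifier "NVIDIA Card"
    Driver     "nvidia"
EndSection
```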

Thanks so much for your help, Erichans; you've helped me tons, and thank you to everyone else who helped as well! Hopefully installing KDE and the rest shouldn't be too bad now.
 
If you like, mark your topic as Solved.

… removed the config file (driver-nvidia.conf) then tried it again and got a framebuffer error. Then I just made the exact same one again and it seems to work! …

I might have suggested NVIDIA's utility instead of the earlier struggles with manual configuration.

x11/nvidia-xconfig
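Roughly (a sketch; nvidia-xconfig writes /etc/X11/xorg.conf by default):

```sh
# Install NVIDIA's Xorg configuration utility from packages
pkg install nvidia-xconfig

# Generate a basic xorg.conf for the NVIDIA driver
nvidia-xconfig
```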



Postscript, 2024-03-03

Code:
[    19.443]
X.Org X Server 1.21.1.8
X Protocol Version 11, Revision 0
[    19.443] Current Operating System: FreeBSD freebsd 13.2-RELEASE-p4 FreeBSD 13.2-RELEASE-p4 GENERIC amd64
…

… passing one full gpu into the vm as I have two graphics cards …

… stuck after the boot menu)

1698329053251.png



EFI boot of 13.2-RELEASE-p4 should normally present a beastie icon that is not ASCII art; what's pictured is symptomatic of a bug that was fixed in 13.1-RELEASE.

pikadevs if ever you mark this topic as solved, please add a comment to tell which version of FreeBSD was pictured.

Thanks
 
Sorry for the extremely long delay; the post is now marked as solved. The version of FreeBSD pictured was FreeBSD 13.2-RELEASE ("FreeBSD-13.2-RELEASE-amd64-dvd1.iso").
 