Laptop with Intel integrated + dedicated NVIDIA GPU

jbo@

I own a Lenovo ThinkPad P2000 "mobile workstation" (six-core Intel Xeon E-2176M, NVIDIA Quadro P2000 Max-Q GPU, 32 GB ECC RAM, 1 TB NVMe SSD).

Windows 10 is capable of using both the Intel integrated UHD P630 graphics and the dedicated NVIDIA Quadro P2000 GPU. Not only can it switch between the two, it also seems to successfully use the integrated GPU for 2D tasks and offload only the heavy 3D work to the dedicated GPU.

I'm currently on a quest to make FreeBSD my primary (and hopefully only) desktop OS. Windows 10 via bhyve seems to work well enough for the few use cases where I have to use Windows.
So far I have managed to "convert" two workstations from Windows 10 to FreeBSD 13.0 - all of which have either only an Intel integrated GPU or only a dedicated NVIDIA GPU (always Quadro series).

What I'd like to know is how this particular dual-GPU scenario would play out in FreeBSD 13.0. I don't expect to be able to use both GPUs simultaneously like Windows 10 seems to do with DirectX 12; I'd already be satisfied with turning off / disabling the dedicated GPU when I don't need it (which is 90% of the time) and only firing up the dedicated Quadro GPU when I do. It's even okay if this is a manual action (hopefully without rebooting, though). Is this possible in/with FreeBSD 13.0 and x11/nvidia-driver?

Could somebody share technical knowledge, experience and other information relevant to this?
 
By using Vulkan you are able to use both Intel GPUs and NVIDIA GPUs.
Here is part of the output of vulkaninfo:
Code:
 Devices: count = 2
                GPU id = 0 (Intel(R) HD Graphics 530 (SKL GT2))
                Layer-Device Extensions: count = 3
                        VK_EXT_debug_marker     : extension revision 4
                        VK_EXT_tooling_info     : extension revision 1
                        VK_EXT_validation_cache : extension revision 1

                GPU id = 1 (NVIDIA GeForce GTX 960M)
                Layer-Device Extensions: count = 3
                        VK_EXT_debug_marker     : extension revision 4
                        VK_EXT_tooling_info     : extension revision 1
                        VK_EXT_validation_cache : extension revision 1
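If you want to check which devices the Vulkan loader sees, or steer a program toward one of them, something along these lines should work (a sketch: vulkaninfo comes from a port such as devel/vulkan-tools, --summary needs a reasonably recent build, and MESA_VK_DEVICE_SELECT is read by Mesa's implicit device-select layer; take the hex vendor:device IDs from the vulkaninfo output, e.g. 10de:139b would be the GTX 960M above):
Code:
# list the Vulkan-capable devices the loader can see
vulkaninfo --summary

# ask Mesa's device-select layer to put the NVIDIA GPU first
env MESA_VK_DEVICE_SELECT=10de:139b myVulkanProgram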
 
So I take it that this would apply when writing a (new) application. However, I'm interested in just using X11 with i3 and all the existing software, like I would on a regular single-GPU desktop machine.
Please correct me if I'm wrong on this.
 
Are you suggesting that this should just work out of the box with only minimal effort? Do you happen to have experience running such a setup yourself? Could you elaborate a bit more and share your actual real-world experience?
 
If you are familiar with ports-mgmt/poudriere, it is a matter of using the following repository as an overlay:


and building the drivers (here is an example with my current config):
Code:
poudriere bulk -j 13-0amd64 -O overlay -p HEAD x11/nvidia-driver x11/linux-nvidia-libs
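In case the overlay mechanics are unclear, a minimal workflow could look like this (a sketch, assuming the overlay checkout lives in /usr/local/overlay and that the 13-0amd64 jail and HEAD ports tree already exist):
Code:
# register an existing directory as a ports tree named "overlay" (null method)
poudriere ports -c -p overlay -m null -M /usr/local/overlay

# build the ports from the regular tree with the overlay applied on top
poudriere bulk -j 13-0amd64 -p HEAD -O overlay x11/nvidia-driver x11/linux-nvidia-libs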

Then use this file for the Xorg configuration:
/usr/local/etc/X11/xorg.conf.d/30-nvidia.conf
Code:
Section "ServerLayout"
  Identifier "whatever"
  Screen     0 "iGPU"
  Screen     1 "dGPU"
EndSection

Section "Device"
  Identifier "iGPU-dev"
  Driver     "modesetting"
  BusID      "PCI:0:2:0"
EndSection

Section "Device"
  Identifier "dGPU-dev"
  Driver     "nvidia"
  BusID      "PCI:1:0:0"
EndSection

Section "Monitor"
  Identifier "iGPU-dsp"
EndSection

Section "Monitor"
  Identifier "dGPU-dsp"
EndSection

Section "Screen"
  Identifier "iGPU"
  Device     "iGPU-dev"
  Monitor    "iGPU-dsp"
EndSection

Section "Screen"
  Identifier "dGPU"
  Device     "dGPU-dev"
  Monitor    "dGPU-dsp"
EndSection

Then you just have to add nvidia-modeset to kld_list in /etc/rc.conf:
Code:
sysrc kld_list+=" nvidia-modeset"
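The module can also be loaded right away, without waiting for a reboot:
Code:
# kld_list only takes effect at boot; load the module now
kldload nvidia-modeset

# confirm it (and the nvidia.ko it depends on) is loaded
kldstat | grep nvidia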

After doing that, to launch a program on the NVIDIA GPU (example in fish shell):
Code:
env __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia myOpenGLProgramToRun
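To verify that offloading really hits the dedicated GPU, comparing the reported renderer is a quick test (a sketch; glxinfo is available via graphics/glxinfo or the mesa-demos package):
Code:
# without offload: should name the Intel / modesetting renderer
glxinfo | grep "OpenGL renderer"

# with offload: should name the NVIDIA GPU
env __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia \
    glxinfo | grep "OpenGL renderer"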
 
Can someone help me understand what's happening here? With this patch, can I render my 3D projects in Blender with Cycles? Cycles requires the CUDA libraries, but as far as I know those aren't supported by FreeBSD. Does the patch add the calls to the Linux drivers so that Blender (or some other graphics tool) will be able to recognize the 3D acceleration of my 2080 Ti? Should I install the CUDA libraries anyway, or is the patch everything I need?
 
I'd already be satisfied with turning off / disabling the dedicated GPU when I don't need it (which is 90% of the time) and only firing up the dedicated Quadro GPU when I do. It's even okay if this is a manual action (hopefully without rebooting, though). Is this possible in/with FreeBSD 13.0 and x11/nvidia-driver?

Could somebody share technical knowledge, experience and other information relevant to this?
I have two ThinkPad W520s with NVIDIA Optimus technology and use the NVIDIA Quadro 1000M chip as the dedicated GPU. One is running FreeBSD 12.2-RELEASE-p7 and the other FreeBSD 12.1-RELEASE-p3.

Are you suggesting that this should just work out of the box with only minimal effort?
Yes.
I don't know what poudriere has to do with it. I've never used it in 16 years.

Do you happen to have experience running such a setup yourself?
Yes.
I just posted a screenshot of each one.

Could you elaborate a bit more and share your actual real-world experience?

In the BIOS I set the NVIDIA option to "Discrete Graphics". Yours is newer and may not have that setting.
I use the x11/nvidia-driver-304 driver.
Install x11/nvidia-settings and x11/nvidia-xconfig.
Run # nvidia-xconfig
In /etc/rc.conf add:
Code:
linux_enable="YES"

In /boot/loader.conf I have:
Code:
linux_load="YES"
nvidia-modeset_load="YES"
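If you prefer doing this non-interactively, both files can be edited from a shell (sysrc(8) handles rc.conf; the loader.conf lines are plain appends, run as root):
Code:
sysrc linux_enable=YES
printf 'linux_load="YES"\nnvidia-modeset_load="YES"\n' >> /boot/loader.conf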

My /etc/X11/xorg.conf:
Code:
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig:  version 460.67


Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"

    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/sysmouse"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"

    # generated from default
    Identifier     "Keyboard0"
    Driver         "keyboard"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Unknown"
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection
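After restarting X, the server log shows which driver claimed the screen (the log is typically /var/log/Xorg.0.log when X runs as root, or under ~/.local/share/xorg/ otherwise):
Code:
grep -iE 'nvidia|modeset' /var/log/Xorg.0.log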
 
What about applying the patch? Can you explain how to do that?
 
What about applying the patch? Can you explain how to do that?
I'm sorry, I wish I could help you, but I don't apply any "patch".

The way I laid it out is exactly the way I get NVIDIA to run on two W520s with the Quadro 1000M and two T61s with the Quadro NVS 140M chipset. Here is a screenshot of both chips at work:

[attached screenshots: sweet_papa_john.jpg, dreamgirl.jpg]
 
So far, I'm only offering OpenCL / NVENC support through the aforementioned hacks. And GPU offloading, mostly by accident. I actually tried Cycles with CUDA and wasn't impressed by Blender's implementation (versus CPU speed). That Blender setup is quite tricky, so I'm not going to explain it.
 
My CPU has 16 threads. I tried rendering with the CPU, but the speed is not comparable to my graphics card, an RTX 2080 Ti. Take into consideration also that I use DaVinci Resolve, and even there I need to render with the GPU because it is so much more powerful. But as you probably know, Blender, DaVinci and maybe other graphics tools need CUDA to tap the computational power of my graphics card, and FreeBSD does not support CUDA. In addition, some CG tools, like DaVinci Resolve, don't run under FreeBSD at all. Is the only way to have everything to run Linux virtualized with bhyve? Unfortunately bhyve lacks some features, and my GPU does not work properly when passed through. I think the solution is not too far off, because the nouveau driver worked with my card. Then again, CUDA does not support the nouveau driver, so it's of no use here. You are welcome to correct me if I have misunderstood something; I'm describing what I experience every day, but perhaps without the proper knowledge. I'm sorry for that.
 
ziomario, Poudriere is basically a way to use FreeBSD jails to build packages from the ports tree. The resulting packages are deposited into a directory that you can point pkg at, and you install packages on your system that way. "Using a repo with an overlay" basically refers to the idea that, with Poudriere, you can prepare your own copy of the tree, try applying different patches within that copy, and compile the whole thing in there. When the package pops out (like a chicken laying an egg), you can install it on the base system and see how it works. "Applying different patches" is the overlay that monwarez is talking about.

Actually applying patches - I don't really know how to do that, but there are quite a few tutorials on the Internet. FreeBSD does have quite a few userland utilities, and patch(1) is one of them.
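For what it's worth, applying a diff to a copy of a port usually goes something like this (a sketch; the overlay path and diff file name are made up for illustration, and the -p strip level depends on how the diff was generated):
Code:
# copy the stock port into the overlay tree, then patch the copy
cp -R /usr/ports/x11/nvidia-driver /usr/local/overlay/x11/nvidia-driver
cd /usr/local/overlay/x11/nvidia-driver
patch -p0 < ~/nvidia-driver.diff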
 