How to run Stable Diffusion WebUI on FreeBSD...

Hello everyone.

I'm trying to install the AUTOMATIC1111 WebUI for Stable Diffusion inside my /compat/ubuntu distro using the FreeBSD linuxulator. You can find the repository for this tool here:


At some point during the installation, I have to run:

Code:
$ python3 ./launch.py

Python 3.8.10 (default, Mar 15 2022, 12:22:08)
[GCC 9.4.0]
Commit hash: 360feed9b55fb03060c236773867b08b4265645d
Installing torch and torchvision
Traceback (most recent call last):
  File "./launch.py", line 294, in <module>
    prepare_environment()
  File "./launch.py", line 209, in prepare_environment
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "./launch.py", line 73, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "./launch.py", line 49, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "/usr/bin/python3" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout: <empty>
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
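
For reference, the same check that launch.py runs can be executed by hand, which is quicker for experimenting than re-running the whole installer each time. This is simply the command from the traceback above, with the assert swapped for a print so it reports True/False instead of raising:

Code:
# the CUDA availability check from launch.py, run standalone
/usr/bin/python3 -c "import torch; print(torch.cuda.is_available())"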

The error says that Torch is not able to detect my GPU. I don't know the reason; I'm not a developer, but I can make some assumptions. Maybe Torch does not detect CUDA. However, I know from the past that shkhln was able to get Blender to detect CUDA natively on FreeBSD. The whole procedure is explained here:


In summary, if I want Blender to detect CUDA, I should launch it with this command:

Code:
./nv-sglrun blender

In the same way, I thought the same method could be applied to python3 running launch.py, so I tried something like this:

Code:
mario@marietto:/home/marietto/Desktop/New/CG/CUDA/libc6-shim/bin # ./nv-sglrun /compat/ubuntu/usr/bin/python3 /compat/ubuntu/home/marietto/stable-diffusion-webui/./launch.py

Unfortunately I got the same error. At this point I imagine that launch.py needs to be modified in some way to use the GPU.
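
Before editing launch.py, it may be quicker to wrap just the one-line Torch check in nv-sglrun, the same way as with blender above. This is only a sketch combining the wrapper with the check from the traceback, using the paths from my setup:

Code:
# assumption: nv-sglrun simply forwards the rest of the command line to the
# wrapped program, exactly as in the blender example
./nv-sglrun /compat/ubuntu/usr/bin/python3 -c 'import torch; print(torch.cuda.is_available())'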
 
Have you tried following the suggestion to skip the CUDA test during launch?
Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
What kind of error do you get if you try?
 
To do this you should add --skip-torch-cuda-test to COMMANDLINE_ARGS. That check is part of launch.py, but to change the arguments you need to edit this line in webui-user.sh:

# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS=""
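
For example, the edited line would look something like this (only the flag from the error message is shown; any other arguments are up to you):

Code:
# webui-user.sh: pass --skip-torch-cuda-test through to launch.py
export COMMANDLINE_ARGS="--skip-torch-cuda-test"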
 
No, I don't want to skip the CUDA test, because I want it to use CUDA. Without CUDA it makes no sense at all to use Stable Diffusion, since it is a graphics-intensive tool.
 
I'm sorry if this is a Linux problem, but it is still connected to the FreeBSD world, so I think I can ask here for a fix. I'm stuck here:

Code:
mario-freebsd@marietto:/home/marietto/Desktop/Files/Scripts/BSD # /compat/ubuntu/bin/cc --sysroot=/compat/ubuntu -m64 -std=c99 -Wall -ldl -fPIC -shared -o dummy-uvm.so uvm_ioctl_override.c
cc: error trying to exec 'cc1': execvp: No such file or directory

mario-freebsd@marietto:/home/marietto/Desktop/Files/Scripts/BSD # chroot /compat/ubuntu /bin/bash

mario-linuxulator@marietto:/# dpkg -l | grep gcc | awk '{print $2}'

gcc
gcc-10
gcc-10-base:amd64
gcc-8
gcc-8-base:amd64
gcc-9
gcc-9-base:amd64
libgcc-10-dev:amd64
libgcc-8-dev:amd64
libgcc-9-dev:amd64
libgcc-s1:amd64

mario-linuxlator@marietto:/# which gcc
/usr/bin/gcc

mario-linuxlator@marietto:/# which cc1
nothing

mario-linuxlator@marietto:/# apt-get install build-essential

Reading package lists... Done
Building dependency tree
Reading state information... Done
build-essential is already the newest version (12.8ubuntu1).

mario-linuxlator@marietto:/# whereis cc1
cc1:

mario-freebsd@marietto:/home/marietto/Desktop/Files/Scripts/BSD # find /compat/ubuntu -name cc1

/compat/ubuntu/usr/lib/gcc/x86_64-linux-gnu/9/cc1
/compat/ubuntu/usr/lib/gcc/x86_64-linux-gnu/10/cc1

mario@marietto:/home/marietto/Desktop/Files/Scripts/BSD # cp /compat/ubuntu/usr/lib/gcc/x86_64-linux-gnu/10/cc1 /compat/ubuntu/bin

mario-freebsd@marietto:/home/marietto/Desktop/Files/Scripts/BSD # /compat/ubuntu/bin/cc --sysroot=/compat/ubuntu -m64 -std=c99 -Wall -ldl -fPIC -shared -o dummy-uvm.so uvm_ioctl_override.c

In file included from /compat/ubuntu/usr/include/dlfcn.h:22,
from uvm_ioctl_override.c:3:
/compat/ubuntu/usr/include/features.h:461:12: fatal error: sys/cdefs.h: No such file or directory
461 | # include <sys/cdefs.h>
| ^~~~~~~~~~~~~
compilation terminated.

mario-freebsd@marietto:/home/marietto/Desktop/Files/Scripts/BSD # chroot /compat/ubuntu /bin/bash         
mario-linuxlator@marietto:/# apt-get install g++-multilib
....
OK

mario-linuxlator@marietto:/# exit
exit

mario-freebsd@marietto:/home/marietto/Desktop/Files/Scripts/BSD # /compat/ubuntu/bin/cc --sysroot=/compat/ubuntu -m64 -std=c99 -Wall -ldl -fPIC -shared -o dummy-uvm.so uvm_ioctl_override.c

cc: fatal error: -fuse-linker-plugin, but liblto_plugin.so not found
compilation terminated.

mario-linuxulator@marietto:/# update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 100 --slave /usr/bin/g++ g++ /usr/bin/g++-10 --slave /usr/bin/gcov gcov /usr/bin/gcov-10
update-alternatives: using /usr/bin/gcc-10 to provide /usr/bin/gcc (gcc) in auto mode

mario-linuxlator@marietto:/# gcc -print-prog-name=cc1
/usr/lib/gcc/x86_64-linux-gnu/10/cc1

mario-linuxulator@marietto:/# exit
exit

mario-freebsd@marietto:/home/marietto/Desktop/Files/Scripts/BSD # /compat/ubuntu/bin/cc --sysroot=/compat/ubuntu -m64 -std=c99 -Wall -ldl -fPIC -shared -o dummy-uvm.so uvm_ioctl_override.c

cc: fatal error: -fuse-linker-plugin, but liblto_plugin.so not found
compilation terminated.
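
(For completeness: an alternative I have not verified would be to point the gcc driver at the directory that already contains cc1 and liblto_plugin.so via -B, instead of copying cc1 into /compat/ubuntu/bin. This is only a sketch based on how gcc searches for its helper programs, not something tested here:)

Code:
# untested sketch: -B tells the gcc driver where to look for cc1 and liblto_plugin.so
/compat/ubuntu/bin/cc --sysroot=/compat/ubuntu \
    -B/compat/ubuntu/usr/lib/gcc/x86_64-linux-gnu/10/ \
    -m64 -std=c99 -Wall -fPIC -shared -o dummy-uvm.so uvm_ioctl_override.c -ldl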
 
Sorry, but the tutorial wasn't written for an Ubuntu chroot, so I'm not surprised you're having issues. If you follow the tutorial as written, it will work. I have been using PyTorch this way on several machines for months.
 
Hello.

He says to run:

Code:
# /compat/linux/bin/cc --sysroot=/compat/ubuntu -m64 -std=c99 -Wall -ldl -fPIC -shared -o dummy-uvm.so uvm_ioctl_override.c

but the instructions don't include the file dummy-uvm.so, only the file uvm_ioctl_override.c.

Does anyone know where I can get the missing file? Thanks.
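
(Judging from the command itself, dummy-uvm.so looks like the output of that compile step rather than a file shipped with the instructions: the -o flag names the file cc writes out, so a successful build should leave it in the working directory.)

Code:
# the -o flag names the output: compiling uvm_ioctl_override.c creates dummy-uvm.so
/compat/linux/bin/cc --sysroot=/compat/ubuntu -m64 -std=c99 -Wall -fPIC -shared \
    -o dummy-uvm.so uvm_ioctl_override.c -ldl
ls -l dummy-uvm.so   # should exist once the compile succeeds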
 
I made some progress with this project, but unfortunately I got another error later on, and that is where I'm stuck now. Long story short:

Verm: You could try adding CUDA_VISIBLE_DEVICES before the LD_PRELOAD and cycling through the numbers to see if it picks up your card. You may need to use the bus ID too, but try it this way first.
CUDA_VISIBLE_DEVICES=0 LD_PRELOAD=./dummy-uvm.so python3 -c 'import torch; print(torch.cuda.get_device_name(0))'

Change the 0 to 1,2,3,4 as required to see if it finds your device.
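
A small sketch of what that cycling could look like in practice (my wrapping of the command above into a loop, untested; it stops at the first index that doesn't raise an error):

Code:
# try device indices 0..4 until torch reports a device name
for i in 0 1 2 3 4; do
  CUDA_VISIBLE_DEVICES=$i LD_PRELOAD=./dummy-uvm.so \
    python3 -c 'import torch; print(torch.cuda.get_device_name(0))' && break
done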

Me: It didn't work. I tried from 0 to 4:

Code:
[marietto@marietto ~]$ bash
[marietto@marietto ~]$ source /compat/linux/home/marietto/Desktop/stable-diffusion/conda/etc/profile.d/conda.sh
[marietto@marietto ~]$ conda activate
(base) [marietto@marietto ~]$ conda activate pytorch
(pytorch) [marietto@marietto ~]$ CUDA_VISIBLE_DEVICES=0 LD_PRELOAD=/compat/linux/home/marietto/Desktop/stable-diffusion/dummy-uvm.so python3 -c 'import torch; print(torch.cuda.get_device_name(0))'

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/compat/linux/home/marietto/Desktop/stable-diffusion/conda/envs/pytorch/lib/python3.10/site-packages/torch/cuda/__init__.py", line 329, in get_device_name
    return get_device_properties(device).name
  File "/compat/linux/home/marietto/Desktop/stable-diffusion/conda/envs/pytorch/lib/python3.10/site-packages/torch/cuda/__init__.py", line 359, in get_device_properties
    _lazy_init()  # will define _get_device_properties
  File "/compat/linux/home/marietto/Desktop/stable-diffusion/conda/envs/pytorch/lib/python3.10/site-packages/torch/cuda/__init__.py", line 217, in _lazy_init
    torch._C._cuda_init()

RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
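
Given that error, a basic sanity check worth doing first (my assumption, not something suggested in the thread) is to confirm on the FreeBSD side that the nvidia driver is actually loaded and that its device nodes exist, before blaming the shim:

Code:
# is the nvidia kernel module loaded?
kldstat | grep nvidia
# are the driver's device nodes present?
ls -l /dev/nvidia*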
 
Another goal achieved, with the patient help of darkbeer. Many thanks, darkbeer.

[Attached screenshot: Screenshot_2023-01-20_12-37-52.jpg]
 