Hello,
Previously I tried to use Ollama on FreeBSD with my Nvidia GPU and it did not work: it used the CPU only, both on the host and in a jail.
Recently I have been using llama.cpp (version 8182, 28 Feb 2026) in a jail with Nvidia GPU support without issue. I will dump some
notes for experienced users...