Ollama setup on FreeBSD to run LLMs, with Emacs and gptel as a front end

Ollama runs large language models on your computer, including deepseek-r1, deepseek-coder, mistral and zephyr.

In this video I install Ollama on FreeBSD 14.2 (quarterly release) on a 2019 Dell XPS 15 with an NVIDIA GeForce GTX 1650 GPU and 16 GB of RAM, using the 550.127.05 Nvidia driver.


I also cover running the ollama server, pulling and running models, and setting up Emacs with the gptel package as a front end.

As well as taking a look at Google Gemini and how to create an access token you can use with an authinfo file in Emacs.
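For reference, gptel reads API keys from Emacs's auth-source, so a Gemini key can live in your authinfo file. An entry typically looks like the line below — the host and login values are what gptel's Gemini backend conventionally looks up, so verify them against your gptel version:

```
machine generativelanguage.googleapis.com login apikey password YOUR-GEMINI-API-KEY
```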

ollama

ollama github

ollama notes on github

ollama gpu's

gptel notes on github

emacs init.el

ollama-server
 
I had ollama running a few months back, and for me the process was fairly simple.

This is using my shell framework, but you should be able to get the idea of what it's doing:

lib git:install/net/download.sh

_ollama_install_linux_version() {
    _download https://ollama.ai/download/ollama-linux-$_ARCHITECTURE
    _download_install_file $_CONF_INSTALL_BIN_PATH/ollama
}

_ollama_install_from_source() {
    # https://github.com/ollama/ollama/issues/1102

    cd /tmp
    git clone https://github.com/prep/ollama.git
    cd ollama && git checkout feature/add-bsd-support
    go122 generate ./...
    go122 build .

    # install ollama
    cp ollama $_CONF_INSTALL_BIN_PATH/ollama

    # add ollama to the list of installed files
    printf '%s\n' "$_CONF_INSTALL_BIN_PATH/ollama" >> $_INSTALLED_FILES

    cd ..
    rm -rf ollama
}

which ollama > /dev/null 2>&1 || {
    _ollama_install_from_source
    _sudo chmod +x $_CONF_INSTALL_BIN_PATH/ollama
}
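A side note on that guard: `which` isn't specified by POSIX, while `command -v` is, so a portable version of the same check could look like this (the `have_cmd` helper name is mine, not from the script above):

```shell
# have_cmd: portable check that a command exists on PATH,
# same intent as the `which ollama` guard (helper name is hypothetical)
have_cmd() {
    command -v "$1" > /dev/null 2>&1
}

# example: sh itself should always be found
if have_cmd sh; then
    echo "sh found"
fi
```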

#!/bin/sh

_ollama_install_models() {
    # this generates a new private key, we might want to make a backup of it
    # ensure 127.0.0.1:11434 is accessible
    ollama serve > /dev/null 2>&1 &
    local server_pid=$!

    for _LLAMA_IMAGE in $_CONF_LLAMA_OLLAMA_MODELS; do
        ollama pull $_LLAMA_IMAGE
    done

    # stop the server
    kill $server_pid
}

which ollama > /dev/null 2>&1 && _ollama_install_models
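One wrinkle in `_ollama_install_models`: the first `ollama pull` races the backgrounded `ollama serve`, which may not be listening yet. A hedged sketch of a readiness poll — the function name, default URL and retry count are my assumptions, and it assumes curl is installed and the server is on the default 127.0.0.1:11434:

```shell
# _ollama_wait_ready: poll the Ollama HTTP endpoint until it answers,
# so `ollama pull` isn't raced against a server that is still starting.
# (Hypothetical helper; URL and retry count are assumptions.)
_ollama_wait_ready() {
    _url=${1:-http://127.0.0.1:11434/}
    _max=${2:-30}
    _tries=0
    while [ "$_tries" -lt "$_max" ]; do
        curl -sf "$_url" > /dev/null 2>&1 && return 0
        _tries=$((_tries + 1))
        sleep 1
    done
    return 1
}
```

Calling `_ollama_wait_ready` between `ollama serve ... &` and the pull loop would make the function more reliable on slower machines.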

However, I did the same thing recently with go123, and ollama does not run for me.

EDIT: I forgot to mention, these are the packages I am installing as well:
git
go122
cmake
vulkan-headers
vulkan-loader
 
Hmm, so ollama does not run on go123, unfortunately, for whatever reason. Furthermore, I have it installed on my laptop, which has a decent GPU, and ollama doesn't seem to be utilizing it.

I have the packages listed above installed, and I just double-checked: I have libvdpau installed as well.

EDIT #1: I checked the GPU compatibility list (RTX 3000), and it doesn't list a compatible compute capability :(. Does that mean I'm out of luck?

EDIT #2: It does pick up the nvidia card via vulkan. I can only run mistral, but even that seems to crap out; llama3.2 doesn't want to run because it expected 255 tensors but got 254?
 
My gpu wasn't listed either,
but it does work.

Dell XPS 15 (2019)
16 GB of RAM

I can run deepseek-r1:7b,
but I did find some 7b models that wouldn't work.

nvidia setup

Code:
Yes Master ? nv-sglrun nvidia-smi
shim init
Fri Feb 21 18:26:33 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05             Driver Version: 550.127.05     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1650        Off |   00000000:01:00.0 Off |                  N/A |
| N/A   55C    P8              2W /   50W |       1MiB /   4096MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
 