What AI client can I use in my FreeBSD 15 console?

Dear all:
I want to install an AI client on my PC; can you recommend something? Thanks.
When I run into problems I can ask the AI first, then ask in this forum, to save time. Thanks.
 
ollama and llama.cpp run in a terminal by default.

An Emacs interface like gptel can also be used.

There is also a port for claude-code.
 
I look for one now and then. The problem is that AI web services aren't going to interface with something nested in a text interface without any credit. They all seem to make it complicated. It's of course possible to just scrape the results from a webpage, but I'm not sure that's legal.

There should really be a decentralized solution like BitTorrent, but with communicating LLM nodes instead of movie and game sharing. I would contribute some local resources to it.
 
Dear cracauer@:

ollama-0.13.5_3 Run Llama 2, Mistral, and other large language models
ollama-hpp-0.9.4 Modern, Header-only C++ bindings for the Ollama API
I am not a professional. Which package do I need to install, and what is my next step? Please show me the details. Thanks.
 
Install the ollama package.

Then:
Code:
ollama run --verbose llama3.2:latest

What kind of CPU, GPU and RAM do you have?
 
CPU: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz (1382.40-MHz K8-class CPU), no discrete GPU, 16 GB RAM. Thanks.
 
Yes, it's a resource-intensive process as you can imagine, and usually requires the use of a discrete GPU. There are ways to use the "cloud" if you only have an iGPU at your disposal. Obviously, that means it's not "local" anymore. There are tutorials for that, but I have no idea if this would work on FreeBSD or if this requires Linux-specific software.
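To get a feel for what fits in 16 GB of RAM, here is a back-of-the-envelope estimate (my own rule of thumb, not a figure from the ollama or llama.cpp docs): weight memory is roughly parameters × bits-per-weight ÷ 8, plus ~20% for the KV cache and runtime overhead, assuming Q4-style quantization at about 5 bits per weight.

```shell
# Rough RAM estimate (in MB) for running a quantized model on CPU.
# The constants are rules of thumb, not exact figures.
model_ram_mb() {
  params_b=$1          # model size, in billions of parameters
  bits=${2:-5}         # ~5 bits/weight for Q4_K_M-style quantization
  echo $(( params_b * 1000 * bits / 8 * 12 / 10 ))
}

model_ram_mb 3   # a 3B model: prints 2250 (~2.2 GB)
model_ram_mb 8   # an 8B model: prints 6000 (~6 GB)
```

By that estimate a 3B or even 8B model fits comfortably in 16 GB, but without a discrete GPU, expect slow token rates.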
 
So you need to find a cloud AI provider that has an API and use it through ollama. You already know that Anthropic doesn't like your country; what about OpenAI?
 
The ollama port has Vulkan issues on my setup; it just won't load the libggml-vulkan that is installed.

I have about six times as many cores as the OP's machine, and CPU inference is still slow. The pace of responses makes a 300 bps serial connection feel speedy.

llama.cpp works OK.

We also have the py-aider port. I'm not keen on using it; it is legacy software at this point.
 

ollama can run LLMs that are in the cloud.
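A sketch of what that looks like, assuming a recent ollama release that supports cloud-hosted models via `ollama signin` and `-cloud` model tags (the model name here is only an example; check which tags your version actually offers):

```shell
# Sign in once, then run a model that executes on Ollama's servers
# instead of locally -- no GPU or large amounts of RAM needed.
ollama signin
ollama run gpt-oss:120b-cloud "Explain FreeBSD jails in one paragraph."
```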
 
Hi fff2024g, if you want to work with AI inside a terminal, try Aider; it's in the py311-aider_chat port. There is also Copilot in the github-copilot-cli port, Codex CLI partially works, and there are also PyPI LLM, Junie CLI, and others. You can connect some of them to a free LLM API account on Groq or OpenRouter, for example.

I work with Codex on another platform. It's polished, but not free. Aider is a monster; I couldn't stand it and started developing a custom shell script last week. It's 13 KB in size, can do one-shot requests and simple chats, and should work in any POSIX shell. It requires only curl and jq and an LLM API account, local or remote. I'm willing to share the script on request.
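For anyone curious what such a script looks like, here is a minimal one-shot sketch of the same idea (this is not karel's script; the `LLM_API_URL`, `LLM_API_KEY`, and `LLM_MODEL` environment variables are my own naming, and the request shape follows the widely used OpenAI-compatible /v1/chat/completions convention, not any one provider's docs):

```shell
#!/bin/sh
# One-shot LLM query using only curl and jq, against any
# OpenAI-compatible chat API (local ollama, Groq, OpenRouter, ...).
prompt=$1

# Build the request body with jq so quoting/escaping is handled safely.
body=$(jq -n --arg p "$prompt" --arg m "${LLM_MODEL:-llama3.2}" \
  '{model: $m, messages: [{role: "user", content: $p}]}')

curl -s "$LLM_API_URL/v1/chat/completions" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$body" | jq -r '.choices[0].message.content'
```

Point `LLM_API_URL` at `http://localhost:11434` for a local ollama instance, or at a cloud provider's base URL with the matching key.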
 
Dear karel:
I will try it. Thanks.
 
Dear karel:

"Select a credit amount

Credits are used to pay for AI usage. They can be spent on any model, provider, or plugin on OpenRouter."

I have no way to pay for the credits from China.... Do you know how to activate my Chinese bank card to pay for this? Thanks.
 