dear cracauer@: ollama and llama.cpp by default run in a terminal.
An Emacs interface like gptel can also be used.
There is also a port for claude-code.
CPU: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz (1382.40-MHz K8-class CPU), no GPU, 16 GB RAM. Thanks.
You mean my PC is not that powerful, right? Thanks.
Local LLMs will be no fun on that.
The ollama port has Vulkan issues on my setup; it just won't load the libggml-vulkan that is installed.
I have about six times more cores than the OP's machine, and CPU inference is still slow. The pace of the responses makes a 300 bps serial connection feel speedy.
llama.cpp works OK.
We also have a py-aider port. I'm not keen on using it; it's legacy software at this point.
dear karel: Hi fff2024g, if you want to work with AI inside the terminal, try Aider; it's in the py311-aider_chat port. There is also Copilot in the github-copilot-cli port, Codex CLI partially works, and then maybe PyPI LLM, Junie CLI and others. You can connect some of them to a free LLM API account on Groq or OpenRouter, for example.
I work with Codex on another platform. It's polished, but not free. Aider is a monster; I couldn't stand it and started developing a custom shell script last week. It's 13 KB in size, can do one-shot requests and simple chats, and should work in any POSIX shell. It requires only curl, jq and an LLM API account, local or remote. I'm willing to share the script on request.
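For anyone curious what such a curl-plus-jq script looks like, here is a minimal sketch of a one-shot request in POSIX sh (not the poster's actual script). The endpoint URL, model name and OPENAI_API_KEY variable are assumptions; any OpenAI-compatible chat-completions API should work, e.g. Groq, OpenRouter, or a local llama.cpp server:

```shell
# Minimal one-shot LLM request in POSIX sh, using only curl and jq.

# Build the JSON request body; jq -n with --arg safely escapes the prompt.
make_body() {
    jq -n --arg m "$1" --arg p "$2" \
        '{model: $m, messages: [{role: "user", content: $p}]}'
}

# Send the prompt and print the assistant's reply.
# Assumes OPENAI_API_KEY is set; API_URL/MODEL can be overridden.
ask() {
    make_body "${MODEL:-gpt-4o-mini}" "$*" |
    curl -s "${API_URL:-https://api.openai.com/v1/chat/completions}" \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -H "Content-Type: application/json" \
        -d @- |
    jq -r '.choices[0].message.content'
}

# Usage: ask "explain this error message"
```

Letting jq build the request body is the important trick: hand-quoting user input into JSON inside a shell string is where these scripts usually break.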