32GB... but I wasn't asking about what I can run, I was wondering which ones you feel are best at coding.

This might depend on how much RAM you have on your PC.
import ollama

# Create a client to interact with the Ollama server
client = ollama.Client()

# This is a simple chat-style interaction
# The 'chat' method is recommended for most uses
response = client.chat(
    model='phi3',
    messages=[
        {'role': 'user', 'content': 'What is the capital of France?'}
    ]
)

# Print the model's response
print(response['message']['content'])

# You can also use the 'generate' method for a single-turn completion
response = client.generate(
    model='phi3',
    prompt='What is the capital of Germany?'
)

# The response from 'generate' is shaped differently: the text is under 'response'
print(response['response'])
You know how awesome it'll be when we can have one small model specifically trained on everything FreeBSD. Because it'll be so highly specialized, it'll be small in size, performant, and brilliant for tuning FreeBSD - and you can ship it with FreeBSD, have it be literally lightning fast, and maybe even integrate it into the shell. SirDice, yes?

I use ollama ...
ollama list | grep -i code
starcoder:1b 77e6c46054d9 726 MB 3 weeks ago
starcoder2:15b 21ae152d49e0 9.1 GB 3 weeks ago
codegemma:7b 0c96700aaada 5.0 GB 3 weeks ago
codegeex4:9b 867b8e81d038 5.5 GB 3 weeks ago
starcoder2:3b 9f4ae0aff61e 1.7 GB 3 weeks ago
starcoder:3b 847e5a7aa26f 1.8 GB 3 weeks ago
codegemma:2b 926331004170 1.6 GB 3 weeks ago
dolphincoder:7b 677555f1f316 4.2 GB 3 weeks ago
codellama:13b 9f438cb9cd58 7.4 GB 3 weeks ago
yi-coder:1.5b 186c460ee707 866 MB 3 weeks ago
qwen2.5-coder:3b f72c60cabf62 1.9 GB 3 weeks ago
codeqwen:7b df352abf55b1 4.2 GB 3 weeks ago
starcoder:7b 53fdbc3a2006 4.3 GB 3 weeks ago
starcoder2:7b 1550ab21b10d 4.0 GB 3 weeks ago
qwen2.5-coder:1.5b d7372fd82851 986 MB 3 weeks ago
deepseek-coder:6.7b ce298d984115 3.8 GB 3 weeks ago
sqlcoder:15b 93bb0e8a904f 9.0 GB 3 weeks ago
dolphincoder:15b 1102380927c2 9.1 GB 3 weeks ago
codellama:7b 8fdf8f752f6e 3.8 GB 3 weeks ago
deepseek-coder:1.3b 3ddd2d3fc8d2 776 MB 3 weeks ago
qwen2.5-coder:0.5b 4ff64a7f502a 397 MB 3 weeks ago
sqlcoder:7b 77ac14348387 4.1 GB 3 weeks ago
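If you'd rather poke at one of these from Python instead of the CLI, the same client shown earlier works. This is just a minimal sketch, assuming qwen2.5-coder:3b from the list above is already pulled locally; swap in whichever model you prefer:

import ollama

# Minimal sketch: point the same Ollama client at one of the local coder models.
# 'qwen2.5-coder:3b' is taken from the list above; any pulled model name works here.
client = ollama.Client()

response = client.chat(
    model='qwen2.5-coder:3b',
    messages=[
        {'role': 'user', 'content': 'Write a short POSIX sh script that counts the files in /etc.'}
    ]
)

# The answer text sits under message/content, just like in the phi3 example.
print(response['message']['content'])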
Because it was essentially asked "how long is a string" (but at least it got most of the way through chapter one of SICP before it found a somewhat suitable method for calculating a fib). You sort of have to know where you want to go before you start blindly asking it to produce code along with unit tests.

It shows one thing I dislike about LLM code so far: no edge cases. Neither the function code nor the unit tests deal with out-of-range input values such as -1.
(define (fib n)
  (fib-iter 1 0 n))

(define (fib-iter a b count)
  (if (= count 0)
      b
      (fib-iter (+ a b) a (- count 1))))
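Purely to illustrate the missing guard, here's a rough sketch in Python (staying with the language of the earlier example) of the kind of out-of-range check neither the generated function nor its unit tests had; the names and error message are mine, not the model's:

def fib(n):
    # The edge case the generated code skipped: reject out-of-range input such as -1.
    if n < 0:
        raise ValueError("fib expects a non-negative integer, got %r" % n)
    # Same iterative approach as the SICP version above.
    a, b = 1, 0
    for _ in range(n):
        a, b = a + b, a
    return b

# A matching unit test would then be a one-liner, e.g. with pytest:
#     with pytest.raises(ValueError):
#         fib(-1)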