Folks: Let's get Claude Code native installer working for FreeBSD

Folks,

tl;dr Please comment, upvote, or ask about status on https://github.com/anthropics/claude-code/issues/30640, where FreeBSD is currently a zero-class citizen for their installer.

I use Claude Code and it has helped me reduce toil and learn FreeBSD a lot over the last year. Heck, I even (kinda) ported a NeXT-like desktop from Linux to FreeBSD with it. It's been a huge help for me. But recently, its new native installer works on every OS -- except FreeBSD. I opened an issue for this, but it was closed, I think, due to lack of community outcry.

I think we need to help CC be successful on our platform. I work with recent college grads and they are steeped in these tools. They're skeptical and wary of slop (yay), but this is integrated into their lives like Insta(gram). This isn't going away, and knowing smirks about tossing it a Gordian-knot problem aren't going to turn back the tide. My fear is that an inability to support these tools will lead to irrelevance.

I've published a blog post with my fuller logic, but my short, humble request is that many thumbs up or comments on this issue can help get this delivered.
 
Gonna install ollama as I have no money ...
Maybe try the Chinese copies:


 
My fear is that an inability to support these tools will lead to irrelevance.
These are no Gordian-knot problems for AI. It is going to provide a statistical answer. It doesn't possess curiosity or common sense. If the data is in the model, or there is enough data online to feed it, it may provide a statistically sound answer.

And that's exactly what is happening here. Claude has assisted you in porting something because it is knowledgeable about FreeBSD, and there are a lot of fresh resources online about FreeBSD.

So, it all works out because FreeBSD is open source and the knowledge is open source.
We don't have to "specially fit" the OS for the purpose of being scraped by LLM agents.

but this is integrated into their lives like Insta(gram).

This is a very bad analogy to make, though unfortunately an apt one. Instagram is a service literally fit for children and imbeciles.
We have a number of AI threads here; maybe it's worth discussing the impact of this kind of 'technology' on the youth there.

They are automating parts of their early-career grind. Which is very, very bad. You need to get on with the grind and learn how to learn, from your own mistakes and successes. You cannot outsource that to a non-deterministic tool, not that early in a career anyway.
 
Interesting typo
InterestingTypo.jpg
 
Until now, all AI development-aid systems that I have seen take more time than doing the development for real. I haven't looked for some time. Can it already make a working QBasic Tetris game?
 
Now, now: the LLM, with access to all of GitHub and a big team of people wrangling its output, can produce a C compiler that can't handle `hello world`, and whose executables are 100,000 times slower than GCC's without optimization.

Clearly, the future of technology is here.
 
From what I understand, you need a premium account to use Claude Code; then, by connecting to its server API, you can prompt from your terminal, and it can perform some actions on your behalf on your computer, which is pretty scary.
 
it's like we keep saying: if we wanted to type words into a text box and get replies that glad-handedly and incorrectly tell us how to do our job, we would post on Hacker News
 
From what I understand, you need a premium account to use Claude Code; then, by connecting to its server API, you can prompt from your terminal, and it can perform some actions on your behalf on your computer, which is pretty scary.

It is more complex. Google's Gemini-powered Antigravity IDE wiped out a whole filesystem, famously.

If you run it inside Emacs via gptel, my understanding is that the LLM cannot access anything outside. I have no idea what the native Claude Code app does here.

But either way I am running LLMs on their own OS install.

And the premium account is relative. The $17/month account can be used up pretty quickly, and once there you are instantly north of $100/month. It depends on how many tokens you feed in and get out. Coding with LLMs can be noisy, so the tokens go quick.
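As a rough back-of-the-envelope illustration of why coding sessions chew through a token budget so fast: agents re-send large chunks of file context on every turn, so input tokens dominate. The prices and per-turn token counts below are illustrative placeholders I've assumed, not Anthropic's actual rates or anyone's measured usage.

```python
# Sketch of monthly LLM coding cost; all numbers are hypothetical.
INPUT_PER_MTOK = 3.00    # assumed $/million input tokens
OUTPUT_PER_MTOK = 15.00  # assumed $/million output tokens

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one session at the assumed per-million-token rates."""
    return (input_tokens / 1e6) * INPUT_PER_MTOK + \
           (output_tokens / 1e6) * OUTPUT_PER_MTOK

# Hypothetical day: 30 agent turns, each re-sending ~20k tokens of
# context and producing ~2k tokens of output.
daily = session_cost(30 * 20_000, 30 * 2_000)
monthly = 22 * daily  # roughly 22 working days
print(f"per day: ${daily:.2f}, per month: ${monthly:.2f}")
```

Even with these modest assumptions the monthly total lands well above a cheap subscription tier, and heavier context (big files, long histories) scales it up linearly.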
 
The $17/month account can be used up pretty quickly, and once there you are instantly north of $100/month. It depends on how many tokens you feed in and get out. Coding with LLMs can be noisy, so the tokens go quick.
I hadn't thought about that. Was getting ready to give it a try soon so that's good to know.
Do they automatically bill you for that or do they just shut you down till you pay?
 
I hadn't thought about that. Was getting ready to give it a try soon so that's good to know.
Do they automatically bill you for that or do they just shut you down till you pay?

I just started with Anthropic myself, so I do not know yet. They certainly don't bump you up to the higher plan automatically.

In GitHub Copilot (now cancelled) I had surprise extra charges from when I apparently selected premium models to answer some queries.

Around the new year Anthropic had a dustup about billing that some users found surprising. Worth looking up. As I said, coding use of LLMs burns through quite a few tokens compared to just doing chitchat.
 