A Developer's Guide to Generative AI in FreeBSD

It seems very reasonable and it provides very good information.

In sum it says: AI is an unreliable tool, but it can be very useful sometimes. You may use it, but not blindly. In any case, you remain responsible for the results.

Also:
ThisIsAGoodThread.png
 
I played a lot with AI; some AIs are good only for Python, others for other stuff. The main rule I learned: they really need a stable API, otherwise crap comes out and they start to hallucinate.


PS, Monthly Subscription Plans

Tool              Monthly Price (Individual)   Best For
Claude Pro        $20/month                    Writing and high-level reasoning.
ChatGPT Plus      $20/month                    All-around tasks and custom GPTs.
Gemini Advanced   $19.99/month                 Google ecosystem users (includes 2TB storage).
GitHub Copilot    $10/month                    Dedicated coding inside your editor.
 
Note that that site is not official FreeBSD.
I'm aware of that. But I highly value cracauer@'s opinions, thoughts, and ideas.


Frankly, I actually don't know about OpenBSD - it was more of a threat to leave FreeBSD if it went that way, and I rate those Open guys as more "fundamentalist/conservative" in a good way, so I just reckon those guys already did, or will.
If Linus calls Theo de Raadt "complicated", to me that's a good rating - a sign of somebody standing for a point, and not being brought round to be open to "some anarchic experiments anybody can commit to". (There are points for doing it this way, and points against. Or: you can always argue pro or contra. But doing it like everybody else ("swim with the swarm") is always the max BS anyway. Nobody needs a copy of the swarm when there is the original to join.)
However, if NetBSD did so (thx!), I may switch to it, then.
But so far I'm convinced FreeBSD will not give in to it yet.
The pressure of the hype is there, sure. "Everybody is doing it. Why not you? Caveman!"

AI is a technology useful for solving and improving certain things, if it's used with expertise and within its limits - no question.
I use it myself - controlled, and within limits, of course.
But the heaviest downsides are already becoming apparent.
As you already know yourself, AI can be very nice for relieving you of dull routine jobs - that's exactly what a machine, a computer, is all about: you define the picture, but you no longer need to do all the strenuous manual painting work to create it.
Ivan Turkovic has written some very intelligent and worthwhile articles about exactly that - computers/programming/software development - very recommendable reading. (He's not an AI opponent. I am - within limits and conditions. He's more professional than me. I may lack words, and maybe I'm also not wrong. Anyway, read Ivan {professional}! [and me {amateur}])
His texts helped me substantiate what I got from my own thoughts (even if he may not have intended to. But he thinks clearly, and knows software, programming, software development, engineering, AI, and its impact on all of it.)

There is a systematic bug within this AI stuff. (My point, not Ivan's.)

You don't need to be an expert on AI; you simply need to know how today's AI works in principle. To know that they become stupid if they are not trained (corrected) continuously, because they cannot tell right from wrong - they cannot think; they cannot understand.
And they cannot know everything. So either way, they will make mistakes. Naturally. OK.
BUT:
What happens when they make mistakes, cannot tell right from wrong - cannot recognize that they made a mistake themselves - and are not corrected by someone who can?
They go stupid.
More and more stupid.
They disintegrate.
That would be no problem if there were people who know better to correct them. But exactly there is the trap.

We saw it with pocket calculators in the 1980s. Math teachers were absolutely right to rebel against introducing them in elementary classes - but they were hushed. Result: the capability for fundamental, most basic calculation dropped to almost zero. (No? So, what's 13 times 7? Without a calculator, in your head only! Can't do it? My point exactly.)
We saw it with automatic spell checking. Since we write texts on computers with automatic spell checking, the amount of badly written text has exploded.
We Germans are known to be overcorrect. That's right. We are trained to be extremely scrupulous bean counters about spelling and grammar in any text of our complicated language. If there was a simple typo in one of the nationwide newspapers, it was worth a mention in the 8 p.m. news. Until ~ the 1990s. Today you are happy if you can read a single article without any flaw at all.
Point is: people abuse assistance as crutches - they just don't walk themselves at all anymore.
Now, back to AI.
Today's older AI users - the grey beards - know programming by writing code themselves, because they did it - themselves. So what they use AI for is to let it produce the code, then review it with their expertise.
OK.
Now look at somebody learning programming today.
Why learn how to code? The AI does it for you.
You just ask it to write some 3D shooter in Brainfuck for as long as it takes until the shit runs the way you want it - right?
What would you review? You don't understand the code, because you never learned it. You don't know the language.
Now the computer makes a mistake. And nobody is there to correct it, because the greybeards who put holes into punch cards have died, laughed at (for putting holes into punch cards), and you cannot handle it, because you never learned it, because you never had to, because the computer did it all for you.
As long as it worked reliably.
So, what once was a tool for the greybeards who developed it becomes a crutch, then a substitute, then a replacement.
Nobody needs to gain knowledge anymore, because the tool provides it.
But the tool needs knowledge to stay reliable.
So, the tool relies on the human's knowledge to stay usable, while the humans trust the tool to provide the knowledge, while at the same time it goes nuts because it's not being corrected (trained) by human knowledge.
GAME OVER

I am convinced, this current AI hype will be one of the largest bubble bursts ever.

If there is no AI-free computer space (OS/programming) left, I'll toss all that silicon - which I have focused on all my life since I was 12 - completely into the garbage, and grow plants, bake bread, go fishing, or whatever, as long as it has nothing to do with computers at all.
Those AIs can talk to each other as long as I don't have to engage with that BS.

AI is very useful to increase production.
But I don't need to increase production.
I don't want to increase production.
The problem our society and our planet suffer from is that there already is way too much production.
Why increase it even more?
What do you wanna buy in a desert?
What we lack is quality.
Quality needs understanding.
AI cannot help you on this.
So where is the point of having AI increase production we don't need, while what we need is to increase quality, where AI cannot help?


There has to be some AI-free space left. Where it's not about producing as much as possible in as short a time as possible, but where an intelligent person can simply breathe.
And as I said above, there is a good chance that non-AI-infested OSs may win in the middle term.
Because I'm not the only human not interested in giving myself up for even more production,
but simply wanting to breathe some high-quality air, no matter how much more production of air there could be if we unemploy ourselves.
 
AI exists. People use it because they find it convenient. Using it, for instance, to help you document your project is very convenient. You may be a good programmer but very bad at documenting and also hate doing it. So you use AI to make your life better (you don't have to do something you hate) and everybody else's lives better (they get good documentation, instead of the usual crap you make, because you are bad at it and hate it). Of course, you read the documentation to make sure it doesn't say anything untrue, and you correct it if it does. Here ends the fable of the programmer and the AI.


On a tangent, I disagree with Alain De Vos's table column "Best for." ChatGPT is infinitely better than Claude at writing (and I know a lot about writing) and much better at general reasoning. Claude just has good marketing. They've found a good positioning strategy and exploit it, but it's just marketing. (This is my opinion, of course).
 
Let's go with another example of AI being useful and making the life of the programmer better. You have a long piece of code that is not working and you want to debug it by tracing it using a certain log system so you see in the log how it's working in reality, step by step, but adding the log instructions is painful and boring, and it additionally needs adding preprocessing code so you can compile with or without the traces. So you say to the AI, "Yo, AI, my log system works like this. Take this code and add traces at the beginning and ending of each function, loop, and conditional statement. Do it so they are compiled if the preprocessor variable DEBUG exists. In a function, each trace must include the values of all local variables and parameters." You can easily verify, by comparing, that what the AI has added has not touched your code, and now you can debug it comfortably.
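The kind of mechanical entry/exit tracing described above can be sketched in Python (the post is about C, where the traces would sit behind `#ifdef DEBUG` guards; here a `DEBUG` flag and a `trace` decorator, both hypothetical names, play that role):

```python
# Sketch of mechanical function tracing, analogous to the C-preprocessor
# scheme described in the post. DEBUG, trace, and log_lines are hypothetical
# names, not from any real log system.
import functools

DEBUG = True          # stands in for the preprocessor variable DEBUG
log_lines = []        # stands in for the real log system

def trace(func):
    """Log entry and exit of a function, including its arguments and result."""
    if not DEBUG:
        return func   # "compiled out": no overhead when DEBUG is off
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log_lines.append(f"ENTER {func.__name__} args={args} kwargs={kwargs}")
        result = func(*args, **kwargs)
        log_lines.append(f"EXIT  {func.__name__} -> {result!r}")
        return result
    return wrapper

@trace
def gcd(a, b):
    """Euclid's algorithm, used here only as something to trace."""
    while b:
        a, b = b, a % b
    return a
```

Calling `gcd(12, 8)` returns 4 and leaves matching ENTER/EXIT lines in `log_lines`; the point, as in the post, is that adding such traces is dull, regular work that is easy to verify by diffing, since the traced function's own body is untouched.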
 
Meh, my distrust of AI would have me doubt any of its output for debugging something I wrote; if my code's complex enough I can't figure out what's broken it might benefit from a rewrite :p
AI haters are AI haters. It's an almost irrational stance. I know I will not convince AI haters. I've just given two real examples of why many people consider AI useful, for those who have an open mind. If I were to discuss your argument about the rewrite, I'd say that you have probably not written any program of tens of thousands of lines (or more) and have not programmed professionally, because you cannot happily decide to rewrite and rewrite and rewrite. On many occasions, errors depend on very subtle things and need painstaking debugging work. Now you can anger me by posting something dismissive and full of emojis, as you usually do. It's very effective. I get angry every time.
 
To get back a bit more on topic: I like what Linus says about the issue: those who want to use LLMs to write code they contribute will do so either way. If it is not allowed they will just not admit it.
 
A reasonable way to look at AI tools is similar to what Linus Torvalds has said: AI can be useful as a tool, but it does not replace developer judgment. Generating code is often the easy part; the hard part is understanding the problem, designing the architecture, and maintaining the system over time.

In that sense, AI can help with boilerplate, explanations, or routine tasks, but the developer remains responsible for every line of code. What ultimately matters is not whether a tool was used, but whether the author understands the code and whether it meets the project's standards for correctness, security, and maintainability.
 
I have still to encounter a situation where any kind of AI does things better than plain functional software. There's no application. What do you need a large obscure function for? There are probably small transparent functions that can do everything you wanted without needing the entire family.
 
To get back a bit more on topic: I like what Linus says about the issue: those who want to use LLMs to write code they contribute will do so either way. If it is not allowed they will just not admit it.
Yes, that's the only constant: people will lie. Until we, as the human race, have truly independent AI (the Culture series, for example), it's the meat bag that prompted the LLM (or generic assisted-development solution) that is responsible for the corresponding code (that means: ownership, reviewing, going to lengths to ensure it does not break other code, maintaining said code, handling bugs, etc.).

The linked document is good at explaining all of the above. The tone is nice instead of authoritative. One possible issue, regarding maintaining the document, is that it focuses a bit too much on what exists now, so it may become outdated fast.

Also, maybe the way forward for FreeBSD would be a local generic programming LLM (i.e. based only on known programming courses, good practices and so on) that is trained exclusively on FreeBSD code. That should help with the issue of contamination by GPL and other licenses. On the other hand, the cost of training is probably out of the question.
 
I’ve run into a real issue with generative AI: the models think too much. Sure, the reasoning capacity is better, but the problem is that GPT‑5.4, which I’m using, gets lost in details I didn’t ask for and misses key points. In the end, I’d rather have GPT‑4, even though it’s no longer available...

Right now I’m trying GPT‑OSS‑20B so I can run it locally. OSS‑20B is much faster than GPT‑5; it delivers clear, concise information without fluff, and is on par with Codex, except it can directly access the code repository and suggest PRs. I have sandboxed Codex in a separate repo, so I double-check AI PRs: Codex -> test branch -> main branch. GitHub is my test/development repo and Codeberg is the main/clean code.
 
Code:
# FreeBSD myfbsdhost 15.0-RELEASE-p4 FreeBSD 15.0-RELEASE-p4 releng/15.0-n281010-8ef0ed690df2 GENERIC amd64

# cd /usr/src && /opt/ai/new_digital_god_AI_or_just_tool_of_capitalists_class.sh --check --improve .

...

# shutdown -r now

...

# uname -a
Linux AIEnchancedWithBackdoorsHost 6.999.0-999-genericAI #999~24.04.2-AIbuntu SMP PREEMPT_DYNAMIC

wtf...
 
To get back a bit more on topic: I like what Linus says about the issue: those who want to use LLMs to write code they contribute will do so either way. If it is not allowed they will just not admit it.

Very much. And there is also the question of whether it's even ethical to consider whether it should be banned. Isn't it similar to, for example, deciding which editors developers are allowed to use? Like, it's none of our business, if it doesn't affect the result (which is a big if). Then again, on a very cynical note, forbidding the use of AI for contributions may be exactly what we need to make sure AI output is indeed indistinguishable from human-written code. ^^ In case of a ban, people will still use AI and will optimize for non-detection, leading to the kind of code we want - while if we don't ban it, then AI-isms will have to be accepted because "we said we accept AI code".

I think there is also an angle worth considering that we haven't yet in these discussions: is it OK to use CodeGen for problems where the AI is better than us? So far, the questions about using AI were all about speed, how it makes developers more productive, blah. But we're starting to reach a point where AIs are actually better than humans (which should not be a surprise, I guess, since we've seen that time and time again, like when AIs were trained in the past to play chess, StarCraft, Go, etc. and ended up beating champions).

I had sort of an epiphany yesterday night with that. I was refactoring C code managing D&D character sheets and wanted to move most of the logic to SQL. I hit a wall when it came to inventory, because it was a list of items, some of which could be containers holding other items (which was just a container_id ref on the contained item), which might themselves be containers, etc., and I wanted to render that from SQL as a JSON tree, with a container having an array of contained items. I could not do it in pure SQL. I asked Gemini Pro how it would do it. And it found a working solution, using an insane preprocessing stack with window functions and recursive CTEs. I could understand all of it. I took its query, read it line by line, having it explain everything along the way, and I ended up grasping it (discovering by the way how awesome recursive CTEs are, and how you can use them to implement map/reduce in SQL). But I knew I would never have been able to write it. Not now, not in ten years (even after seeing the query and understanding it, I won't be able to reproduce it by myself).

Should developers be prevented from leveraging that? It's not about speed or laziness here. It's about going further.

This doesn't change my stance that relying on AI to generate our code is a terrible idea, because it prevents us from developing skills and will lead to skill atrophy. Yet I now have to consider that sometimes it's also about deploying skills we would never have anyway…
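The container-walking part of the problem described above can be sketched with a recursive CTE via Python's bundled sqlite3 (the `item` table and its columns are hypothetical stand-ins; the actual Gemini query, with window functions and JSON assembly in pure SQL, was considerably more involved):

```python
# Minimal sketch of walking a container hierarchy with a recursive CTE,
# using Python's bundled sqlite3. The `item` table and column names are
# hypothetical; the original query did the JSON nesting in pure SQL.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE item (
    id INTEGER PRIMARY KEY,
    name TEXT,
    container_id INTEGER REFERENCES item(id))""")
con.executemany("INSERT INTO item VALUES (?, ?, ?)", [
    (1, "backpack", None),   # top-level container
    (2, "pouch",    1),      # container inside the backpack
    (3, "sword",    1),
    (4, "coin",     2),      # inside the pouch
])

# Walk the containment tree from the roots down, carrying depth along.
rows = con.execute("""
    WITH RECURSIVE tree(id, name, container_id, depth) AS (
        SELECT id, name, container_id, 0 FROM item WHERE container_id IS NULL
        UNION ALL
        SELECT i.id, i.name, i.container_id, t.depth + 1
        FROM item i JOIN tree t ON i.container_id = t.id
    )
    SELECT id, name, container_id, depth FROM tree ORDER BY id
""").fetchall()

# Assemble the nested structure host-side (the post's query did this part
# in SQL too; doing it in Python keeps the sketch short).
nodes = {r[0]: {"name": r[1], "contains": []} for r in rows}
roots = []
for item_id, _name, container_id, _depth in rows:
    (nodes[container_id]["contains"] if container_id else roots).append(nodes[item_id])
```

After this runs, `roots` holds one tree: the backpack containing the pouch and the sword, with the coin nested inside the pouch, i.e. the container/contained shape the post wanted rendered as JSON.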
 
Also, maybe the way forward for FreeBSD would be a local generic programming llm (i.e. based only on known programming courses, good pratices and so on) that is trained exclusively on FreeBSD code. That should help with the issue of contamination of GPL and other licenses. On the other hand the cost of training is probably out of question.

I actually plan to post-train a local LLM (probably Nemotron) specifically on the BSDs.

But since I can't train a model from scratch, all the GPL code from my base model will still be in there.
 
I actually plan to post-train a local LLM (probably Nemotron) specifically on the BSDs.

But since I can't train a model from scratch, all the GPL code from my base model will still be in there.
I think that as long as it is possible to have an idea of what the possible sources of contamination are, it will be possible to work around it.
In any case great work @cracauer :)
 