Will FreeBSD adopt a No-AI policy or such?

I am currently on Linux, which has officially opened the floodgates for vibe-coding, that is, coding with AI and using AI for bug fixes. Sure, they have to disclose it, but like with the crowd that forked GZDoom into UZDoom, once it's in the code it will be forgotten and ignored! So I am wondering: will FreeBSD say NO to AI in its kernel, and maybe also its userland?
Will FreeBSD be safe from AI slop?
Am I being fear-mongered, or do I have a right to be worried?
Because let's be honest, AI is a bubble, and when it pops, it'll be nasty...
 
If FreeBSD goes no-AI, then I may actually switch to this OS.
Of all the reasons to pick an operating system, a political one is among the weakest.

Why do you worry so much about AI? You mention the AI bubble (which is real, and a bubble). But you have to remember that BSD is older than the internet, and has already survived the ~2000 dot-com bubble bursting, the 2008 real estate bubble, and several massive technological changes.
 

Policy on generative AI-created code and documentation

Core is investigating setting up a policy for LLM/AI usage (including but not limited to generating code). The result will be added to the Contributors Guide in the doc repository. AI can be useful for translations (which seems faster than doing the work manually), explaining long/obscure documents, tracking down bugs, or helping to understand large code bases. We currently tend to not use it to generate code because of license concerns. The discussion was continued at the core session at the BSDCan 2025 developer summit, and core is still collecting feedback and working on the policy.

 
Of all the reasons to pick an operating system, a political one is among the weakest.
When multiple OSs generally work the same, politics help thin the choices :p

Why do you worry so much about AI?
When AI can write an entire OS and stacks, I'll be interested. Otherwise, duct-taping undisciplined magic on top of carefully-crafted scrolls as a speed/price efficiency benefit doesn't sound like a good long-term foundation :p

Imo: You're supposed to program the computer, not have it output something that looks good enough to try to control it.
 
Well, as one thing to maybe consider: autogenerated work takes design decisions out of the hands of the project leads and puts them into the hands of whoever produces the autogenerated software that is generating the work (translations, code, review, etc).

As has been seen in many cases of autogenerated content of all sorts, bias (direction, design decisions) can most definitely be cooked in.
 
LLMs can be useful to generate skeleton test code and help with review. A blanket ban would be misguided.
The time taken to explore LLMs to get output useful for test code and review to present to other people could be used to learn how to do it efficiently without machine assistance.

What's the motivation behind using LLMs for coding to an OS's benefit? OSs evolved to this point on manual manpower and man's understanding of machines. A machine can't know more about itself. Yet for all of the machine's newfound mystery, people are trusting machines to know what's good for them? :p
 
The time taken to explore LLMs to get output useful for test code and review to present to other people could be used to learn how to do it efficiently without machine assistance.

What's the motivation behind using LLMs for coding to an OS's benefit? OSs evolved to this point on manual manpower and man's understanding of machines. A machine can't know more about itself. Yet for all of the machine's newfound mystery, people are trusting machines to know what's good for them? :p
You're not wrong. For Go tests there's https://github.com/cweill/gotests to generate the skeleton.

I think I've experienced the whole spectrum of reactions towards LLMs except for the true believer. From outright rejection through agnosticism, I'm now open to only the best models helping with review and automating some parts. I've seen them find some bugs.

They're also useful for wild prototyping. It's just a question of having an open mind.

It won't be cheap in the future, but the increase in productivity is real, and I anticipate that hiring interviews will take experience with LLM tools into account.
 
...then I may actually switch to this OS.
bleh
I've been on BSD since I started on computers (I didn't "switch" to BSD; I didn't "try" Linux "first/second" and decide BSD is better for me; I don't care what a distro is; I even had a (worthless) opinion on Binary Blobs; I still haven't seen Linux's 'thank you' for VI; etc.) so, I guess what I'm trying to say is: BSD is for the best of the best (the elite); so, I think you should be asking if BSD would accept you (not you accepting BSD).
 
Of all the reasons to pick an operating system, a political one is among the weakest.

Why do you worry so much about AI? You mention the AI bubble (which is real, and a bubble). But you have to remember that BSD is older than the internet, and has already survived the ~2000 dot-com bubble bursting, the 2008 real estate bubble, and several massive technological changes.
Yeah, I guess I am letting myself get fear-mongered by anti-AI people and conservatives...

AI can be useful for translations (which seems faster than doing the work manually), explaining long/obscure documents, tracking down bugs, or helping to understand large code bases. We currently tend to not use it to generate code because of license concerns
Yeah, THIS is what I want AI to be used for: helping translate, helping explain things, and finding possible bugs. That is what I WANT to see AI used for, not generating actual code like what they are now going to allow in the Linux kernel, where if the code is not good enough you just try a better prompt.

It is impossible to enforce a complete ban on AI generated code (or any text).
Sadly, yes... but having rules can help.


When AI can write an entire OS and stacks, I'll be interested. Otherwise, duct-taping undisciplined magic on top of carefully-crafted scrolls as a speed/price efficiency benefit doesn't sound like a good long-term foundation
This. Once it can code cohesively without any bugs, is FULLY reliable, and isn't dodgy code slapped together like a shotgun wedding, then I won't be too afraid.

Well, as one thing to maybe consider: autogenerated work takes design decisions out of the hands of the project leads and puts them into the hands of whoever produces the autogenerated software that is generating the work
And this kinda makes me feel like it's making everything way more proprietary than it really should be for a "Free and Open Source" project.

I guess what I'm trying to say is: BSD is for the best of the best (the elite); so, I think you should be asking if BSD would accept you (not you accepting BSD).
Well, I don't completely care about community, and I know how to install Arch, Void, and Gentoo all from the command line.
I guess I am just looking for a safety net if Linux goes tits up.
 
Well, I don't completely care about community, and I know how to install Arch, Void, and Gentoo all from the command line.
I guess I am just looking for a safety net if Linux goes tits up.
Not really sure what that means. Is that typically considered difficult or a badge of some sort in Linux? ...All my servers are headless, so I only use the command line.

My point was that BSD isn't "second rate", "less than", or "not a valid first choice", so let's not start the conversation with comparisons or 'justifications' or 'fall-backs'. ...Why don't you just install BSD and start using it to see if it fits your definition of "good enough"? When you install BSD you get everything you need (man, cc, vi, pwd, cd, mkdir, mount, pkg, etc...); if you want a GUI, I think I heard FreeBSD will offer to install KDE, Gnome, or Xfce in the installer, but I'm not sure when that will be (or if it already is). I've heard good things about GhostBSD. OpenBSD does some setup of X in the installer.

I thought Linux was/is already skywards-facing.

once it can code cohesively without any bugs, is FULLY reliable, and isn't dodgy code slapped together like a shotgun wedding, then I won't be too afraid.
"I want a program to store some strings"
...Design matters.
Code:
typedef struct {
  int count;
  char element;      /* one element stored inline */
} fancy_t;

typedef struct {
  int count;
  char *element_p;   /* pointer to separately allocated elements */
} fancy_pt;
AI/LLMs will chunk out a function accounting for all the different edge cases with very little regard for existing hooks and trappings.
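To make that concrete, here is a minimal sketch of what working with the existing hooks looks like. Everything beyond the fancy_pt declaration above is my own illustration; the fancy_append/fancy_free helpers and the packed, NUL-terminated layout are assumptions, not anything from the post or from any LLM output:
Code:
/* Minimal sketch: extend the container that already exists instead of
 * inventing a parallel storage scheme.  Helper names and the packed
 * NUL-terminated layout are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
  int count;         /* bytes currently stored                */
  char *element_p;   /* one heap buffer of packed strings     */
} fancy_pt;

/* Append a NUL-terminated string by growing the buffer the
 * container already owns. */
static int fancy_append(fancy_pt *f, const char *s)
{
  size_t len = strlen(s) + 1;   /* keep the terminating NUL */
  char *p = realloc(f->element_p, (size_t)f->count + len);

  if (p == NULL)
    return -1;
  memcpy(p + f->count, s, len);
  f->element_p = p;
  f->count += (int)len;
  return 0;
}

static void fancy_free(fancy_pt *f)
{
  free(f->element_p);
  f->element_p = NULL;
  f->count = 0;
}

int main(void)
{
  fancy_pt store = { 0, NULL };

  if (fancy_append(&store, "hello") != 0 ||
      fancy_append(&store, "world") != 0) {
    fancy_free(&store);
    return 1;
  }
  printf("stored %d bytes\n", store.count);
  fancy_free(&store);
  return 0;
}
The contrast being drawn is with a typical generated answer: a free-standing routine with its own buffer handling sitting next to the types that already exist, i.e. "very little regard for existing hooks and trappings".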
 
1. Nobody can fear-monger you if you are yourself sufficiently educated on the subjects that are causing you to worry. The only requirement for being cattle is ignorance. This does not mean reading opinion pieces or pop-sci books. Go to the source, for each given component. N'est-ce pas?

2. "Autogenerated software does this/that" is an incorrect thought. It is not a programming paradigm like you are used to. What is true about a given interface today may not be true about any other interface, or combination of interfaces, today, or the same interface tomorrow. The interface you are using may be a final product (which is still variable) or it may be a refinement tunnel for a pipeline that it is simply a link in.

The only people who truly know the state and capabilities of a given autogenerated software system are the engineers working directly on it. The difference between closed and open source is so obvious to people, but somehow they believe that autogenerated software is always completely transparent? There is stuff going in before you get your requested output. There may not be source code, but there is engineering. There is design. Shit ain't magic.

Given that the "source" generating the code is "closed," then using autogenerated software in fact does make your project now mixed open/closed. For starters.

But that's not my issue with it. My issue with it is that it takes control away from the FreeBSD engineers, and puts it in other hands. It is a soft transference of authorship. You would no longer be "developers," you would be mediators between the real developers and the patch merge.
 
once it can code cohesively without any bugs, is FULLY reliable, and isn't dodgy code slapped together like a shotgun wedding, then I won't be too afraid.
I don't know any human who can do that either. And I've worked with some of the best in the business (such as kernel and file system people).

I've used "AI" (meaning language-model based autocomplete in editors) a little bit, and the code it generated is not at all "dodgy code slapped together like a shotgun wedding". I have tried a little bit of fully AI-generated code (based on text prompts), and my problem there wasn't that the code was dodgy/slapped/shotgun, but that it simply didn't do what I need to be done, and it was faster to code it myself than to try to write a better prompt.

And even if the code is dodgy/slapped/shotgun, it will be reviewed and tested, in OS projects.
 
I don't know any human who can do that either. And I've worked with some of the best in the business (such as kernel and file system people).

I've used "AI" (meaning language-model based autocomplete in editors) a little bit, and the code it generated is not at all "dodgy code slapped together like a shotgun wedding". I have tried a little bit of fully AI-generated code (based on text prompts), and my problem there wasn't that the code was dodgy/slapped/shotgun, but that it simply didn't do what I need to be done, and it was faster to code it myself than to try to write a better prompt.

And even if the code is dodgy/slapped/shotgun, it will be reviewed and tested, in OS projects.

Tell that to cracauer@. He was assuring me that code generated by autogenerated software is slap-dick and wrong, and that he doubted any of it existed even in the Linux kernel.
 
And even if the code is dodgy/slapped/shotgun, it will be reviewed and tested, in OS projects.
Unfortunately, no, it won't. That is the single largest problem with the AI revolution. Humans are inherently lazy, so they will begin to "trust" the machine more, and "think" less. The idea that AI will prepare lists of possible malcontents who cheat on their taxes, abuse their kids, don't purchase the requisite weekly amount of soma, etc. etc. ...scares the bloody hell out of me. It's bad enough to end up on these lists because I choose to live my life outside the "hairless herd norm", but to face possible harassment because some good-for-nothing civil servant isn't double- and triple-checking the AI conclusions... That is what we are headed for, my friends.
 
Recently Claude AI helped me a lot with backporting a driver from Linux 5.x to 4.x. It saved a significant amount of time. I still had to adjust a few things, though. I'm not a big fan of AI/LLMs, but there are cases where it can really help.
 