Will FreeBSD adopt a No-AI policy or such?

What's going on with AI is legit terrifying:

And now Linux seems to be going all-in: https://www.neowin.net/news/linus-t...eled-code-surges-as-the-new-normal-for-linux/

Any news on FreeBSD resisting this scary route?
I like how the first link you shared includes a video of "AI" making verbatim copies of GitHub code and stripping the license. Such a leap forward for mechanized plagiarism.
 
Yeah, it's looking bad. An LLM code submission ban may not stop every bad actor, but neither does a ban on manually stealing GPL code (which I assume is already in place). A ban would reduce usage, give clear guidelines to the good actors, and shift responsibility onto the bad ones. You can never stop all bad actors.
 
In my humble opinion, to allow code generated by AI/LLM, the AI/LLM SHALL have a feature that prevents contamination by code under licenses that are not BSD-compatible (for incorporation). This is MANDATORY to avoid silly legal issues.

Finding issues in FreeBSD code using AI/LLM would be OK, but automated reporting SHALL be prohibited.

A natural human reporter needs to triage and review what is about to be reported, dropping nonsense reports (e.g., issues in #if 0'ed code) and pointless ones (false positives caused by a malfunction of the AI/LLM used), and write the fix themselves (as a natural human) if it's really worth doing. Using AI/LLM to review the written code before submitting would be OK.
 
The worst part of the issue is that the systems that will pay a cost for using licensed code without attribution or license information are the projects whose source is available for review. Closed-source systems could take decades before the infringement in their code is discovered.

Edit: especially if the same tools are training on closed source software.
 
Actually, the data seems to support the opposite thesis: https://futurism.com/ai-coding-programmers-reality
Not particularly.

Though what I meant earlier was that the very fact that people are using AI for these contributions is enough evidence that it improves their personal workflow. There is no reason to believe they are using these tools because they make the process more difficult for them... ;)

It reminds me of the 2009 paper "The Effectiveness of Automated Static Analysis Tools for Fault Detection and Refactoring Prediction". That 3% is exactly why we kept using static analysis; it's still certainly worth the 97% false positives. Trying to undermine, e.g., splint with this data wasn't really going to work.

The problem seems to be LLM code submissions. A bunch of projects banned them now.
And a bunch more projects are explicitly allowing them. Developers currently announce when they used AI because it's still a "novelty", but I believe most developers stopped announcing which static analyser they passed their code through back in the '90s. In other words, preventing it simply isn't actionable (for better or for worse).

As I mentioned, I believe the compromise will be more preventing spam in the issue tracking systems rather than relating to the codebase itself.
 
I don't know what the official policy is but I did see a couple of commits "Co-Authored-By: Claude Sonnet 4.6". Looking at the diff, it is just a few small tweaks. See commits ddf19dcbe1 and the one before that. Good or bad?
Is that not the line added by vscode?
 
I'm seeing commits tagged with `Assisted-By: Claude Opus 4.6`.
It's automatically added even by Claude Code. I had a problem setting up a token with GitHub, and it tainted my code with that message. It's the kind of intrusion that made me stop using it. And Claude somehow messed up, because I get no credit for my pushes or commits, but "it's amazing how much we can accomplish when we don't care about credits".

My point is that you might get that trailer even if none of your code is AI-generated, just because you used AI for something small like trying to fix a git permission.
 
I can't believe I live in the same state as this university and that my son, the fruit of my loins, went there. I am so embarrassed.

Master AI automation with agentic AI in just 5 weeks. Learn hands-on from industry experts and take your skills to the next level
 
> Frequent urge to teach coworkers automations.
?!
"Hey, John want to see the cool new trick I just learned?"
"nope. go 'way."
"It's really cool?"
"nope."
"but it will save you time...?"

...
 