Claude Code cracks FreeBSD within four hours

I prefer my code to be written and maintained by programmers who understand what they're doing and why.
Static analyzers have been informing skilled programmers for decades. As mentioned, LLMs now improve upon that.
You're arguing for software to be maintained by middle-management LARPers who sift through garbage to pick out what they think might work.
This is not what those LLM fixes were. They were small improvements in the codebase to prevent errors. Small and focused, just like a regular contribution to a large project.

Middle-management larpers have always had "outsourcing" and "no code" solutions to power their fantasies.

"No AI" from the artist community and "AI everything" from the tech-bro community are equally wrong. Compromises are made and the whole thing becomes as normalized as a microwave.
 
Not an LLM in particular. The term is already biased. We're not modelling human language all the time any more; that was years ago. It's just a user-controlled network and learning system with a clear goal: automating things that can be considered a procedure anyway.
so you have... no coherent argument on the subject?
 
Static analyzers have been informing skilled programmers for decades. As mentioned, LLMs now improve upon that.

This is not what those LLM fixes were. They were small improvements in the codebase to prevent errors.

"No AI" from the artist community and "AI everything" from the tech-bro community are equally wrong. Compromises are made and the whole thing becomes as normalized as a microwave.
the fixes and "good parts" that you're harping on do not come for free. they are a byproduct of the slop and you cannot have one without the other. ignore that at your own peril.
 
except he absolutely does have control over that, he can say "no AI-assisted submissions. if you didn't write the code or do the work yourself i don't accept it". people do in fact have agency and can set boundaries and have principles!

Ah, slight problem with this tack in re security issues, however: It doesn't make the security issue go away. Think of it similar to companies that require obnoxious disclosure requirements who forget that people can just zero-day the thing and forego the bounty.
 
the fixes and "good parts" that you're harping on do not come for free.
The bits that were accepted? They came for free whether it was from an LLM or traditional static analyzer.
they are a byproduct of the slop and you cannot have one without the other. ignore that at your own peril.
The slop in terms of noise and false positives is a very real problem. I agree here (I weaponise it to avoid parking tickets by tying the entire appeal system up for example). Low-effort places like GitHub will need to have processes in place to avoid this.

But in the contributions being made to operating systems (a good example sharing a similar concern to yours is here on the OpenBSD mailing lists), the accepted code is valid, regardless of where it came from. Rejecting it would not be a benefit to the users.
 
Ah, slight problem with this tack in re security issues, however: It doesn't make the security issue go away. Think of it similar to companies that require obnoxious disclosure requirements who forget that people can just zero-day the thing and forego the bounty.
sure, that sucks, but the solution cannot be "adopt the slop and accelerate our own burnout by generating tons of garbage"
 
so you have... no coherent argument on the subject?
AI didn't crack FreeBSD. This guy works at Anthropic. Anyone working at any AI company who finds a problem in public software will claim it was with AI brand(c) support. It's nonsense.
Is that what you want to hear?
 
sure, that sucks, but the solution cannot be "adopt the slop and accelerate our own burnout by generating tons of garbage"
It's not "adopting" it to process submitted security vulnerability reports? That's just "dealing" with it, which is basically what he's warning about having to do. I don't think he's saying the security team should generate their own slop.
 
But in the contributions being made to operating systems (a good example sharing a similar concern to yours is here on the OpenBSD mailing lists), the accepted code is valid, regardless of where it came from. Rejecting it would not be a benefit to the users.
Some people at work have started using Cursor, and have sent me analyses made by same. I can tell that what the Cursor guys have done is hook up a chatbot to a static analyzer, because IntelliJ's static analyzer found some of the same problems.

The problems it reported ranged from a good catch of a previously unnoticed bug to a complete howler, with a few in between. The bug fix it proposed for the bug it caught was naive and incomplete, though, so I didn't use it.

What it did for one of the problems it identified that was already reported was provide a far better explanation of what was wrong, and how to fix it. IntelliJ's warning just reported that there were "no subscribers to this publisher", which is on the cryptic side.

The howler was that it blew a fit over the fact that we didn't guard against not being able to connect to a critical service. Fact is, if that service is down, nothing is going to work anyway. Having some zombie process hanging around stupidly doing nothing besides useless logging is worse than failing fast in this circumstance. This is exactly the kind of judgement call AI simply can't make, because it doesn't actually think in any meaningful way.
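To make the judgement call concrete, here's a minimal fail-fast sketch. The service name and the `connectToBroker` helper are hypothetical placeholders, not from any real codebase:

```java
public class FailFast {
    // Stand-in for attempting a connection to a critical service.
    // In real code this would open a socket or client session.
    static boolean connectToBroker(boolean serviceUp) {
        return serviceUp;
    }

    // Fail fast: if the critical service is unreachable, report the
    // failure and return a non-zero status for the supervisor to act on,
    // instead of lingering as a zombie process that only logs.
    static int startup(boolean serviceUp) {
        if (!connectToBroker(serviceUp)) {
            System.err.println("broker unreachable; exiting (fail fast)");
            return 1;
        }
        System.out.println("connected; starting work");
        return 0;
    }

    public static void main(String[] args) {
        System.exit(startup(args.length > 0 && args[0].equals("up")));
    }
}
```

The point is that exiting immediately hands the problem to a supervisor (systemd, Kubernetes, a human) that can restart or alert, whereas a defensive guard-and-retry loop here just hides the outage.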
 
except he absolutely does have control over that, he can say "no AI-assisted submissions. if you didn't write the code or do the work yourself i don't accept it". people do in fact have agency and can set boundaries and have principles!

and (just to stem this argument before it starts) sure, people will lie about using AI, but then what you've done is found a fucking liar in your midst, and can remove them. hth
Code must have been reviewed by a human for it to be copyrightable; otherwise it belongs to the public domain.

As long as the code is reviewed by humans, it's fine.
 