Claude Code cracks FreeBSD within four hours

did not expect colin percival to be a rube, guess we're going to have to find an operating system that isn't slop friendly.
LLMs are here to stay, and the sooner we accept the new normal the better.

Right now there are three types of people:
1. Those who use LLMs and are transparent about it.
2. Those who use LLMs and don't disclose it.
3. Those who only complain about LLMs.

The last two will shrink over the medium to long term. The technology has real benefits. There are obvious downsides too, but that has been the case with every technology.
 
It's not clickbait. Here's the info from the

"Credits: Nicholas Carlini using Claude, Anthropic"
I don't believe Claude created a virtual machine and recompiled system source and packages to find out whether a public exploit works. Show me the list of AI-generated operations to achieve that...
 
did not expect colin percival to be a rube, guess we're going to have to find an operating system that isn't slop friendly.
You can be completely against AI but also understand that LLM AI can and will find legitimate security issues. The problem with discussing AI is there's too many places to get on or off the train of conversation. If somebody outside of FreeBSD finds bugs with AI tools and then submits issues (not patches) and FreeBSD fixes them without LLMs, that... isn't really a FreeBSD "slop friendly" problem.

I don't believe Claude
You don't have to. The FreeBSD security team checked it out and decided it was a problem. Is it a huge issue? Maybe not.

If you don't think AI can do stuff (while also not doing stuff), maybe you are the rube..
 
You can be completely against AI but also understand that LLM AI can and will find legitimate security issues. The problem with discussing AI is there's too many places to get on or off the train of conversation. If somebody outside of FreeBSD finds bugs with AI tools and then submits issues (not patches) and FreeBSD fixes them without LLMs, that... isn't really a FreeBSD "slop friendly" problem.


You don't have to. The FreeBSD security team checked it out and decided it was a problem. Is it a huge issue? Maybe not.

If you don't think AI can do stuff (while also not doing stuff), maybe you are the rube..
So, where's the Claude method? What did it do?
The belief that AI is somehow constructive is likely commerce-driven. It apparently built a FreeBSD hacking and testing environment. Can we see it?
 
Using AI for review is clearly a better idea than making it write code. In the review case, AI has nothing to invent; it works on a known code base with strict commands or tasks to do.
 
Using AI for review is clearly a better idea than making it write code. In the review case, AI has nothing to invent; it works on a known code base with strict commands or tasks to do.
we love how nobody can agree on what AI is actually good at. all the slop reviews we've had to deal with were various levels of wrong, but sure, go off.
 
atax1a He also hangs out on reddit. A friend of mine asked him why he was there and not on these forums, to which he replied it's because he hangs out on other subreddits, which, to me, is disappointing to hear about him.
 
we love how nobody can agree on what AI is actually good at. all the slop reviews we've had to deal with were various levels of wrong, but sure, go off.
A distributed/p2p-like LLM system might be something for open source projects. No idea if anything of use already exists but I would donate CPU time and bandwidth to a FreeBSD network if that really adds something to the project.
 
A distributed/p2p-like LLM system might be something for open source projects. No idea if anything of use already exists but I would donate CPU time and bandwidth to a FreeBSD network if that really adds something to the project.
this assumes its own conclusion that this thing has any use whatsoever outside of generating spam and misinformation. the tiny percentage of individual "good" uses everyone loves to throw up as arguments all flat-out ignore the fact that the rest of the time, the thing is churning out an endless cascade of garbage.
 
So, where's the Claude method? What did it do?
The belief that AI is somehow constructive is likely commerce-driven. It apparently built a FreeBSD hacking and testing environment. Can we see it?
It generated a report that (once verified by the FreeBSD security team) became a security advisory. It doesn't really matter how it did it if we take Colin's point that there will be more of these. Will there also be more slop and junky reporting? I would expect so, but that's not the point. It's not a value judgement on whether this is any good or worth the costs. Colin is just saying this is what is going to happen, never mind the "should" part, since he doesn't have any control over that.

TBH, him posting on X is more of a bother to me than this "AI take."
 
except he absolutely does have control over that, he can say "no AI-assisted submissions. if you didn't write the code or do the work yourself i don't accept it". people do in fact have agency and can set boundaries and have principles!

and (just to stem this argument before it starts) sure, people will lie about using AI, but then what you've done is found a fucking liar in your midst, and can remove them. hth
 
this assumes its own conclusion that this thing has any use whatsoever outside of generating spam and misinformation. the tiny percentage of individual "good" uses everyone loves to throw up as arguments all flat-out ignore the fact that the rest of the time, the thing is churning out an endless cascade of garbage.
I see it as kind of a science experiment: try to describe and automate the approach that someone trying to find an OS vulnerability applies successfully. No problem if everything is transparent.
I would never do that on any commercial AI platform. They own everything you post.
 
I see it as kind of a science experiment: try to describe and automate the approach that someone trying to find an OS vulnerability applies successfully. No problem if everything is transparent.
I would never do that on any commercial AI platform. They own everything you post.
ok but is this what is actually happening, or are you describing a little fantasy world that you desire? because what it looks like to me is that this entire thing has suborned colin percival into a shill for the claude slopbot
 
except he absolutely does have control over that, he can say "no AI-assisted submissions. if you didn't write the code or do the work yourself i don't accept it". people do in fact have agency and can set boundaries and have principles!
If an AI generates a NULL pointer check, or a murderer generates a NULL pointer check, how does it benefit the project to reject the fix if that NULL pointer would otherwise have crashed the software when it was accessed?

Yes, AI lets people generate a large number of (false) bug fixes quickly which can overload the processes involved. But for all the correct bug fixes (if people can magically filter them) there is no reason not to accept them.

After all, static analyzers have been finding bugs for years. The LLM umbrella of algorithms is simply good enough now to provide a new approach to static analysis.

This "no AI m'kay!" stance has grown out of the "art" industry due to fear. Thats fine; software is a very different beast. Code is essentially worthless.
 
If an AI generates a NULL pointer check, or a murderer generates a NULL pointer check, how does it benefit the project to reject the fix if that NULL pointer would otherwise have crashed the software when it was accessed?

on the one hand this is a facile hypothetical, but also you have the case of Hans Reiser to look at. The murderer's code, which was low-quality and error prone, took way too long to get removed. hth.

Yes, AI lets people generate a large number of bug fixes quickly which can overload the processes involved. But for all the correct bug fixes (if people can magically filter them) there is no reason not to accept them.
there are plenty of reasons to not accept LLM-produced work, such as, oh, you know, all of the negative externalities. just because you have no problem with that doesn't make the inherent underlying issues go away.
 
ok but is this what is actually happening, or are you describing a little fantasy world that you desire? because what it looks like to me is that this entire thing has suborned colin percival into a shill for the claude slopbot
No? I don't desire any world, but I believe the current general way to find holes in software can be made more efficient with an always-available free network made of user resources. E.g. if someone on the network finds an exploit using a previously unknown method, its details can be added to a central knowledge base and might instantly lead to similar findings. This could be done with some kind of intelligent system that saves administrative work.
 
No? I don't desire any world, but I believe the current general way to find holes in software can be made more efficient with an always-available free network made of user resources. E.g. if someone on the network finds an exploit using a previously unknown method, its details can be added to a central knowledge base and might instantly lead to similar findings.
and what part of this requires or justifies the LLM?
 
on the one hand this is a facile hypothetical, but also you have the case of Hans Reiser to look at. The murderer's code, which was low-quality and error prone, took way too long to get removed. hth.
I imagine that the software (and firmware) on the machine you typed that response on has been written in part by murderers, rapists and tax evaders. You can't avoid that. Hans Reiser's code was removed because it was redundant. The fact his code was so ubiquitous probably actually kept it around longer than if he weren't a murderer. Quite ironic, huh!

there are plenty of reasons to not accept LLM-produced work, such as, oh, you know, all of the negative externalities. just because you have no problem with that doesn't make the inherent underlying issues go away.
You prefer your software to crash? Most people don't. Rejecting a fix because it has an LLM tag-line simply means people will hide that tag-line in the future. And once that happens, it means that LLM/AI is now completely normalized within the development pipeline.
 
You prefer your software to crash? Most people don't. Rejecting a fix because it has a tag-line simply means people will hide that tag-line in the future.
i prefer my code to be written and maintained by programmers who understand what they're doing and why. You're arguing for software to be maintained by middle-management LARPers who sift through garbage to pick out what they think might work.
 
and what part of this requires or justifies the LLM?
No LLM in particular. The term is biased already; we're not learning human language all the time, that was years ago. Just a user-controlled network and learning system with a clear goal: automating things that can be considered a procedure anyway.
 