> "Credits: Nicholas Carlini using Claude, Anthropic"

Isn't this because you opened this thread?
> Isn't this because you opened this thread?

I mean how else would everyone have known?
> did not expect colin percival to be a rube, guess we're going to have to find an operating system that isn't slop friendly.

LLMs are here to stay, and the sooner we accept the new normal the better.
> It's not clickbait. Here's the info from the
> "Credits: Nicholas Carlini using Claude, Anthropic"

I don't believe Claude created a virtual machine and recompiled system source and packages to find out if a public exploit is working. Show me the list of AI-generated operations to achieve that...
> did not expect colin percival to be a rube, guess we're going to have to find an operating system that isn't slop friendly.

You can be completely against AI but also understand that LLM AI can and will find legitimate security issues. The problem with discussing AI is that there are too many places to get on or off the train of conversation. If somebody outside of FreeBSD finds bugs with AI tools and then submits issues (not patches) and FreeBSD fixes them without LLMs, that... isn't really a FreeBSD "slop friendly" problem.
> I don't believe Claude

You don't have to. The FreeBSD security team checked it out and decided it was a problem. Is it a huge issue? Maybe not.
> You can be completely against AI but also understand that LLM AI can and will find legitimate security issues. The problem with discussing AI is that there are too many places to get on or off the train of conversation. If somebody outside of FreeBSD finds bugs with AI tools and then submits issues (not patches) and FreeBSD fixes them without LLMs, that... isn't really a FreeBSD "slop friendly" problem.

So, where's the Claude method? What did it do?
If you don't think AI can do stuff (while also not doing stuff), maybe you are the rube.
> Using AI for review is clearly a better idea than making it write code. In the review case, the AI has nothing to invent; it works on a known code base with strict commands or tasks to do.

we love how nobody can agree on what AI is actually good at. all the slop reviews we've had to deal with were various levels of wrong, but sure, go off.
> we love how nobody can agree on what AI is actually good at. all the slop reviews we've had to deal with were various levels of wrong, but sure, go off.

I've just posted a toot related to this.
> we love how nobody can agree on what AI is actually good at. all the slop reviews we've had to deal with were various levels of wrong, but sure, go off.

A distributed/p2p-like LLM system might be something for open source projects. No idea if anything of use already exists, but I would donate CPU time and bandwidth to a FreeBSD network if that really adds something to the project.
> A distributed/p2p-like LLM system might be something for open source projects. No idea if anything of use already exists, but I would donate CPU time and bandwidth to a FreeBSD network if that really adds something to the project.

this assumes its own conclusion that this thing has any use whatsoever outside of generating spam and misinformation. the tiny percentage of individual "good" uses everyone loves to throw up as arguments all flat-out ignore the fact that the rest of the time, the thing is churning out an endless cascade of garbage.
> So, where's the Claude method? What did it do?

It generated a report that (once verified by the FreeBSD security team) became a security advisory. It doesn't really matter how it did it if we take Colin's point that there will be more of these. Will there also be more slop and junky reporting? I would expect so, but that's not the point. It's not a value judgement on whether this is any good or worth the costs. Colin is just saying this is what is going to happen, never mind the "should" part, since he doesn't have any control over that.
The belief that AI is somehow constructive is likely commerce-driven. It apparently built a FreeBSD hacking and testing environment. Can we see it?
> this assumes its own conclusion that this thing has any use whatsoever outside of generating spam and misinformation. the tiny percentage of individual "good" uses everyone loves to throw up as arguments all flat-out ignore the fact that the rest of the time, the thing is churning out an endless cascade of garbage.

I see it as kind of a science experiment: try to describe and automate the approach that someone trying to find an OS vulnerability applies successfully. No problem if everything is transparent.
> I see it as kind of a science experiment: try to describe and automate the approach that someone trying to find an OS vulnerability applies successfully. No problem if everything is transparent.

ok but is this what is actually happening, or are you describing a little fantasy world that you desire? because what it looks like to me is that this entire thing has suborned colin percival into a shill for the claude slopbot
I would never do that on any commercial AI platform. They own everything you post.
> except he absolutely does have control over that, he can say "no AI-assisted submissions. if you didn't write the code or do the work yourself i don't accept it". people do in fact have agency and can set boundaries and have principles!

If an AI generates a NULL pointer check, or a murderer generates a NULL pointer check, how does it benefit the project to reject the fix if that NULL pointer would otherwise have crashed the software when accessed?
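To make the hypothetical concrete, here is a minimal sketch of the kind of one-line defensive fix being described. All names are invented for illustration; this is not code from any actual FreeBSD patch or from the advisory.

```c
#include <stdio.h>

/* Hypothetical example; all names are invented for illustration. */
struct config {
    const char *hostname;
};

/* May return NULL, e.g. when the config file is missing. */
static struct config *load_config(const char *path)
{
    (void)path;
    return NULL; /* simulate the failure path */
}

int main(void)
{
    struct config *cfg = load_config("/nonexistent.conf");

    /* The fix under debate: without this check, the dereference
     * below would crash the program on the failure path. */
    if (cfg == NULL) {
        fprintf(stderr, "failed to load config\n");
        return 1;
    }

    printf("host: %s\n", cfg->hostname);
    return 0;
}
```

The point of the hypothetical is that the patch is byte-for-byte identical either way; the disagreement is about provenance, not content.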
> Yes, AI lets people generate a large number of bug fixes quickly, which can overload the processes involved. But for all the correct bug fixes (if people can magically filter them) there is no reason not to accept them.

there are plenty of reasons to not accept LLM-produced work, such as, oh, you know, all of the negative externalities. just because you have no problem with that doesn't make the inherent underlying issues go away.
> ok but is this what is actually happening, or are you describing a little fantasy world that you desire? because what it looks like to me is that this entire thing has suborned colin percival into a shill for the claude slopbot

No? I don't desire any world, but I believe that the current general way to find holes in software can be made more efficient with an always-available free network built from user resources. E.g. if someone on the network finds an exploit using a yet unknown method, its details can be added to a central knowledge base and might instantly lead to similar findings. This could be done with a somewhat intelligent system that saves administrative work.
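A rough sketch of the kind of record such a shared knowledge base might pass around, purely to illustrate the idea; every name here is invented, and no such FreeBSD network exists:

```c
#include <stdio.h>
#include <time.h>

/* Invented illustration: the record a node might publish when it
 * finds an exploit using a previously unknown method. */
struct finding {
    char component[64];   /* e.g. a kernel subsystem */
    char technique[128];  /* short description of the method */
    time_t reported_at;
};

/* Here the "network" is just a log stream; in the imagined p2p
 * system this would be broadcast so other nodes could retry the
 * same technique against related code. */
static int publish_finding(const struct finding *f, FILE *out)
{
    int n = fprintf(out, "%lld\t%s\t%s\n",
                    (long long)f->reported_at, f->component, f->technique);
    return n < 0 ? -1 : 0;
}

int main(void)
{
    struct finding f = { "vfs", "unvalidated length in ioctl path", 0 };
    f.reported_at = time(NULL);
    return publish_finding(&f, stdout) == 0 ? 0 : 1;
}
```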
> No? I don't desire any world, but I believe that the current general way to find holes in software can be made more efficient with an always-available free network built from user resources. E.g. if someone on the network finds an exploit using a yet unknown method, its details can be added to a central knowledge base and might instantly lead to similar findings.

and what part of this requires or justifies the LLM?
> on the one hand this is a facile hypothetical, but also you have the case of Hans Reiser to look at. The murderer's code, which was low-quality and error prone, took way too long to get removed. hth.

I imagine that the software (and firmware) on the machine you typed that response on has been written in part by murderers, rapists, and tax evaders. You can't avoid that. Hans Reiser's code was removed because it was redundant. The fact that his code was so ubiquitous probably actually kept it around longer than if he weren't a murderer. Quite ironic, huh!
> there are plenty of reasons to not accept LLM-produced work, such as, oh, you know, all of the negative externalities. just because you have no problem with that doesn't make the inherent underlying issues go away.

You prefer your software to crash? Most people don't. Rejecting a fix because it has an LLM tag-line simply means people will hide that tag-line in the future. And once that happens, it means that LLM/AI is now completely normalized within the development pipeline.
> You prefer your software to crash? Most people don't. Rejecting a fix because it has an LLM tag-line simply means people will hide that tag-line in the future.

i prefer my code to be written and maintained by programmers who understand what they're doing and why. You're arguing for software to be maintained by middle-management LARPers who sift through garbage to pick out what they think might work.
> and what part of this requires or justifies the LLM?

No LLM in particular. The term is biased already; we're not learning human language all the time, that was years ago. Just a user-controlled network and learning system with a clear goal: automating things that can be considered a procedure anyway.