Claude Code cracks FreeBSD within four hours

I found NFSv4 and Kerberos way too complex, with too many moving parts, to possibly be secure. It looks like my assumptions have been verified now that even an AI can find a flaw:

https://www.forbes.com/sites/amirhu...-of-the-worlds-most-secure-operating-systems/

NFSv4 is not enabled by default, so many servers are fine. Though nothing like NFS should be exposed publicly to the internet anyway.

Note: I believe the CVE was public well before the AI was tested against it, which means the info was already in the AI's training data.
 
FYI: FreeBSD-SA-26:08.rpcsec_gss

Quoting from my toot on Mastodon:
IMHO, introducing AI-generated code is a bad idea, at least for now.
Even putting so-called "AI slop" aside, AI-generated code is legally unclear with respect to the copyright of the data sources used for training. Who takes responsibility? International law first!
And AI-generated code needs to be reviewed by a "natural human who can take responsibility (better if skilled, talented, and experienced)" before submitting.
And related (mutually dependent) portions should be merged into a single PR, so as not to exhaust human reviewers and committers. It should clearly be the responsibility of the natural human submitter to understand what the diff is going to do.

OTOH, AI review of human-written code before submission could (hopefully) help polish up the quality. I think this is the relatively safe (in terms of copyright management) use of AI for now.
 
We cannot ignore that stuff like IPX or NFS contains "old code".
Once I tried to build the FreeBSD kernel/world without NFS, and it would not let me do it; it was baked into the build system. [Must try again to be exact.]
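For what it's worth, src.conf(5) does document a knob for this; whether it cleanly strips every NFS-related piece from kernel and world may depend on the release, so treat this as an untested sketch rather than a confirmed recipe:

```
# /etc/src.conf -- consulted by buildworld/buildkernel (see src.conf(5))
WITHOUT_NFS=yes    # documented knob to build without NFS support
```

If the build still fails with this set, that would confirm the dependency observation above.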
 
https://neuromatch.social/@jonny/116328988554490524 like, the only reason any of this works is due to a dogshit tornado of crapped-out garbage. if you believe any of this is good you're a rube.
Determining whether the output from AIs is garbage or not should be done locally on the submitter's side. Only what is well understood, considered correct and useful by the natural human submitter, and integrated so as to be meaningful on its own should be submitted upstream, so as not to bother upstream reviewers.
 
what you're saying is "the future of programming is sifting through the raw sewage output of slop machines in order to pull out the tasty corn kernels embedded within". cringe.
 
And at the same time, ChatGPT can’t even deduplicate a list of duplicates. It’s really great that this is now supposed to be used in autonomous weapon systems.
Is it true? I personally wouldn't expect that computers of any weapon system have non-transparent decision-making components.
I think this is part of the AI glorification that's starting to get desperate. They are caught in their own conflict: if you generate valuable information and give it away, there's minimal profit and others can copy it; but if you fake the value of the information, everybody will notice that your results are useless and leave.
 
It's true. I gave it a file with C++ includes, some of which were duplicates. The resulting file was missing a bunch of them completely.
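For contrast, order-preserving deduplication is a few lines of deterministic code. A minimal sketch in Python (the include lines are made up for illustration):

```python
def dedup_preserving_order(lines):
    """Remove duplicate lines, keeping the first occurrence of each in order."""
    seen = set()
    out = []
    for line in lines:
        if line not in seen:
            seen.add(line)
            out.append(line)
    return out

includes = [
    "#include <vector>",
    "#include <string>",
    "#include <vector>",   # duplicate
    "#include <map>",
    "#include <string>",   # duplicate
]
print(dedup_preserving_order(includes))
# → ['#include <vector>', '#include <string>', '#include <map>']
```

Unlike a chatbot, this either works or fails loudly; nothing silently goes missing.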

And regarding the weapons: the Pentagon asked Anthropic first, but they refused:

In a statement published on its website, QuitGPT says: "On February 27, ChatGPT competitor Anthropic refused to give the Pentagon unrestricted access to its AI for mass surveillance of Americans or producing AI weapons that kill without human oversight."

 
that is because it does not, in any sense, understand what you are "asking" it. the only thing it does is produce a plausibly-shaped response to the input, and everything else it "does" is postprocessing layers on top of that. you remember that quote attributed to charles babbage about how people would ask him if putting wrong figures into the machine would produce correct answers, and he couldn't understand how people could have such a confusion of ideas?
 
unreliable narrator paired with lying chatbot makes extreme claims

yawn
What scares me more than anything else is that these bots don't necessarily admit when they don't know something or have a wrong answer. I've seen ChatGPT give a wrong answer, then refuse to admit that it was wrong; then you give it the correct form to work with, at which point it claims that it's nonstandard.

I knew what the right answer was supposed to be, so it wasn't much of an issue for me. But I have to wonder about all the people out there who wouldn't know better and just use the broken output, or who are given less time than the task needs and so don't check, because the bot was supposed to save time; even though checking often takes more time than just doing it correctly in the first place.
 
Looks like the victim system needs to be set up with GSS-API security (Kerberized NFS) for this bug to be triggered.
My kernel was updated three days ago, so the issue was already patched on my laptop.
 