AI finds thousands of zero-day exploits... including in FreeBSD.

Anthropic Claude Mythos...


Quote: "Mythos Preview, Anthropic claimed, has already discovered thousands of high-severity zero-day vulnerabilities in every major operating system and web browser. Some of these include a now-patched 27-year-old bug in OpenBSD, a 16-year-old flaw in FFmpeg, and a memory-corrupting vulnerability in a memory-safe virtual machine monitor."
 
counterpoint: this is marketing bullshit by a company that lies to prop up its value, and acts like a protection racket


they are drowning us in slop reports, and then trying to sell us slop-based solutions to manage all of it. shit behavior from garbage capitalists.
 
Yes I just realised I may have been guilty of re-posting the same thing... although that was specifically about freebsd.
I thought the articles about mythos were interesting anyway. It even made the BBC evening news here, which is pretty unusual...
 
yes, i'm sure if you sift through enough of the sewage, you'll find one or two pieces of corn. nothing about this is healthy or sustainable or worthwhile.

I'm talking about running CC on my own code by myself. There was less than 50% BS in there so far.

I have no experience being the target of third parties doing it on my code.
 
There should be a notable increase in compromised systems worldwide due to this Claude hacking business. Is there any statistical graph of it?

I don't think it works like that. Software exploitation is a method that naturally requires human reasoning. Any hole that can be found by software alone can't be that impressive: the knowledge to find it already existed and could be reached by logic.
 
Correct me if I'm wrong, but Anthropic doesn't publish the holes right now, and the reports are from a LLM not even accessible by the public yet?
 
Correct me if I'm wrong, but Anthropic doesn't publish the holes right now, and the reports are from a LLM not even accessible by the public yet?
They are just bug-hunting for PR? It wouldn't surprise me. Find public software and run professional security audits on it.
 
"i have a scary bogeyman of an AI that will end computer security!!" okay, can we see it? "no".

again, this is just corporate asswipes trying to force their way in to make you pay attention to their slop. it's a show of force by technocratic fascists.
 
Correct me if I'm wrong, but Anthropic doesn't publish the holes right now, and the reports are from a LLM not even accessible by the public yet?
yes, but "The model will be used by a small set of organizations, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, along with Anthropic, to secure critical software."

So before releasing it, they will patch important software infrastructure.
 
From what I heard I'm not sure they intend ever to release it. Instead they will sell it to industry partner companies to identify and fix exploits, but will not release it for general use. That was the gist of the news report I heard earlier. They consider it too dangerous to put it out on general release.

I'm sure the opposition is working on the same kind of thing...
 
Finding bugs or exploits in software seems an extremely simple task for a very sophisticated contrivance whose specialty is recognizing patterns. I'm not impressed at all. They are just feeding their little machine the baby food it knows how to chew well, so it can shine. Oh, look how well our little machine chews its baby food! It's just more Anthropic being good at niche marketing. Nothing new.

Also, I wonder where the escalation in naming will lead. What comes after "Mythos"? "Deity," perhaps?

I'm anti-talking-about-AI now. I'm very fed up with the thing. There are other things happening in the world. (Yes, yes, yes, I'm being contradictory, but how do you protest protests?).

And now, sports.
 
This would be a relatively "safe" use case for AI/LLMs, compared with using code generated by AI/LLMs, which may turn out to have fatal copyright issues in the future.

But special attention to false positives is mandatory.
There will be code near the hardware level that gets flagged as dangerous but is unavoidable if you want some devices to just work.
 
Correct me if I'm wrong, but Anthropic doesn't publish the holes right now, and the reports are from a LLM not even accessible by the public yet?
That's correct.
worthwhile
Slight problem, FreeBSD doesn't get to decide if it is "worthwhile participating." The choice is participate now, or let somebody else use the tool on FreeBSD later and "participate" by having zero-days drop like rain.
 
Obviously you don't trust your security to anyone else who has no stake in it. Which is why the only true way to not have exploits is to write your own OS. Which is a formidable task.
 
Another couple of articles with a bit more info

"A flaw in OpenBSD's TCP SACK implementation dating back to 1999. A signed integer overflow allowing remote denial-of-service. The kind of bug that survived hundreds of reviews, dozens of major releases, thousands of pairs of eyes. Still there.

A defect in FFmpeg's H.264 decoder, 16 years old. A sentinel collision causing an out-of-bounds write. Automated tools never caught it. Not for lack of trying: 5 million fuzz tests. Zero results. Mythos found it by analyzing the code directly."

Although it doesn't say so, what would have impressed me is if it had found only ONE bug in OpenBSD... we don't know the full number, of course.

"The model chained multiple Linux kernel vulnerabilities to build a full privilege escalation path, defeating hardened protections: stack canaries, KASLR, W^X. Not an isolated flaw. A working attack chain.

On FreeBSD, Mythos autonomously identified and exploited a 17-year-old remote code execution vulnerability in the NFS service. Unauthenticated root access. Fully autonomous. No human steering.

And then there's this: against Firefox 147, the model successfully developed JavaScript shell exploits 181 times. Claude Opus 4.6, the previous best model? Twice."

Browsers look to be much more exploitable than operating systems, as expected.
 
Slight problem, FreeBSD doesn't get to decide if it is "worthwhile participating." The choice is participate now, or let somebody else use the tool on FreeBSD later and "participate" by having zero-days drop like rain.
Or... someone else, somewhere else, develops a similar AI with similar capabilities, which you have no option of participating in, and that is going to be used against you. If anthropic can do this... others can, or will soon have that capability. "What one fool can do, another can".

What would be somewhat annoying is if they want money to tell you where the bugs are... given that they were handed the code that they analysed for free in the first place. Although I suppose there is a certain cost to building and running the model, however, if they are skimming a big margin from it, that doesn't sound very attractive to me, more like a protection racket, as ataxia said.
 
Perhaps, although the majority of apps don't contain a language interpreter. JavaScript was the primary tool exploited, according to that article. I bet there are similar problems with things like PDFs and spreadsheets.
 
Well... let's see if MS submits the windows source code for the AI's evaluation. Although if they do, it's going to be NDA'd to high heaven, so we will probably never know.

And for all we know, outfits like the NSA, and their foreign equivalents, may have already had this kind of capability for some time, and kept quiet about it. Usually when something makes it out into civvie street, it's already been used for some time by the security services.
 