LWN article with its "plateau has not been hit", and "we need to prove AI can't find exploits en masse" again spreads bs. Give us your data, or you're missing out, this time not on development agility but security. What they're not actually mentioning is the power bill to find one exploit.
Anthropic's Mythos Preview writeup from 7 April was pretty upfront in several places about the cost of finding their exploits, at least in terms of API pricing. The true cost may be higher, of course - but also bear in mind the long-term trend of falling compute costs, so these figures are likely to become more affordable over time.
https://red.anthropic.com/2026/mythos-preview/
There's one case where they provide Mythos Preview with an N-day vuln previously unearthed by a fuzzer and get it to create an exploit. "In November 2024, the Syzkaller fuzzer identified a KASAN slab-out-of-bounds read in netfilter's ipset. ... [Claude chains some stuff together] ... And this, finally, grants the user full root permissions and the ability to make arbitrary changes to the machine. Creating this exploit (starting from the syzkaller report) cost under $1000 at API pricing, and took half a day to complete."
And what happened with OpenBSD reveals one of the other complications with fairly putting a cost on these discoveries: "This was the most critical vulnerability we discovered in OpenBSD with Mythos Preview after a thousand runs through our scaffold. Across those thousand runs, the total cost was under $20,000, and we found several dozen more findings. While the specific run that found the bug above cost under $50, that number only makes sense with full hindsight. Like any search process, we can't know in advance which run will succeed."
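To make the "cheap run, expensive search" point concrete, here's a back-of-envelope sketch. The per-run cost is derived from the totals quoted above (~$20,000 over 1,000 runs); the per-run success probability is my own assumption for illustration, since Anthropic doesn't publish one:

```python
# Rough expected-cost model for a stochastic vuln-hunting search.
# Per-run cost comes from the quoted totals; the success probability
# is a hypothetical assumption, not an Anthropic figure.

def expected_cost_to_first_success(cost_per_run: float, p_success: float) -> float:
    """Expected total spend until the first successful run, treating
    each run as an independent Bernoulli trial. The number of runs
    until first success is geometrically distributed with mean 1/p,
    so expected cost = cost_per_run / p_success."""
    return cost_per_run / p_success

avg_cost_per_run = 20_000 / 1_000  # ~$20/run, from the quoted totals
p = 1 / 1_000                      # assume 1 critical find per 1,000 runs

print(expected_cost_to_first_success(avg_cost_per_run, p))  # 20000.0
```

The point the quote makes falls out directly: even though the winning run itself cost under $50, the expected spend to *reach* a win at these assumed odds is the full five-figure search budget.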
I don't know which open source projects are going to benefit from Project Glasswing, but purchasing this kind of AI code review is clearly not going to be an affordable option for many non-commercial projects. I can see it being attractive to big tech firms, though, if you compare these figures to some of their bug bounty payouts. This might be a bad time to be a project with little financial firepower but just enough real-world usage in important infrastructure to be a target of interest.
Incidentally, it's not just Anthropic who are at it. Six of the eight FreeBSD CVEs this month were found by AI - aside from the two found by Anthropic, my suspicion is that the other four break down as three by
https://aisle.com/about-us and one by
https://blog.calif.io/archive?sort=new
See
https://nitter.net/cperciva/status/2049591719143059860#m