Mythos and the insecurity avalanche
Source: newstatesman.com
TL;DR
- Anthropic's Mythos AI model uncovers thousands of high-severity vulnerabilities in major operating systems and browsers, sparking hype about hacking risks.
- Britain's AI Security Institute found Mythos completes in hours hacking challenges that take human teams 20 hours, though its real-world impact remains uncertain.
- Short-term, it accelerates an "insecurity avalanche" as criminals exploit flaws faster than companies patch them.
The story at a glance
Anthropic announced its new Mythos model, which found thousands of vulnerabilities, including a 27-year-old bug in OpenBSD, but withheld public release for safety reasons. In the New Statesman, Will Dunn critiques the company's apocalyptic warnings as marketing hype akin to OpenAI's past tactics, while acknowledging real risks from rapid vulnerability discovery. The piece appeared days after Anthropic's announcement and an evaluation by Britain's AI Security Institute (AISI). AI tools now outpace human bug hunters, but patching lags because fixes cost money.
Key points
- Mythos identified vulnerabilities in every major operating system and web browser; Anthropic is sharing it with 40+ companies via Project Glasswing to aid defenses.
- AISI tested Mythos in simulations, calling it a "step up" that completes complex hacks overnight, though it "cannot say for sure" about well-defended real systems.[[1]](https://www.newstatesman.com/science-tech/big-tech/2026/04/anthropic-mythos-insecurity-avalanche)
- AI finds flaws more cheaply and quickly than bug-bounty programmes, which pay out tens of thousands per flaw; over the past one to two years, criminals have shifted to vulnerability hunting using tools such as WormGPT.
- Known vulnerabilities often stay unpatched for over a year because fixes cost money (patching an industrial system can mean shutting down a factory), and IT departments prioritize employee laptops over broader infrastructure.
- Exploit markets have been automated: access to hacked systems now sells within 20 seconds, down from eight hours in 2022, and 75% of attacks now involve patient information-gathering, up from 25%.
- Long-term, AI such as Mythos could make new software more secure, but legacy systems face what one expert describes as an "avalanche" of exploits that companies aren't ready for.
Details and context
Anthropic's warnings echo OpenAI's 2019 GPT-2 launch, when doomsday claims burnished the company's image before the model was eventually released; Mythos follows the same pattern despite the initial holdback. Criminals who once used generative AI to write ineffective viruses now hunt zero-days, creating what experts call the "fastest market for crime ever". The patching trade-off bites hard: financial incentives delay fixes on industrial systems, leaving them exposed just as AI accelerates discovery.
AISI's controlled tests limit certainty: Mythos excels in lab conditions but faces unknowns in well-defended networks. This fits the broader trend of AI in cybersecurity, with models exposing gaps faster than defenders can close them.
Key quotes
- "We cannot say for sure... whether Mythos Preview would be able to attack well-defended systems." (AISI)[[1]](https://www.newstatesman.com/science-tech/big-tech/2026/04/anthropic-mythos-insecurity-avalanche)
- "Automation has created the fastest and most efficient market for crime that has ever existed." (Experts to author)
Why it matters
AI-driven vulnerability hunting exposes deep flaws in software foundations, risking widespread exploits amid slow patching. Companies face urgent pressure to prioritize security budgets, while users and investors should expect more breaches from unpatched legacy systems. Watch Anthropic's phased Mythos rollout and the results of Project Glasswing, though real-world defences may temper the hype.