AI hackers shake up cyber-security

Source: economist.com

TL;DR

Anthropic announced on April 7th, 2026, that its new AI model Mythos would not be released publicly because it excels at finding and exploiting security holes in software, surpassing most human researchers. Instead, the firm is restricting access through Project Glasswing to 12 founding firms, including Apple, Google, and Nvidia, plus 40 more infrastructure companies. The decision follows rapid recent advances in AI-driven bug detection and reflects concern about the risks such models pose to digital systems.

Details and context

Zero-day bugs, flaws unknown to defenders until they are discovered or exploited, lurk in most software because exhaustive human review is impractical. Jeff Williams of Contrast Security notes that they are everywhere; Mythos demonstrates genuine novelty by finding previously unknown bugs rather than repeating examples from its training data.
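To see why such flaws evade human review, consider a minimal, hypothetical sketch (the function names and scenario are illustrative, not from the article) of a path check that looks correct but contains a classic, exploitable oversight:

```python
import os.path

def is_safe(requested: str) -> bool:
    # Intended rule: only serve files under /safe/.
    # Subtle flaw: a prefix check does not account for ".." traversal.
    return requested.startswith("/safe/")

def resolve(requested: str) -> str:
    # What the filesystem actually sees after path normalization.
    return os.path.normpath(requested)

attack = "/safe/../etc/passwd"
print(is_safe(attack))   # the validation accepts the request
print(resolve(attack))   # yet the resolved path escapes /safe/ entirely
```

Code like this passes casual review because each line reads sensibly in isolation; the bug only appears when validation and resolution are considered together, which is exactly the kind of cross-cutting reasoning automated scanners can apply exhaustively.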

AI-generated bug reports were once dominated by false positives and trivial findings, but Bruce Schneier observes that their quality has recently improved markedly. The decisive question is the race between offence and defence: can patches be written and deployed faster than exploits? Project Glasswing gives a select group of firms early access so they can harden internet-critical code before such AI capabilities proliferate broadly.

Unmaintained code in routers, televisions, fridges, and industrial machines poses a particular risk: with no one left to patch it, attackers could exploit it freely. Researchers expect defenders to win in the long run, since code can be scanned for flaws before publication, but they anticipate short-term chaos as the capabilities spread.

Why it matters

AI models like Mythos can expose flaws across operating systems, browsers, and cryptographic software, threatening e-commerce, finance, and infrastructure if misused. Businesses face greater urgency to patch, while consumers and investors face risks from unpatched devices such as routers and IoT gadgets. Things to watch: the patches that emerge from Project Glasswing, whether AI-driven exploitation outpaces fixes, and how cautiously future models are released to the public.