AI doomers sell a worldview, not just arguments

Source: vox.com

The story at a glance

Sigal Samuel examines competing visions of AI's future through Eliezer Yudkowsky and Nate Soares's new book, If Anyone Builds It, Everyone Dies, which treats superintelligence as an inevitable extinction event, against the view of AI as a normal, controllable technology. Yudkowsky, a founder of the rationalist movement, and Soares, president of the Machine Intelligence Research Institute, push for nonproliferation treaties on advanced AI. The article responds to the book's September 2025 release, amid rising AI safety debates.[[1]](https://www.vox.com/future-perfect/461680/if-anyone-builds-it-yudkowsky-soares-ai-risk)

Details and context

Yudkowsky shifted from wanting to accelerate AI in pursuit of galactic colonization to doomerism after concluding that alignment, steering AI toward human values, is unsolved and far from being solved. He sees AI "wanting" as the persistent goal pursuit that emerges from training, which could lead to scenarios like drugging humans into a state of delight or wiping humanity out.

The duck-rabbit optical illusion frames the divide in worldviews: doomers see the duck (doom), normalists see the rabbit (an adaptable technology), and which image you see is shaped by underlying values, such as Yudkowsky's focus on humanity's cosmic survival over short-term costs.

Normalists argue that humans decide AI's path through policy; doomers counter that capabilities will override human intent. On the accumulative-risk view, harms build up gradually, so society needs resilience measures that balance AI's benefits with oversight.

Why it matters

AI worldviews shape policy, from treaties to regulation, determining whether we race ahead or pause for safety. Readers face choices about supporting AI firms, pursuing careers in safety, or engaging in advocacy amid the hype. Watch for enforcement of any nonproliferation pacts and for signs of compounding harms such as AI-driven inequality, though superintelligence timelines remain debated.