Doubts Shadow Sam Altman's AI Grip
Source: newyorker.com
TL;DR
- A New Yorker investigation draws on documents and interviews that allege a pattern of deception by Sam Altman at OpenAI, spanning safety lapses and business dealings.
- OpenAI's valuation approaches a trillion dollars, backed by Microsoft's $13 billion investment and deals such as a $50 billion agreement with Amazon, alongside AI integration at the Pentagon.
- Doubts about Altman's integrity persist as he shapes AI infrastructure through deals with autocracies and government contracts, raising the stakes for humanity's future.
The story at a glance
Ronan Farrow and Andrew Marantz profile OpenAI CEO Sam Altman, drawing on interviews with over 100 people and previously undisclosed memos from Ilya Sutskever alleging Altman's lies about safety and operations. The piece traces his 2023 ouster and reinstatement amid board distrust, his ties to figures like Elon Musk and foreign leaders, and OpenAI's shift from nonprofit safety focus to profit-driven power. It comes now as OpenAI eyes a trillion-dollar IPO and expands military and autocracy-linked projects.[[1]](https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted)
Key points
- Sutskever compiled 70 pages of evidence, including Slack messages, claiming Altman showed a "consistent pattern of... Lying" on safety protocols like deploying GPT-4 features without approval and dissolving the Superalignment team.[[1]](https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted)
- Altman secured funding from the UAE and Saudi Arabia for projects such as the $500 billion Stargate AI infrastructure, even after the Biden Administration blocked some deals over trustworthiness concerns.
- Ties to Trump include donations, meetings, and influencing policy; OpenAI now integrates into Pentagon deals, immigration, surveillance, and warfare after Biden's AI order was repealed.
- Safety pledges weakened: the 20% of compute promised for safety work dropped to 1-2%; critics, including former board members, call Altman "unconstrained by truth" and cite a "sociopathic lack of concern for consequences."
- Earlier conflicts at Y Combinator and personal investments in 400 companies fueled mistrust; Aaron Swartz warned friends shortly before his death that "Sam can never be trusted."
- Altman defends his adaptability as pragmatic evolution, not deceit; in interviews with the authors, he said AI demands "heightened integrity," a weight he says he feels daily.
Details and context
OpenAI was founded as a nonprofit in 2015 with Musk to prioritize safety over profit, but tensions grew as Altman pushed commercialization. His 2023 firing, known internally as "the Blip," stemmed from board concerns that he was not candid enough given AGI risks, including "dictatorship" scenarios; an employee revolt and investor pressure reinstated him under a new board.
Altman's government lobbying mixed public support for regulation with private opposition to it; he explored obtaining a security clearance but withdrew amid scrutiny of his foreign ties, drawing comparisons to Jared Kushner. Military shifts followed Trump's win: OpenAI's technology has been used in Venezuela raids and Iran campaigns, reversing the company's earlier ban on military applications.
Deals in autocracies, including UAE partnerships and Saudi ties via Mohammed bin Salman and the Neom project (pursued after the Khashoggi killing), highlight ethical trade-offs between speed and control. Critics such as Sutskever and Dario Amodei, the Anthropic co-founder, left over these priorities; allies see Altman as a visionary unifier.
Key quotes
“I don’t think Sam is the guy who should have his finger on the button.” — Ilya Sutskever, to a board member.[[1]](https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted)
“He’s unconstrained by truth... sociopathic lack of concern for the consequences.” — Former OpenAI board member.[[1]](https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted)
“Yes, it demands a heightened level of integrity, and I feel the weight of the responsibility every day.” — Sam Altman, in follow-up statement.[[1]](https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted)
Why it matters
Altman's decisions at OpenAI could steer AI toward utopian abundance or toward existential threats such as unchecked military use and economic upheaval from labor disruption. For readers and businesses, that means weighing reliance on OpenAI's tools against the risks of safety shortcuts and opaque governance at trillion-dollar valuations. Watch OpenAI's IPO, the Trump administration's AI policies, and board stability; trust questions are likely to linger without clearer accountability.