"Maybe People Would Be Dead?" — The Chilling Calm of AI Doomsayers


"Maybe the sky would be filled with pollution, and the people would be dead? Something like that," says Daniel Kokotajlo with remarkable nonchalance when asked what might happen if artificial intelligence development goes poorly in the next few years.

End of Miles reports that this striking emotional disconnect is increasingly common among researchers who study catastrophic AI risk, revealing a psychological dynamic that may significantly impact how existential threats are communicated to the public and policymakers.

The Casual Apocalypse

Kokotajlo, a former OpenAI researcher who now leads the AI Futures Project in Berkeley, delivered this matter-of-fact assessment of human extinction while gazing out his office window during a recent interview with The New York Times. The clinical tone stood in stark contrast to the apocalyptic content of his prediction.

"We predict that A.I.s will continue to improve to the point where they're fully autonomous agents that are better than humans at everything by the end of 2027 or so." Daniel Kokotajlo, AI Futures Project

This isn't just casual speculation from the Berkeley researcher. According to the NYT report, Kokotajlo previously told reporters he believes there is a 70 percent chance that AI will "destroy or catastrophically harm humanity" — making him one of the most pessimistic voices in mainstream AI safety circles.

The Psychology Behind the Detachment

The detachment in Kokotajlo's response reveals something important about how AI safety researchers process the very risks they study. The AI safety expert spends his days mapping out detailed scenarios of technological development that could end in human extinction, work that may create an emotional buffer between himself and the implications of his own predictions.

This psychological distancing isn't unique to Kokotajlo. Within AI safety communities, discussions about human extinction often employ clinical language, statistical probabilities, and matter-of-fact assessments that strip away the emotional weight such topics would typically carry.

"It's an elegant, convenient way to communicate your view to other people." Kokotajlo on his forecasting approach

Why This Emotional Disconnect Matters

The former OpenAI governance team member's clinical approach to discussing human extinction has significant implications for how AI risks are communicated and perceived. When experts discuss catastrophic outcomes with such detachment, it can create communication barriers between technical experts and the broader public.

The AI researcher's emotional disconnect might serve as a coping mechanism for those who spend their careers contemplating existential risk, but it can also contribute to a troubling dynamic: technical communities discussing world-ending scenarios in language that fails to convey appropriate urgency to policymakers and the public.

This communication gap becomes especially concerning as the Berkeley-based forecaster predicts increasingly rapid AI development. His organization's recent report, "AI 2027," describes a detailed fictional scenario in which artificial intelligence surpasses human capabilities within the next two years, potentially leading to systems that could go rogue.

Whether Kokotajlo's predictions prove accurate remains to be seen. But the matter-of-fact way in which he and others in the AI safety community discuss potential extinction events raises important questions about who shapes our understanding of AI risk, and whether their psychological relationship to those risks colors both our collective response and our ability to grasp the stakes of the technological developments unfolding around us.
