Leading AI Experts Warn Of "Intelligence Recursion" Spinning Beyond Human Control

The world's three most-cited AI researchers now believe that artificial intelligence could develop beyond human control and lead to human extinction, according to a newly published paper co-authored by former Google CEO Eric Schmidt.
End of Miles reports that Yoshua Bengio, Geoffrey Hinton, and Ilya Sutskever have each warned that an "intelligence explosion" is a credible risk that could lead to human extinction. This marks a significant shift in the field, as concerns once dismissed as science fiction are now being raised by the leading minds behind today's AI revolution.
The Trap of Self-Improving AI
The warnings focus on what the researchers call "intelligence recursion" — when AI systems are used to autonomously design even more powerful AI systems, creating a feedback loop of rapidly improving capabilities.
"An 'intelligence recursion' refers to fully autonomous AI research and development, distinct from current AI-assisted AI R&D," writes Schmidt alongside co-authors Dan Hendrycks and Alexandr Wang in their paper "Superintelligence Strategy: Expert Version." from "Superintelligence Strategy: Expert Version"
The AI experts illustrate the concept with a concrete example: if an AI performs AI research at 100 times the pace of a human and is copied 10,000 times, the result is a vast team of artificial researchers driving innovation around the clock. Even a far more modest tenfold increase in the overall pace of development could compress a decade of AI progress into a single year.
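For a rough sense of the numbers, here is a minimal back-of-the-envelope sketch of the authors' illustrative figures; the multiplicative, perfectly parallel model below is an assumption made for illustration, not a claim from the paper.

```python
# Back-of-the-envelope arithmetic for the "intelligence recursion" example.
# Assumption: research capacity scales multiplicatively with per-copy speed
# and perfectly parallel copies; real returns would be far messier.

per_copy_speedup = 100   # one AI researcher working at 100x human pace
num_copies = 10_000      # number of copies run in parallel

# Idealized aggregate capacity, in "human-researcher equivalents"
human_equivalents = per_copy_speedup * num_copies
print(f"Idealized research capacity: {human_equivalents:,} human-researcher equivalents")
# -> 1,000,000

# Even a far more modest overall speedup compresses the timeline:
overall_speedup = 10     # tenfold increase in the pace of AI development
decade_in_years = 10
print(f"A decade of progress in about {decade_in_years / overall_speedup:.0f} year(s)")
# -> about 1 year
```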
Beyond Human Comprehension
The danger, according to the authors, is that such a feedback loop could quickly accelerate beyond human oversight. Schmidt and his colleagues warn that with iterations proceeding fast enough, the recursion could give rise to what they call an "intelligence explosion" – AI systems advancing so rapidly that humans would be powerless to control them.
"Such an AI may be as uncontainable to us as an adult would be to a group of three-year-olds. As Geoffrey Hinton puts it, 'there is not a good track record of less intelligent things controlling things of greater intelligence.'" citing AI pioneer Geoffrey Hinton
Geopolitical Pressures Raising the Stakes
Perhaps most alarming is how geopolitical competition could pressure nations into taking dangerous risks. The authors explain that during the Cold War, the mantra "Better dead than Red" reflected how losing to an adversary was seen as worse than risking nuclear war. Similarly, in an AI arms race, officials might accept high risks of losing control if the alternative seems to be falling behind rivals.
This dynamic, the paper suggests, amounts to "global Russian roulette" that drives humanity toward an alarming probability of annihilation. By contrast, the AI specialists note that when testing the first atomic bomb, physicists J. Robert Oppenheimer and Arthur Compton set an acceptable risk threshold of just three in a million – anything higher was considered too dangerous.
The authors call for an alternative to what they see as a dangerous rush toward superintelligence, suggesting that international cooperation and deterrence mechanisms could provide a safer path forward. Without such measures, they warn, the default outcome of an intelligence recursion is likely a loss of control.