DeepMind Acknowledges "Intelligence Explosion" Risks in Technical Safety Roadmap

Google DeepMind believes we could soon enter a phase where artificial intelligence creates a "runaway positive feedback loop" of rapid technological advancement, according to a previously unreported technical safety roadmap released by the company. The document explicitly acknowledges scenarios where AI systems automate scientific research to the point where human researchers become "obsolete," dramatically accelerating the pace of AI development.
End of Miles reports that this stark assessment appears in DeepMind's April 2025 technical paper titled "An Approach to Technical AGI Safety and Security," which outlines the Google AI lab's strategy for managing risks from increasingly powerful AI systems.
DeepMind's Stark Assessment of Accelerating AI Growth
In the document, DeepMind researchers explicitly embrace what they call the "potential for accelerating improvement" assumption—the idea that AI systems could trigger unprecedented growth rates that dramatically compress development timelines.
"We find it plausible that as AI systems automate scientific research and development, we enter a phase of accelerating growth in which automated R&D enables the development of greater numbers and efficiency of AI systems, enabling even more automated R&D, kicking off a runaway positive feedback loop." Google DeepMind, "An Approach to Technical AGI Safety and Security"
This assessment represents an unusually direct acknowledgment from a major AI lab of a concept long discussed in AI safety circles: the possibility of rapid and potentially uncontrollable growth in AI capabilities once the technology can improve itself. Researchers often refer to this scenario as an "intelligence explosion" or recursive self-improvement.
The Google AI lab emphasizes that such acceleration would sharply reduce the time available to respond to safety issues. "This would drastically increase the pace of progress, giving us very little calendar time in which to notice and react to issues that come up," the DeepMind researchers state.
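The dynamic the researchers describe can be made concrete with a toy model. The sketch below is not drawn from DeepMind's paper; the base rate, the feedback exponent, and the capability units are arbitrary assumptions chosen only to contrast steady exponential progress with a runaway loop in which automated R&D compounds on itself.

```python
# Toy sketch of the feedback loop described in the paper: AI capability
# raises automated R&D throughput, which raises capability further.
# All numbers here (base_rate, the feedback exponent, the capability
# units) are arbitrary assumptions for illustration; this is not
# DeepMind's model.

def simulate(feedback: float, years: float = 6.0, dt: float = 0.001):
    """Euler-integrate capability growth; feedback > 0 adds the runaway loop."""
    capability = 1.0        # abstract capability index
    base_rate = 0.5         # annual rate of progress with human-only R&D
    checkpoints = []
    for step in range(1, int(years / dt) + 1):
        # Automated R&D throughput scales with capability itself.
        rd_throughput = capability ** feedback
        capability += base_rate * rd_throughput * capability * dt
        if capability > 1e9:                 # trajectory diverges in finite time
            checkpoints.append((round(step * dt, 2), float("inf")))
            break
        if step % 500 == 0:                  # record every half simulated year
            checkpoints.append((round(step * dt, 2), capability))
    return checkpoints

if __name__ == "__main__":
    print("feedback = 0.0 (ordinary exponential progress):")
    for t, c in simulate(feedback=0.0):
        print(f"  year {t:4.1f}  capability {c:>12.1f}")
    print("feedback = 0.5 (runaway positive feedback loop):")
    for t, c in simulate(feedback=0.5):
        print(f"  year {t:4.1f}  capability {c:>12.1f}")
```

With no feedback, capability roughly doubles every year and a half; with even a modest feedback exponent, the same model blows up within a few simulated years, which is the compression of timelines the paper warns about.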
Technical Analysis Supports Possibility of Hyperbolic Growth
The DeepMind paper goes beyond merely acknowledging the possibility, presenting detailed technical considerations about factors that could lead to such acceleration. The company's analysis relies on research into the returns to software R&D and how such returns might enable what they call a "software-only singularity."
"We believe it is plausible that the returns to software R&D could be sufficient to support hyperbolic growth, and we should be prepared for this eventuality." Google DeepMind technical paper
The tech giant's researchers cite economic models indicating that if AI enables capital to substitute for labor, the result would be "explosive growth." They also point to surveys showing that many AI researchers already see such scenarios as plausible, with one survey finding that experts assign a median 20% probability to explosive growth occurring just two years after advanced AI is achieved.
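In this literature, "hyperbolic growth" is a stronger claim than exponential growth: the growth rate rises with the capability level itself, so the trajectory reaches a finite-time singularity rather than merely compounding. The comparison below is a minimal illustration of that distinction, not a model taken from the DeepMind paper; A is an abstract capability level and c and ε are assumed positive constants.

```latex
% Exponential growth: the rate is proportional to the level; A stays finite for all t.
\frac{dA}{dt} = cA
  \quad\Longrightarrow\quad
  A(t) = A_0 e^{ct}

% Hyperbolic growth: strong enough returns to R&D make the rate outpace the level,
% and the trajectory diverges at a finite time t^* (the "singularity").
\frac{dA}{dt} = cA^{1+\varepsilon}
  \quad\Longrightarrow\quad
  A(t) = \frac{A_0}{\bigl(1 - \varepsilon c A_0^{\varepsilon}\, t\bigr)^{1/\varepsilon}},
  \qquad
  t^{*} = \frac{1}{\varepsilon c A_0^{\varepsilon}}
```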
Human Researchers Could Become "Obsolete"
Perhaps most striking is DeepMind's matter-of-fact assessment that AI development could reach a point where human researchers are sidelined completely. The document states that "we expect that at some earlier point, human researchers may have become obsolete due to automated researchers."
The AI lab's approach to managing this risk focuses on using AI systems themselves to accelerate safety research in parallel with AI capabilities advancement. "A key part of safety and security will be bootstrapping; that is, AI systems are designed by earlier AI systems that have already been aligned, and do a better job at subsequent alignment than humans would," the researchers explain.
While comparable acknowledgments have appeared in academic papers and from individual AI safety researchers, this level of direct engagement with intelligence explosion scenarios in an official technical roadmap from one of the world's leading AI labs represents a significant shift in how major AI developers discuss long-term risks.
Preparation Rather Than Dismissal
Unlike previous corporate statements that often downplayed such theoretical concerns, DeepMind's approach accepts the possibility and focuses on preparation. The lab's technical document argues that the possibility of explosive growth "motivates the need for an anytime safety approach that is responsive to early indicators of such developments."
The Alphabet subsidiary's assessment is particularly notable given the ongoing debate within the AI field about the plausibility of such scenarios. While the document acknowledges that "forecasting the timeline (or indeed the eventual likelihood) of accelerating growth is fraught with difficulty," it concludes there are "strong arguments for the possibility of rapidly accelerating capability development."