Experts Warn of 'Manhattan Trap' Risks in US-China Race to Artificial Superintelligence

Racing to develop artificial superintelligence could trigger a catastrophic chain of events that undermines America's national security instead of enhancing it, warn two prominent AI safety researchers who have introduced the concept of the "Manhattan Trap" to describe this dangerous paradox.
End of Miles reports that the three-pronged threat of escalating great-power conflict, loss of control over autonomous systems, and the undermining of democratic institutions creates a situation where even "winning" such a race could mean ultimate failure for the United States.
A Race That Could Trigger War
Corin Katzke from Convergence Analysis and Gideon Futerman, an Oxford University researcher, argue that China would rationally view a US artificial superintelligence (ASI) project as an existential threat, potentially provoking military action before completion.
"If China's military planners believed the US was on track to develop ASI, they would face a stark choice: accept permanent subordination to US power, or take military action to prevent US ASI development." Katzke and Futerman
The researchers point out that such action wouldn't necessarily mean full-scale nuclear war. Instead, Chinese strategists might calculate that limited conventional attacks or even tactical nuclear strikes on US AI infrastructure could disrupt ASI development while staying below the threshold for full strategic retaliation.
The Control Problem Intensifies Under Racing Conditions
Unlike nuclear weapons, which require human operators, ASI would act autonomously, creating unprecedented risks of losing control over systems more powerful than national militaries.
"The accelerated pace of development implied by an ASI race makes the problem of control even more severe. For ASI to provide decisive military advantage, it would need to not only surpass current military capabilities but also outpace other frontier AI systems being concurrently developed." The Oxford researcher
Such rapid capability gains, possibly driven by self-improvement mechanisms, would make developing reliable control methods nearly impossible. Competitive pressure would further incentivize cutting corners on safety, while secrecy requirements would prevent the open research collaboration needed to solve hard technical problems.
Democracy's Self-Destruction
Even if the United States somehow avoided both conflict and loss of control, Futerman and his colleague argue that success might still mean failure through power concentration incompatible with democratic governance.
"A single corrupt official, a successful coup, or even a sophisticated hack could transform the United States from a democracy into a techno-autocracy more absolute than any in history." The AI safety experts
The small group controlling such systems would wield unprecedented power not just internationally but domestically, creating a decisive advantage over every other institution in American society, including the military, intelligence agencies, and law enforcement.
A Trust Dilemma, Not an Inevitable Race
Contrary to dominant narratives, the analysis from Convergence suggests the US and China face what game theorists call a "trust dilemma" rather than being trapped in an inevitable race. Unlike in a prisoner's dilemma, both sides would actually prefer mutual restraint and would race only if they believed the other side was racing, a structural difference the sketch below illustrates.
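The distinction matters because the two games have different equilibrium structures. As a minimal illustration, the following Python sketch computes the pure-strategy Nash equilibria of each game; the payoff numbers are illustrative assumptions, not figures from Katzke and Futerman's paper. In a prisoner's dilemma, racing is the only stable outcome, whereas in a trust dilemma (often called a stag hunt) mutual restraint is also an equilibrium, so cooperation can be self-sustaining once each side trusts the other.

```python
# Illustrative sketch: prisoner's dilemma vs. trust dilemma (stag hunt).
# Payoff values are assumed for demonstration, not taken from the paper.
from itertools import product

def pure_nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a 2-player game.

    payoffs maps (row_action, col_action) -> (row_payoff, col_payoff).
    """
    actions = sorted({row for row, _ in payoffs})
    equilibria = []
    for r, c in product(actions, repeat=2):
        # Neither player can gain by unilaterally switching actions.
        row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in actions)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in actions)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# Prisoner's dilemma: racing strictly dominates, so mutual racing is the
# only equilibrium even though both sides prefer mutual restraint.
prisoners_dilemma = {
    ("restrain", "restrain"): (3, 3),
    ("restrain", "race"):     (0, 4),
    ("race", "restrain"):     (4, 0),
    ("race", "race"):         (1, 1),
}

# Trust dilemma: mutual restraint pays best, so restraint is itself an
# equilibrium; a side races only if it expects the other to race.
trust_dilemma = {
    ("restrain", "restrain"): (4, 4),
    ("restrain", "race"):     (0, 2),
    ("race", "restrain"):     (2, 0),
    ("race", "race"):         (1, 1),
}

print(pure_nash_equilibria(prisoners_dilemma))  # [('race', 'race')]
print(pure_nash_equilibria(trust_dilemma))      # [('race', 'race'), ('restrain', 'restrain')]
```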
The AI researchers propose that verification mechanisms similar to those used for nuclear non-proliferation could help maintain cooperation, arguing that the extreme dangers of racing actually make cooperation more robust. They advocate a bilateral US-China dialogue on ASI development that includes government officials and technical experts who can design specific verification mechanisms.
"Rather than rushing toward catastrophe in a misguided belief that racing is inevitable, the US and China should recognize their shared interest in avoiding an ASI race." Katzke and Futerman