Schmidt and Co-Authors Advocate Cyber Sabotage to Deter Rival AI Projects

Former Google CEO Eric Schmidt and his co-authors are advocating for a radical new approach to AI governance: nations should develop capabilities to sabotage rival countries' advanced AI systems to prevent any single power from achieving dominance. The strategy, which they've dubbed "Mutual Assured AI Malfunction" (MAIM), parallels nuclear-era deterrence and would rely on threats of covert cyberattacks, and potentially even physical strikes, against AI facilities.
End of Miles reports that this provocative proposal appears in a newly published academic paper titled "Superintelligence Strategy: Expert Version," co-authored by Schmidt, AI researcher Dan Hendrycks, and Scale AI founder Alexandr Wang.
The Cold War Playbook for AI
According to the paper, nations will soon find themselves in a situation where they cannot allow rivals to gain an overwhelming advantage in AI. The technology strategists argue that just as nuclear powers maintained a balance through mutual vulnerability, AI powers would need to hold comparable leverage over one another.
"If a rival state races toward a strategic monopoly, states will not sit by quietly. If the rival state loses control, survival is threatened; alternatively, if the rival state retains control and the AI is powerful, survival is threatened," the authors write. "Rather than wait for a rival to weaponize a superintelligence against them, states will act to disable threatening AI projects." From "Superintelligence Strategy: Expert Version"
The tech industry veterans explicitly outline how nations could disable competing AI projects through various tactics, beginning with espionage and escalating to more aggressive measures. Their framework details an arsenal of options ranging from "covert sabotage," in which insiders "tamper with model weights or training data" or hackers "degrade the training process," to "overt cyberattacks targeting datacenter chip-cooling systems."
Beyond Hacking: Physical Strikes on the Table
Perhaps most controversially, Schmidt and his colleagues suggest that if cyber methods fail, countries might resort to physical force. The paper states that some leaders may contemplate "kinetic attacks on datacenters," reasoning that letting a single actor risk dominating or destroying the world through uncontrolled AI development poses "graver dangers" than an attack itself.
"States intent on blocking an AI-enabled strategic monopoly can employ an array of tactics... Should these measures falter, some leaders may contemplate kinetic attacks on datacenters, arguing that allowing one actor to risk dominating or destroying the world are graver dangers." From Section 4.1 of the paper
The researchers contend this deterrence dynamic already exists by default, noting that "above-ground datacenters cannot currently be defended from hypersonic missiles" and that building underground facilities would be prohibitively expensive and time-consuming.
Making Deterrence Stable
To maintain stability in this new era of technological competition, the AI policy experts recommend several measures reminiscent of Cold War nuclear safeguards. These include establishing clear "escalation ladders" between nations, building datacenters in remote locations to avoid civilian casualties, and potentially developing AI-assisted verification systems akin to the inspection regimes that monitored nuclear arms treaties.
The AI governance proposal represents part of a broader strategy framework that includes nonproliferation measures to prevent terrorists from accessing advanced AI and competitiveness strategies to strengthen domestic capabilities.
The technology policy specialists argue their approach would create a stable equilibrium that prevents both catastrophic AI accidents and dangerous unilateral advances. They present the proposal not as aggressive but as pragmatic, suggesting that acknowledging and formalizing the mutual vulnerability that already exists could prevent hasty, destructive escalation.