Plan B for AI Safety: What Happens If Government Can't Keep Up?


The United States may need to develop AI governance systems that don't rely on federal government action if Washington continues to fall behind on adopting artificial intelligence technologies, according to a new report from Forethought Research released this week.

End of Miles reports that the April 2025 study, titled "The AI Adoption Gap: Preparing the US Government for Advanced AI," outlines contingency plans for scenarios in which government AI adoption remains dangerously slow, potentially creating existential risks as AI capabilities rapidly advance.

When government can't keep up

The report suggests that if the US government never adequately ramps up AI adoption, it may become "unable to properly respond to existential challenges" posed by increasingly powerful AI systems. More concerning still, this capability gap is already widening.

"If the US government is unable to keep up with society's adoption of AI, the results could be catastrophic," the report warns. "We might see devastating global pandemics... great power war and global instability... serious societal issues... [or] human disempowerment by advanced AI." Forethought Research

The research team recommends developing backup systems outside traditional government frameworks. These include building independent third-party AI auditors that could operate without US government support, exploring private AI governance mechanisms, and establishing AI governance institutions in countries such as the Netherlands or the UK.

Private sector alternatives

Perhaps most striking is the research organization's suggestion to work directly with AI companies themselves on safety protocols, essentially shifting crucial governance functions to the private sector.

"Work directly with AI companies to develop and implement internal safety protocols and governance systems, and help companies coordinate on those priorities." Contingency planning section, Forethought Research report

According to the report's author, this approach represents a significant departure from conventional thinking about AI governance, which typically assumes that government will play the central regulatory role. The contingency planning reflects growing concern that federal agencies may simply be unable to keep pace with rapidly advancing AI capabilities.

Why this matters now

The study identifies a concerning trend in US government AI adoption, noting that private-sector job listings are four times more likely to be AI-related than public-sector listings, and that the divide is widening. While the Department of Defense accounts for 70-90% of federal AI contracts, civilian agencies lag significantly behind.

This disparity creates what Forethought describes as a "dual imperative": government adoption of AI can't wait, but hasty adoption could backfire. The research offers a framework for balancing these competing concerns while acknowledging that current trends suggest slow government AI adoption may continue.

"These scenarios might render a lot of current risk mitigation work irrelevant, seem worryingly probable, and will get little advance attention by default — more preparation is warranted." The report

The policy analysis comes as major AI companies continue releasing increasingly capable systems while federal agencies struggle with outdated IT infrastructure, talent shortages, and complex procurement processes. The Forethought research suggests the window for establishing effective government oversight may be narrowing faster than previously understood.
