The real AI threat: Kaplan says it's the ecosystem, not a singleton

"The ecosystem itself could have a lot of problems," warns Jared Kaplan, co-founder and chief scientist at Anthropic, discussing how emergent behaviors from interacting AI systems may pose greater risks than any single superintelligent entity.
End of Miles reports that this perspective represents a significant shift from traditional AI risk frameworks that have historically focused on singleton scenarios — where a single superintelligent system gains overwhelming power.
Moving Beyond the Singleton Model
The Anthropic scientist explains that concerns about AI risk have often been framed through the lens of a solitary, all-powerful AI, a concept popularized in Nick Bostrom's influential work. Kaplan, however, argues that reality is likely to unfold differently:
"I completely agree that super intelligence is not going to be one specific moment where we hit super intelligence. It is very much a continuum. AI is evolving in that way, and we're going to be using AI systems that are not very sophisticated — basically tools — forever." Jared Kaplan, Anthropic
Rather than facing a single entity, the computational physicist suggests we're approaching a world of "thousands or millions" of AI systems operating at various capability levels across the economy, creating fundamentally different control problems than those typically discussed in alignment research.
Unpredictable Interactions and System-Level Failures
The Stanford-trained scientist outlined a scenario where even perfectly aligned individual AI systems could collectively produce harmful outcomes through their interactions:
"This ecosystem could have a lot of problems that maybe Claude is aligned with, but the ecosystem has problems, and that's something that is so new that no one's really studied it, but it's something to worry about." Kaplan
He highlighted how future automated systems might interact in increasingly complex ways — like AI doctors negotiating with AI pharmacists and insurance systems — creating cascading effects no single entity fully comprehends.
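To make that failure mode concrete, here is a minimal toy simulation in Python, our own illustration rather than anything Kaplan or Anthropic has published. Two hypothetical agents each follow a locally sensible policy: an automated doctor escalates its filings when requests are denied, and an automated insurer tightens its approval threshold when request volume surges. All agent names, rules, and numbers are invented for illustration.

```python
"""Toy illustration of an ecosystem-level failure between two automated agents.

A deliberately simplified sketch, not anything from Anthropic or Kaplan:
an automated "doctor" files requests more aggressively when they get denied,
and an automated "insurer" tightens its approval threshold when request
volume surges. All names and numbers are hypothetical.
"""

def simulate(steps: int = 25) -> None:
    requests_per_patient = 1.0   # requests the doctor agent files per patient
    approval_threshold = 0.50    # insurer tightens this as volume grows
    backlog = 0.0                # patients still waiting for approved treatment
    patients_per_step = 100

    for step in range(steps):
        requests = patients_per_step * requests_per_patient
        # Insurer policy: a higher threshold approves a smaller fraction of requests.
        approval_rate = max(0.05, 1.0 - approval_threshold)
        approved = requests * approval_rate
        denied = requests - approved

        # New patients arrive each step; only approved requests clear the backlog.
        backlog = max(0.0, backlog + patients_per_step - approved)

        # Doctor policy (locally sensible): escalate filings when denials pile up.
        requests_per_patient *= 1.0 + 0.10 * (denied / max(requests, 1.0))

        # Insurer policy (locally sensible): tighten when volume exceeds patient load.
        approval_threshold = min(
            0.95,
            approval_threshold + 0.10 * (requests / patients_per_step - 1.0),
        )

        print(f"step {step:2d}: requests={requests:7.1f} "
              f"approval_rate={approval_rate:.2f} backlog={backlog:8.1f}")

if __name__ == "__main__":
    simulate()
```

Neither agent is misaligned on its own terms, yet their coupling forms a reinforcing loop that inflates request volume and leaves a growing backlog of untreated patients, the kind of interaction-level failure that no single system's designers would anticipate.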
The Comprehension Gap
The theoretical physicist turned AI researcher expressed particular concern about our diminishing understanding of increasingly complex technological systems:
"There's this general worry that the way that AI development will cause harm eventually is that things kind of go off the rails. In the modern world, I don't understand how most things work. Like, do I understand how my car works? Do I understand how my iPad works? So when you get more and more systems that no one really understands, things can kind of go off the rails in ways that are really hard to predict due to the interaction of the ecosystem." Jared Kaplan
This comprehension gap becomes especially dangerous in environments of distributed machine intelligence, where interactions between systems can produce behaviors that no human anticipated when any individual system was designed.
Rethinking Governance Frameworks
Kaplan's perspective suggests that current AI governance approaches may be focused on the wrong problem: they prioritize control of single powerful systems, while the greater challenge is managing complex interactions among numerous AI agents with varying capabilities.
The AI researcher emphasized that traditional AI safety research must expand beyond single-agent alignment to encompass multi-agent dynamics and systemic risk analysis if we hope to navigate the coming era of pervasive artificial intelligence safely.