"We're Moving Much Faster Than Expected": Anthropic Scientist Revises AGI Timeline

"If anything I expect it probably sooner than 2030, probably more like in the next two to three years," says Anthropic's co-founder and chief scientist Jared Kaplan, radically accelerating his timeline for when we can expect human-level artificial intelligence to emerge.
This dramatic revision from one of AI's most influential technical leaders signals a profound acceleration in capability development that even insiders are struggling to fully comprehend, End of Miles reports.
The Rapidly Shrinking Timeline
Kaplan, whose company created the Claude family of AI assistants, had previously suggested human-level AI might arrive by 2030. His new assessment cuts that timeline roughly in half, suggesting the industry is progressing far faster than publicly acknowledged.
When pressed on what exactly constitutes "human-level AI," the former theoretical physicist acknowledged the ambiguity while emphasizing two key metrics he monitors: the range of environments AI systems can act in, and the complexity of the tasks they can complete.
"I think of it on two axes. There's what environments can an AI actually go out and act in... and just how complex are the things that AI can do. Like can it do something that would take me a minute or 10 minutes or an hour or a day? I think we're just going to keep moving in that direction." Jared Kaplan, Anthropic co-founder
Measuring AI Progress in Human Time
The Anthropic scientist elaborated on how he conceptualizes AI capabilities in terms of "human time" – the length of tasks AI systems can now handle that previously required human attention. He noted the rapid progression from early language models that could only manage tasks lasting seconds to current systems tackling projects that might take a skilled human hours.
"Claude 3.7 Sonnet can handle tasks that might take a graduate student half a day. This 'horizon' that Claude can operate on is definitely something I track. People years ago who were AI enthusiasts talked about maybe AI won't be able to do things that take longer and longer, but I think we are seeing this horizon expand." Kaplan
The progress has been driven by three key factors, according to Kaplan: increasing model intelligence, which lets systems track more issues simultaneously; expanding context length, which enables comprehension of longer documents; and targeted reinforcement learning on increasingly complex tasks.
Industry Implications of Accelerated Timelines
Kaplan's revised prediction carries significant weight given his role in developing some of the most capable AI systems currently available. It also follows a pattern of AI researchers consistently underestimating the pace of progress in the field.
This timeline compression comes amid intensifying competition between American AI labs and Chinese rivals such as DeepSeek, which Kaplan acknowledges are "not very far behind" algorithmically despite export controls on advanced computing chips.
"There's so much low-hanging fruit to collect that it's unpredictable who's going to sort of find which advances first. My expectation though is that Western firms will probably have an advantage in terms of the amount of compute available." Kaplan on global AI competition
The accelerated timeline raises urgent questions about governance, safety measures, and economic impact. Kaplan highlights that unlike previous technological revolutions, AI will likely disrupt educated white-collar knowledge workers first – creating economic and social challenges fundamentally different from those of historical automation waves.
The Paradox of Building Superintelligence
Perhaps most telling is Kaplan's frank acknowledgment of the central paradox in his work: building systems that will soon surpass human intelligence while trying to ensure they remain aligned with human values and under human control.
"Is it really safe to have AI that is smarter than you? I think that is a real question. Should we be having these superintelligent AI aliens kind of invading the Earth, or should we decide not to?"
For now, Anthropic is focusing on developing what Kaplan calls "scalable supervision" – an approach in which AI systems help monitor other AI systems, theoretically allowing humans to maintain control even as individual systems surpass human capabilities.
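In rough outline (a sketch only; worker_model, supervisor_model, and human_review below are hypothetical stand-ins, not Anthropic's implementation), the pattern is a pipeline in which one model reviews another's output and escalates to a human only when something is flagged:

```python
# Minimal sketch of a scalable-supervision loop: one AI model reviews another
# model's output, and a human is consulted only on flagged cases. All three
# model functions are hypothetical stand-ins, not a real API.

def worker_model(task: str) -> str:
    """Hypothetical capable model that attempts the task."""
    return f"draft answer for: {task}"


def supervisor_model(task: str, answer: str) -> bool:
    """Hypothetical supervising model; True means the answer looks acceptable."""
    return "forbidden" not in answer  # toy check standing in for a learned judge


def human_review(task: str, answer: str) -> str:
    """Escalation path: route flagged outputs to a human reviewer."""
    return f"[needs human review] {answer}"


def supervised_answer(task: str) -> str:
    answer = worker_model(task)
    if supervisor_model(task, answer):
        return answer  # supervision passes; no human attention required
    return human_review(task, answer)  # only flagged cases reach a person


if __name__ == "__main__":
    print(supervised_answer("summarize the quarterly report"))
```

The appeal of the design is that human oversight is spent only where the supervising model raises a flag, so oversight capacity can scale with the number of deployed systems. Whether such approaches will prove effective remains one of the most consequential open questions of our time.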