February 12, 2026 · Speech · 51min
Dario Amodei and Demis Hassabis on the Day After AGI
Two people who are actually building AGI sat down at Davos 2026 to discuss what happens after it arrives. The striking thing isn’t their disagreement on timelines — it’s how much they agree on the shape of the problem.
Overview
At the World Economic Forum’s Annual Meeting 2026, Anthropic CEO Dario Amodei and Google DeepMind CEO Demis Hassabis joined The Economist’s editor-in-chief Zanny Minton Beddoes for a conversation she compared to “chairing a conversation between the Beatles and the Rolling Stones.” The tone was collegial but urgent: both leaders treated AGI not as a hypothetical but as an engineering problem with a visible finish line, debating not whether but when, and whether humanity is ready for what follows.
The Timeline Split
Dario holds firm on his aggressive prediction from the previous year: AI models that can match or exceed human performance across virtually all cognitive tasks within one to two years. His reasoning is mechanical. Engineers at Anthropic already don’t write code themselves — they let the model write it and edit the output. He estimates six to twelve months before models handle the full software engineering loop end-to-end.
Demis offers the more cautious view: a 50% chance of systems exhibiting all human cognitive capabilities by the end of the decade. His skepticism is domain-specific. Coding and mathematics are “easier” to automate because outputs are verifiable. Natural science is harder — you can’t just check if a chemical compound works by running it through a model. You have to test it experimentally. And the highest form of scientific creativity — formulating the right question, generating novel hypotheses — may require capabilities that current architectures still lack.
“I don’t think it’s impossible but I think there may be one or two missing ingredients.”
What’s notable is where they converge: the self-improvement loop is the critical variable. If AI systems can effectively build better AI systems without human bottlenecks, the timeline compresses dramatically. Both see this working for code and math already. The open question is whether it generalizes.
Closing the Loop
The concept of “closing the loop” — AI systems that can observe results, adapt behavior, and iterate autonomously — emerged as the session’s central technical idea. Benjamin Larsen from WEF framed it in his introduction: this is the step toward genuine autonomy, and it demands entirely new governance approaches.
Dario’s view is that the loop is already partially closed in software engineering and AI research, and the remaining bottleneck is physical: chip manufacturing, training time, hardware constraints. Demis adds that NP-hard domains, physical AI, and robotics introduce messiness that pure software loops can’t easily absorb.
“The biggest thing to watch is this issue of AI systems building AI systems, how that goes. That will determine whether it’s a few more years until we get there, or if we have wonders and a great emergency in front of us that we have to face.”
The Jobs Question
Zanny pushed both on labor market impact, noting that despite all the AI hype, there’s been no discernible effect on employment data — any uptick in unemployment is post-pandemic correction, not AI-driven.
Dario maintained his earlier claim that half of entry-level white-collar jobs could disappear within one to five years. He sees the beginnings already: within Anthropic itself, he can project forward to a time when the company needs fewer people at junior and intermediate levels. The labor market is adaptable, he acknowledges — farming gave way to factory work, then knowledge work — but his worry is that the exponential pace will “overwhelm our ability to adapt.”
Demis sees the near-term playing out more conventionally: some jobs disrupted, new ones created, especially as creative AI tools become widely available. He’d tell a class of undergraduates to get “unbelievably proficient” with these tools rather than worry about displacement. But he draws a sharp line: after AGI arrives, “we would be in uncharted territory.”
Both raised a concern that surprised the audience: the question of meaning. Even if the economics of post-AGI transition can be managed through redistribution, the deeper challenge is what happens to human purpose when machines surpass us at everything we derive identity from. Demis was cautiously optimistic — people already pursue extreme sports and art for reasons beyond economic gain — and added, characteristically, “plus I think we’ll be exploring the stars.”
Geopolitical Realities
The conversation turned sharply political when Zanny pointed out that the current US administration is simultaneously racing to build AGI and selling chips to China — the opposite of what Dario has consistently advocated.
Dario was blunt: not selling advanced chips to adversaries is the single most important policy lever. He rejected the administration’s logic that selling chips binds China into US supply chains, using an analogy that landed hard in the room:
“Are we going to sell nuclear weapons to North Korea because that produces some profit for Boeing? That analogy should just make clear how I see this trade-off.”
Demis framed it differently — emphasizing international cooperation over confrontation. He called for minimum safety standards for deployment, noting this technology “is going to affect all of humanity.” But he acknowledged the tension: without enforceable agreements, neither side can afford to slow down unilaterally.
Dario made this explicit:
“Why can’t we slow down to Demis’ timeline? The reason we can’t is because we have geopolitical adversaries building the same technology at a similar pace. If we can just not sell the chips, then this isn’t a question of competition between the US and China. This is a question of competition between me and Demis, which I’m very confident that we can work out.”
AI Safety Without Doomerism
Both positioned themselves as concerned but not doomers. Dario described Anthropic’s approach through mechanistic interpretability — literally looking inside the model’s “brain” to understand why it does what it does, drawing a parallel to their shared backgrounds in neuroscience. He acknowledged that models have shown a capacity for deception and that bad behaviors emerge at scale, but framed these as tractable engineering problems.
Demis emphasized the dual-use nature of the technology: if you get the upsides, the same capabilities can be repurposed by bad actors. He’s confident the technical safety problem is solvable — “if we had the time and the focus and all the best minds collaborating.” The risk is fragmentation: too many projects racing each other, making coordinated safety work impossible.
The industry’s obligation, Demis argued, is to demonstrate unequivocal good — not just talk about it, but produce more AlphaFold-scale breakthroughs that are clearly beneficial. The current balance of industry effort tilts too far toward commercial applications and not enough toward these kinds of achievements.
The Fermi Paradox Footnote
An audience question about the Fermi paradox — if advanced civilizations inevitably destroy themselves, why haven’t we seen evidence of runaway AI elsewhere in the galaxy? — drew a characteristically Demis response. If civilizations were destroyed by their own AI, we should see paperclips or Dyson spheres heading our way. We don’t see any structures at all, biological or artificial.
His best guess: the great filter was multicellular life, an incredibly hard evolutionary step that most planets never clear. We’re past it. What happens next isn’t determined by cosmic precedent.
“There isn’t a comfort of like what’s going to happen next. I think it’s for us to write as humanity what’s going to happen next.”
Some Thoughts
This conversation matters less for the positions stated (both have said most of this before) than for what it reveals about the state of the field in early 2026:
- The timeline disagreement is narrower than it appears. Dario says 1-2 years to human-level AI across all tasks; Demis says 5-10 years with 50% confidence. But both agree the self-improvement loop is the decisive variable, and both acknowledge it’s already working in code. The gap is really about how fast it generalizes beyond verifiable domains.
- The economics are personal now. Dario isn’t speculating about job displacement in the abstract — he’s describing headcount decisions at his own company. When the CEO of a leading AI lab says he can see needing fewer junior and intermediate employees, the abstraction is over.
- The chip argument is the most actionable policy position either has taken. Dario’s nuclear weapons analogy was deliberately provocative and aimed directly at the current administration’s reversal on export controls. It’s rare for a tech CEO to challenge sitting government policy this bluntly at Davos.
- Both are betting on the “responsible race” frame. Neither wants to slow down unilaterally, but both want guardrails. The implicit argument is that safety and speed aren’t opposites — the companies led by researchers who understand the technology deeply are the ones most likely to get both right.