February 12, 2026 · Speech · 51min
Technological Adolescence: Dario Amodei & Demis Hassabis on the Day After AGI at Davos
1. Executive Summary
- Core Thesis: AGI is no longer a distant concept but a reality likely arriving within 1-2 years. The central question is not “will it arrive” but “how does humanity survive its technological adolescence without self-destruction.” Dario frames this through the movie Contact: if you could ask aliens one question, it would be “How did you get through this technological adolescence without destroying yourselves?”
- Speaker Background: Dario Amodei (CEO/Co-founder of Anthropic) and Demis Hassabis (CEO/Co-founder of Google DeepMind) in a panel discussion at the 2026 World Economic Forum in Davos, moderated by Zanny Minton Beddoes, Editor-in-Chief of The Economist. This was their second joint appearance following their 2025 Paris conversation, which the moderator likened to “interviewing the Beatles and the Rolling Stones.”
- Key Conclusions:
- Dario predicts AI may complete end-to-end software engineering tasks within 6-12 months, and surpass humans across all cognitive domains within 1-2 years
- Demis offers a more conservative timeline: 50% chance of AGI by end of the decade, citing hypothesis generation and experimental validation as key gaps
- Both agree job displacement will become visible this year, starting with entry-level white-collar positions
- Dario compares selling AI chips to China to “selling nuclear weapons to North Korea,” calling it the most critical geopolitical lever
- Both reject doomerism but stress the risks are real; the question is whether we have enough time and cooperation to address them
2. Detailed Breakdown
2.1 AGI Timeline: Divergence Within Consensus
- Dario maintains his prediction from Paris last year: a model capable of performing at Nobel laureate level across multiple fields will arrive by 2026-2027. His core logic is the “self-improvement loop” (closing the loop): AI writes code → AI conducts AI research → accelerates next-generation model development. Engineers within Anthropic already “don’t write any code anymore,” relying entirely on models
- He estimates models may be just 6-12 months from completing all of a software engineer’s work end-to-end. The remaining question is how fast the loop accelerates. Physical constraints like chip manufacturing and model training time will slow it somewhat, but “it’s very hard for me to see how it could take longer than a few years”
- Demis maintains his more conservative prediction: 50% chance of AGI by end of the decade. He notes coding and mathematics are easier to automate because outputs are verifiable, but natural science often requires experimental validation, which takes longer. More critically, current models lack the ability to “come up with the question in the first place” — formulating hypotheses and theories represents “the highest level of scientific creativity”
- Demis identifies potential “missing ingredients” for AGI, such as world models and continual learning; if the self-improvement loop cannot deliver AGI on its own, these capabilities will be essential
2.2 Job Displacement: Exponential Growth Will Overwhelm Adaptability
- Dario reaffirms his prediction that 50% of entry-level white-collar jobs could disappear within 1-5 years. He acknowledges that the labor market showed no visible impact when he first made the claim, but now “we’re starting to see just the little beginnings of it in software and coding”
- He candidly states that within Anthropic itself, he can foresee a time when “we actually need less and not more people” at both junior and intermediate levels, and the company is thinking about how to handle this “in a sensible way”
- While acknowledging the historical pattern of technology creating new jobs (farming → factory → knowledge work), Dario worries that “as this exponential keeps compounding, it will overwhelm our ability to adapt,” with the window being 1-5 years
- Demis agrees the near-term will follow normal technological evolution (some jobs disrupted, new and possibly more meaningful jobs created), with impacts beginning this year at the junior/internship level. He strongly advises undergraduates to become “really unbelievably proficient with these tools,” suggesting this may be more valuable than traditional internships
- However, Demis warns that once AGI arrives, “we would be in uncharted territory.” Beyond economics, there are deeper questions about the meaning and purpose humans derive from work. Somewhat paradoxically, he believes the economic distribution problem “may be easier to solve” than the existential question of human purpose
2.3 Geopolitics: Chips Are the Critical Lever
- When asked why they can’t simply slow down, Dario provides the core logic: it’s not that he doesn’t want to (“I prefer Demis’ timelines. I wish we had 5 to 10 years”), but geopolitical adversaries are building the same technology at a similar pace, and enforceable slowdown agreements are nearly impossible
- His policy recommendation remains unchanged: stop selling chips to China, calling it “one of the biggest things we can do.” He directly criticizes the U.S. administration’s logic of selling chips to bind countries into American supply chains, comparing it to “selling nuclear weapons to North Korea because that produces some profit for Boeing”
- Dario draws a crucial distinction: if chip supply is controlled, the competition shifts from “US vs. China” to “me and Demis,” which he’s “very confident we can work out”
- Demis emphasizes the need for international cooperation, at minimum on deployment safety standards. He argues the industry is not doing enough to demonstrate “unequivocal good in the world” (citing AlphaFold as a model), noting this is both a moral obligation and essential to preventing public backlash
2.4 AI System Risks and Technological Adolescence
- Dario uses the Contact movie framework to define the current moment: humanity is knocking on the door of incredible capabilities that were “inevitable from the instant we started working with fire,” but how we handle them is not inevitable
- He categorizes risks into four areas: (1) controlling highly autonomous systems smarter than any human; (2) individual misuse, especially bioterrorism; (3) nation-state misuse, particularly authoritarian governments; (4) unforeseen risks, which “may be the hardest thing to deal with at all”
- On reports of model deception, Dario notes Anthropic has focused on this risk since its founding, pioneering mechanistic interpretability research: attempting to understand model internals the way neuroscientists study the brain (a minimal code illustration of what inspecting internals can mean follows this list)
- Both explicitly reject doomerism but distinguish it from risk awareness: believing “we’re doomed” is wrong, but recognizing that “if we’re all racing with no guardrails, there is risk of something going wrong” is legitimate
- Demis responds to existential risk through the Fermi paradox: if technological civilizations routinely destroyed themselves, we should see traces of runaway AI in the universe (like “paper clips coming towards us”), yet we see nothing. He speculates the Great Filter was more likely the evolution of multicellular life, a threshold humanity has already passed
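To make the mechanistic-interpretability reference above concrete, the sketch below shows one generic form of “inspecting model internals”: registering a forward hook on a hidden layer of a toy PyTorch network and reading out which units fire on a given input. The model, layer choice, and input are placeholders invented for illustration; this is a standard inspection pattern, not Anthropic’s methods or tooling.

```python
# Minimal sketch: capture a hidden layer's activations with a forward hook.
# The toy network and input are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),  # input -> hidden
    nn.ReLU(),          # hidden nonlinearity (the layer we probe)
    nn.Linear(32, 4),   # hidden -> output
)

captured = {}

def save_activation(module, inputs, output):
    # Store the layer's output so it can be inspected after the forward pass.
    captured["hidden"] = output.detach()

# Attach the hook to the ReLU so we see post-activation values.
model[1].register_forward_hook(save_activation)

x = torch.randn(1, 16)  # arbitrary input
model(x)

# Which hidden units fired on this input? Circuit-style analyses start from
# exactly this kind of readout, then look for structure across many inputs.
print("active units:", (captured["hidden"] > 0).sum().item(), "of 32")
```

Real interpretability work operates on billion-parameter transformers and asks far harder questions (what features a unit encodes, how circuits compose), but the raw material is the same: direct access to intermediate activations.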
2.5 Anthropic’s Commercial Viability
- Dario discloses Anthropic’s revenue trajectory: $0 → $100M (2023) → $1B (2024) → $10B (2025), which he characterizes as 10x growth each year for three consecutive years (a quick arithmetic check follows this list)
- He concedes uncertainty about whether this curve continues (“it would be crazy if it did”), but emphasizes revenue is already “not too far from the scale of the largest companies in the world”
- On the survival of independent model makers, his logic is straightforward: there’s an exponential relationship between model cognitive capability and revenue generation; as long as they produce the best models, the business follows
- Both emphasize that companies “led by researchers who focus on solving important problems” will be the winners, implicitly suggesting companies not driven by research will be at a disadvantage
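As a quick arithmetic check on the compounding (simple math on the figures quoted above; the symbols $R$ and $g$ are notation introduced here, not from the talk): growth at a constant factor $g$ per year gives $R_n = R_0 \, g^n$, so two consecutive 10x years from the 2023 base compound to

$$R_{2025} = R_{2023} \times 10^{2} = \$100\text{M} \times 100 = \$10\text{B},$$

a 100x gain across 2023-2025, which is the figure cited in the Episode Insights below.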
3. Notable Quotes
“I don’t write any code anymore. I just let the model write the code.”
“Are we going to get through this technological adolescence without destroying ourselves?”
“It’s very hard for me to see how it could take longer than a few years.”
“Not selling chips is one of the biggest things we can do.”
“If we build them poorly, if we’re all racing and we go so fast that there’s no guardrails, then I think there is risk of something going wrong.”
4. Episode Insights
The most valuable aspect of this conversation is the timeline divergence between two of AI’s most central leaders. Dario’s 1-2 years vs. Demis’s 5-10 years stems from different assessments of “closing the loop”: Dario sees coding automation as nearly complete and the loop about to close; Demis sees experimental validation and hypothesis generation in natural science as bottlenecks the loop cannot bypass. Notably, Dario volunteered that “I prefer Demis’ timelines,” suggesting his own prediction unsettles him.
Compared to their 2025 Paris appearance, the two have converged noticeably on risk assessment. Dario has adopted the more accessible “technological adolescence” framing in place of academic safety language, while Demis now speaks more directly about job displacement timelines. Their alignment on China chip export policy is especially striking coming from direct competitors.
Dario’s disclosure of Anthropic’s revenue figures (100x growth from 2023 to 2025) is new information, suggesting that commercialization at independent model companies is progressing far faster than outside observers expected.
5. Resource Index
- People: Zanny Minton Beddoes — Editor-in-Chief of The Economist, panel moderator
- People: Benjamin Larsen — AI Safety Lead at the World Economic Forum, Centre for AI Excellence
- People: Yoshua Bengio — AI pioneer, Co-Chair of WEF Global Future Council on AGI
- Papers/Concepts: Mechanistic Interpretability — AI safety research direction pioneered by Anthropic, aiming to understand model internals
- Papers/Concepts: Machines of Loving Grace — Dario Amodei’s 2025 optimistic essay on AI’s potential; a risk-focused sequel is forthcoming
- Tools/Products: AlphaFold — Google DeepMind’s protein structure prediction algorithm, cited as a benchmark for AI benefiting humanity
- Companies/Orgs: Isomorphic Labs — DeepMind’s drug discovery spinout company
- Companies/Orgs: WEF Global Future Council on AGI — Co-chaired by Yoshua Bengio and Akiko Murakami (Executive Director, Japan’s AI Safety Institute)
6. Takeaways
- Actionable Advice:
- If you’re a student or early-career professional, invest deeply in mastering AI tools now. Demis explicitly stated: become “really unbelievably proficient with these tools” — this may outperform traditional skill development
- Track the progress of AI self-improvement loops. Dario considers AI systems building AI systems “the biggest thing to watch” for gauging AGI arrival
- For business leaders, start planning for AI-driven workforce changes now; Dario says Anthropic itself is already thinking through how to handle needing fewer people “in a sensible way”
- Open Questions:
- When AI surpasses all human cognitive abilities, how do humans find new sources of meaning and purpose? Demis believes this is harder to solve than the economic problem
- Can the self-improvement loop truly close without any human in the loop? Demis remains skeptical
- What specific risk mitigation strategies will Dario’s forthcoming risk essay propose?
- Further Reading: Dario Amodei’s Machines of Loving Grace and its forthcoming risk-focused sequel; the Fermi paradox and Great Filter theory