
February 12, 2026 · Speech · 51min

The Cautious Optimist: Demis Hassabis on AGI's Missing Pieces and the Future of Humanity at Davos

#AGI Timeline#World Models#Scientific AI#Job Displacement#AI Safety

1. Executive Summary

  • Core Thesis: The path to AGI is not simply about how fast coding can be automated; it hinges on when AI can master the highest level of scientific creativity, namely formulating hypotheses and theories. Demis argues that while current models have made remarkable progress in coding and mathematics, there remain fundamental gaps in experimental verification and hypothesis generation in the natural sciences. Whether the self-improvement loop can fully close without a human in the loop is still an open question.
  • Guest Background: Demis Hassabis (CEO/Co-founder of Google DeepMind, 2024 Nobel Prize in Chemistry) and Dario Amodei (CEO/Co-founder of Anthropic) in a panel conversation at the World Economic Forum Annual Meeting 2026 in Davos, moderated by Zanny Minton Beddoes, Editor-in-Chief of The Economist. This was their second joint appearance following the 2025 Paris conversation.
  • Key Takeaways:
    • Demis maintains a 50% probability of AGI by end of this decade, more conservative than Dario’s 1-2 year timeline
    • Coding and math are easier to automate because outputs are verifiable; hypothesis generation and experimental validation in natural sciences remain much harder
    • If the self-improvement loop cannot deliver on its own, world models and continual learning will be critical missing ingredients
    • Job impacts will begin this year on junior roles, but post-AGI is “uncharted territory” where meaning and purpose become harder questions than economic distribution
    • The AI industry is not doing nearly enough to demonstrate unequivocal good like AlphaFold

2. Detailed Breakdown

2.1 AGI Timeline: Coding ≠ Science

  • Demis maintains his prediction from last year: 50% probability of AGI by end of the decade. He emphasizes that progress in coding and mathematics is “a little bit easier to see” because outputs in these fields are verifiable (code runs, proofs are correct)
  • Natural science is fundamentally different: you may not know whether a predicted chemical compound or physics result is correct; it often requires experimental verification, a process that cannot be accelerated by software alone
  • The more critical gap: current models can solve existing problems (known conjectures), but lack the ability to “come up with the question in the first place.” Demis explicitly identifies “coming up with hypotheses and theories” as the highest level of scientific creativity, which current systems have not demonstrated
  • He identifies potential “missing ingredients”: world models, continual learning. If the self-improvement loop alone cannot get models past these bottlenecks, dedicated research breakthroughs will be needed

2.2 The Self-Improvement Loop: Limits and Possibilities

  • In response to Dario’s emphasis on “closing the loop” (AI writes code → AI does AI research → accelerates next-gen models), Demis offers a more cautious assessment: in coding and mathematics, “I can definitely see that working,” but whether the loop can fully close remains unproven
  • He adds a physical constraint perspective: AGI should include physical AI and robotics, and once hardware enters the loop, the speed of self-improvement is inherently limited
  • The core theoretical question: what are the limits of engineering and math for solving natural sciences? Demis considers this “a more theoretical question” with no clear answer yet
  • His overall position: AI systems building AI systems is the most important thing to watch (aligned with Dario), but he simultaneously proposes a Plan B if that path proves insufficient

2.3 Job Impacts and the Deeper Crisis of Human Meaning

  • Demis expects impacts on junior positions and internships beginning this year, but in the near term the normal evolution will hold: some jobs disrupted, new and more valuable ones created
  • His advice to university students is blunt: “get really unbelievably proficient with these tools.” He believes current tools are “almost free” and their capabilities far exceed actual usage, creating a massive “capability overhang” that even the developers building them are too busy to fully explore
  • After AGI arrives, however, it will be “uncharted territory.” He distinguishes two levels of challenge: economic distribution “may be easier to solve” (entering a post-scarcity world with institutional redesign), but meaning and purpose that humans derive from work is “harder to solve”
  • Demis remains optimistic, noting that humans already engage in activities from extreme sports to art that have nothing to do with economic gain, and future versions of these activities could be “even more sophisticated.” He also points to space exploration as a source of future purpose

2.4 The Industry’s Responsibility: Show, Don’t Tell

  • Demis directly criticizes the AI industry for not doing enough to “demonstrate AI unequivocally benefiting the world.” He holds up AlphaFold as the standard, arguing such “unequivocal good” should be the norm, not the exception
  • He mentions Isomorphic Labs (DeepMind’s drug discovery spinout) working on curing diseases and developing new energy sources, framing these as both a moral obligation and the key to preventing public backlash
  • On the risk of public backlash, he acknowledges that “fear and worries are reasonable,” and warns that if the industry fails to show enough positive results, a populist reaction similar to the 1990s globalization backlash could materialize

2.5 Geopolitics and International Cooperation

  • Demis emphasizes the need for international cooperation, particularly minimum safety standards for AI deployment. He believes the technology will affect all of humanity and cannot be managed by one country alone
  • He candidly states he would prefer “a slightly slower pace” to give society time to adapt, but this would require international coordination, which the current geopolitical environment makes extremely difficult
  • When asked about Dario’s position on chip export restrictions, Demis agrees that some form of international collaboration is needed to reduce risk

2.6 AI Safety: Real Risks, Solvable Problems

  • Both speakers reject doomerism, but Demis frames his position distinctively: he is “a big believer in human ingenuity” and confident that given enough time, focus, and the best minds collaborating, the technical safety problem is “very tractable”
  • His warning: if research efforts are fragmented and everyone races against each other, ensuring system safety becomes “much harder”
  • He has been thinking about these risks since DeepMind’s founding 15 years ago, recognizing AI as a “dual purpose technology” where greater capability inherently means greater risk of misuse
  • On the Fermi paradox, Demis offers a compelling rebuttal: if civilizations being destroyed by their own technology were common, we should see evidence of runaway AIs (Dyson spheres, “paperclips coming toward us”), but we see nothing. He speculates the “great filter” was likely the evolution of multicellular life, which humanity has already passed. “What happens next is for us to write as humanity.”

2.7 Google DeepMind’s Competitive Resurgence

  • When asked about the competitive landscape shift over the past year, Demis says he was “always very confident” DeepMind would return to the top, attributing this to having “the deepest and broadest research bench” and the challenge being to marshal it with “startup intensity and focus”
  • He cites Gemini 3 and Gemini App’s growing market share, positioning DeepMind as “the engine room of Google” that is accelerating model deployment across product lines
  • Both he and Dario agree: companies led by researchers focused on solving important scientific problems are the ones that will succeed

3. Notable Quotes

“Really we would be in uncharted territory at that point.”

“I think that’s the highest level of scientific creativity and it’s not clear we will have those systems.”

“I would be telling them to get really unbelievably proficient with these tools.”

“The economic question may be easier to solve strangely than what happens to the human condition.”

“It’s for us to write as humanity what’s going to happen next.”

4. Episode Insights

This conversation reveals a subtle but important distinction between a scientist-CEO and a commercially driven CEO. While Dario focuses on the urgency of “closing the loop,” Demis consistently redirects attention to what the loop is missing: world models, continual learning, hypothesis generation. This is not mere conservatism on timelines; it reflects a more rigorous definition of what “general intelligence” actually means.

His emphasis on the “meaning and purpose” crisis is particularly notable. He ranks it above economic redistribution in difficulty, suggesting that the deeper problem is not whether society can afford post-AGI life but whether humans can psychologically thrive in it. This likely reflects his neuroscience training: someone who has studied the architecture of intelligence may be more attuned to what happens when that architecture is externalized.

Demis’s criticism that the AI industry is not demonstrating enough “unequivocal good” is a rare and pointed stance. As the creator of AlphaFold, he has the credibility to demand more from the industry than capability showcases. This positions DeepMind’s scientific mission not just as a competitive strategy but as a moral imperative.

5. Resource Index

  • People: Zanny Minton Beddoes, Editor-in-Chief of The Economist, panel moderator
  • People: Benjamin Larsen, AI Safety Lead at the World Economic Forum
  • People: Yoshua Bengio, AI pioneer, co-chair of WEF Global Future Council on AGI
  • People: Akiko Murakami, Executive Director of Japan’s AI Safety Institute, co-chair of WEF Global Future Council on AGI
  • Tools/Products: AlphaFold, Google DeepMind’s protein structure prediction algorithm, cited as the benchmark for AI benefiting humanity
  • Tools/Products: Gemini 3, Google DeepMind’s latest model
  • Concepts: World Models, identified by Demis as a potentially critical missing ingredient for AGI
  • Concepts: Continual Learning, another key missing direction alongside world models
  • Concepts: Mechanistic Interpretability, AI safety research approach pioneered by Anthropic
  • Organizations: Isomorphic Labs, DeepMind’s drug discovery spinout company

6. Takeaways

  • Actionable Advice:
    • If you are a student or early-career professional, start deeply using AI tools now. Demis believes current tools have a massive “capability overhang” far exceeding actual usage
    • Track progress in “world models” and “continual learning” research as an alternative signal for AGI timelines, distinct from the coding automation metrics Dario emphasizes
    • If you work in AI, think about how to create AlphaFold-like “unequivocal good.” This is both a moral responsibility and the best strategy to prevent public backlash
  • Open Questions:
    • Can AI learn to “come up with the question” rather than just solving existing problems? This is what Demis defines as the highest level of scientific creativity
    • When AI handles most cognitive work, how will humans find meaning and purpose outside of employment?
    • What specific “missing ingredients” does the self-improvement loop need to fully close? Are world models and continual learning sufficient?
  • Further Reading: The impact of AlphaFold on scientific research; the research frontier of world models; the Fermi paradox and the great filter theory