
January 23, 2026 · Podcast · 34min

Demis Hassabis: One or Two Breakthroughs Away from AGI

#AGI Definition · #Continual Learning · #World Models · #Google · #AI Bubble · #Personal AI Assistant

Demis Hassabis does not think we have whooshed past AGI. He thinks we need one or two more genuine breakthroughs to get there. And he has a surprisingly specific list of what those breakthroughs are.

The Conversation

This is a Big Technology Podcast episode recorded at Davos, where Alex Kantrowitz sits down with the Google DeepMind CEO for a wide-ranging 34-minute conversation. What makes it interesting is the tension between Hassabis’s confidence that AI progress is on track and his insistence that the current paradigm is genuinely incomplete. He is simultaneously the most bullish person in the room about AI’s transformative potential and one of the most rigorous about what we haven’t achieved yet.

The Missing Pieces for AGI

Hassabis is direct about where current systems fall short. He identifies three specific gaps:

Continual learning. Today’s models have what the interviewer calls “goldfish brain”: they can search the internet and reason about what they find, but the moment you close the session, none of it changes the model. Hassabis considers this the defining limitation.

“Learning is synonymous with intelligence and always has been.”

When he says “general” in AGI, he means general learning, the ability to acquire new knowledge across any domain and retain it.

DeepMind has done this in narrow domains: the original AlphaGo built on existing human knowledge, while AlphaGo Zero and AlphaZero learned entirely from scratch through self-play. The open question is whether those techniques can scale to the messy, open-ended real world. Hassabis is working on blending continual learning with large foundation models. Google recently released “personal intelligence” features as “first baby steps,” but he acknowledges the real goal, having the model itself change over time based on interaction, “has not been cracked yet.”
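
To make the “goldfish brain” gap concrete, here is a minimal PyTorch sketch, my illustration rather than anything Hassabis described: the model, optimizer, and feedback signal are all toy stand-ins for a foundation model and its interaction data.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 16)          # toy stand-in for a foundation model
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def frozen_session(x):
    """Status quo: the session reasons over its inputs, but nothing persists."""
    with torch.no_grad():
        return model(x)            # weights are identical before and after

def continual_session(x, feedback):
    """The uncracked goal: each interaction changes the model itself."""
    y = model(x)
    loss = loss_fn(y, feedback)    # learn from a signal the interaction produced
    opt.zero_grad()
    loss.backward()
    opt.step()                     # the weights are now permanently different
    return y.detach()

x, fb = torch.randn(1, 16), torch.randn(1, 16)
frozen_session(x)                  # run this forever: the model never changes
continual_session(x, fb)          # the next session starts from a changed model
```

The hard part, which this toy ignores entirely, is doing such updates at foundation-model scale without catastrophic forgetting, which is why Hassabis says it has not been cracked.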

World models. This is where the Nano Banana answer makes sense. Asked which current system is closest to AGI, Hassabis surprisingly named Google’s image generator, then pivoted to the real point: video generation models like Veo are proto-world models. A model that can generate 10-20 seconds of realistic physical scenes has implicitly learned something about how liquids flow, how objects interact, how causality works.

Why does this matter for AGI? Because world models enable long-term planning. Humans can plan across years (spend four years getting a degree for a better job in a decade). Current AI systems can only plan over one time scale. World models would allow robots to imagine many trajectories from their current situation, and eventually allow AI assistants to plan meaningfully in the real world.
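
Why planning falls out of a world model is easiest to see in code. Below is a hedged sketch of generic random-shooting planning over a learned dynamics model, not a description of Veo or any DeepMind system; the world_model function, the reward, and the linear dynamics are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def world_model(state, action):
    # Hypothetical learned predictor of "what happens next"; a toy linear
    # dynamics function stands in for it here.
    return 0.95 * state + action

def reward(state):
    return -abs(state - 10.0)      # toy goal: steer the state toward 10

def plan(state, horizon=20, candidates=256):
    """Imagine many trajectories with the world model; act on the best one."""
    best_action, best_return = 0.0, -np.inf
    for _ in range(candidates):
        actions = rng.uniform(-1, 1, size=horizon)
        s, total = state, 0.0
        for a in actions:          # roll the future forward in imagination
            s = world_model(s, a)
            total += reward(s)
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action

print(plan(0.0))                   # the first action of the best imagined future
```

The quality of the plan is bounded by the quality of world_model: a model that has genuinely internalized physics can imagine further and more reliably, which is exactly what longer-horizon planning requires.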

Better memory and reasoning. Not just longer context windows, but more efficient ones. “Don’t store everything, just store the important things. That’s what the brain does.”
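
As a toy illustration of that principle, my framing rather than Hassabis’s, here is a bounded memory that keeps only the highest-salience items instead of the whole transcript; the SalienceMemory class, the scores, and the items are all hypothetical, and a real system would have to learn the scoring rather than hand-assign it.

```python
import heapq

class SalienceMemory:
    """Keep at most `capacity` memories, evicting the least important first."""
    def __init__(self, capacity=3):
        self.capacity = capacity
        self._heap = []                   # min-heap of (salience, item) pairs

    def add(self, item, salience):
        heapq.heappush(self._heap, (salience, item))
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)     # forget the least salient memory

    def recall(self):
        return [item for _, item in sorted(self._heap, reverse=True)]

mem = SalienceMemory()
for item, score in [("small talk", 0.1), ("user's name", 0.9),
                    ("lunch order", 0.3), ("user's allergy", 0.95)]:
    mem.add(item, score)
print(mem.recall())                       # most salient first; "small talk" is gone
```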

On whether scaling alone will solve these problems, Hassabis leans toward needing genuine new innovations: “If you were to push me, I would be in the latter camp.” But he’s clear that large foundation models will be a key component of the final AGI system regardless. The only debate in his mind is whether they are the component or a component. He disagrees with Yann LeCun’s view that LLMs are a dead end.

What AGI Actually Means

Hassabis pushes back hard on Sam Altman’s framing that “AGI is underdefined” and that we should just agree we’ve “whooshed by” it on the way to superintelligence.

“I’m sure he does wish that, but it’s absolutely not. I don’t think AGI should be sort of turned into a marketing term or for commercial gain.”

His definition is precise and demanding: a system that can exhibit all the cognitive capabilities humans can. Not just solving a math conjecture, but coming up with a breakthrough conjecture. Not just solving a physics problem, but inventing a new theory of physics, as Einstein did with general relativity. Not just creating a pastiche of existing art, but being Picasso or Mozart, inventing an entirely new genre.

“Today’s systems in my opinion are nowhere near that. Doesn’t matter how many Erdős problems you solve.”

He extends this to physical intelligence too: elite sports, body control, robotics. An AGI system would need all of these capabilities. His timeline: 5 to 10 years.

On superintelligence, he draws a clear line. Individual humans can come up with new theories. That’s not superhuman. Superintelligence would mean things genuinely beyond human capability, like thinking in 14 dimensions or plugging weather satellites directly into a brain.

Google’s AI Glasses Bet

Hassabis reveals he is personally working on smart glasses, which he considers one of the most exciting projects at Google. The reasoning:

The “Thinking Game” documentary showed DeepMind staff holding up phones to ask AI about the real world. It works, but the form factor is wrong. Cooking, navigating a city, helping the visually impaired, these all need hands-free interaction.

Google Glass failed before because of two things: the hardware was too chunky (now “more or less solved”) and there was no killer app. Hassabis believes the killer app is a universal digital assistant, and Gemini 3 is “maybe powerful enough to make that a reality.”

Partnerships with Warby Parker, Gentle Monster, and Samsung are in place. Prototypes exist. Hassabis says “you should start seeing that maybe by the summer.”

Ads in Gemini: A Trust Problem

On the question of whether Gemini will include advertising, Hassabis is direct: “We have no current plans.”

But his reasoning is more interesting than the answer. He frames it as a trust problem. If an assistant is supposed to work on your behalf, with your best interests at heart, advertising creates an inherent conflict. He notes the irony of competitors claiming AGI is imminent while simultaneously building ad-supported chatbots:

“Why would you bother with ads then? So that is I think a reasonable question to ask.”

He acknowledges Google is brainstorming alternative revenue models, particularly for glasses and devices, but says no strong conclusions have been reached.

Competition and the Vibe Coding Wave

On Anthropic and Claude Code specifically, Hassabis is gracious: “Kudos to Anthropic. They built a very good model there.” He positions it as a focus trade-off: Anthropic focuses exclusively on coding and language models, while DeepMind builds image models, multimodal models, world models. “They just do coding and language models, and they’re very, very good at that.”

He’s personally enthusiastic about vibe coding, having used Gemini 3 over Christmas to prototype games. He sees it opening up productivity to designers, creatives, and artists who previously needed access to programming teams.

Google’s own IDE, Antigravity, “can’t actually serve all the demand” it’s seeing.

The AI Bubble: Not Binary

Hassabis gives the most nuanced answer on the bubble question: “It’s not binary. Parts of the AI industry probably are [in a bubble], and other parts, it remains to be seen.”

Frothy: seed rounds at valuations in the tens of billions for companies with no product or research, “just some people coming together.” Not frothy: established companies with massive existing businesses where AI clearly improves productivity. Uncertain: monetization of AI-native products like chatbots and glasses.

There’s also what he calls a massive “capability overhang”: even the builders don’t fully know what current models can do. Product opportunities are far from exhausted. AI inbox, agents in browsers, search powered by AI, these are just the beginning.

His strategic framing: “My job is to make sure that whatever happens with an AI bubble, if it bursts or if there isn’t one, we win either way.”

The Chess Analogy for Knowledge Work

On whether AI will demoralize knowledge workers the way it demoralized Go and chess players, Hassabis argues history says otherwise:

Chess computers have been superhuman since the 1990s. Chess is more popular than ever. Nobody watches computers play computers; they watch Magnus Carlsen.

The most striking example is from Go. The best Go player alive today, a South Korean in his mid-20s, was about 15 when the AlphaGo match happened. He learned the game natively, with AlphaGo’s knowledge already in the pool, and is by far the strongest human player ever by Elo rating. He “may actually be stronger than AlphaGo was back then.”

We still watch the 100 meters at the Olympics even though vehicles are far faster. “We have infinite capacity to adapt.”

But he acknowledges a deeper challenge: purpose and meaning. “We all get a lot of our purpose and meaning from the jobs we do.” When that’s automated, we’ll need “new great philosophers” to help navigate what he calls “a change to the human condition.” He compares it to the Industrial Revolution, “maybe 10x of that.”

Information as the Fundamental Unit

In a compressed two-minute segment, Hassabis shares his theory that information, not energy or matter, is the most fundamental unit of the universe.

Biological systems are information systems resisting entropy, retaining structure against randomness. This extends beyond biology: mountains, planets, asteroids have all been subject to selection pressures (not Darwinian, but external), and their long-term stability means their information is “kind of stable and meaningful.”

He connects this to AlphaFold: out of the nearly infinite possible protein structures, only certain ones are stable. Understanding that “information topology” is how you find the needle in the haystack. This is how he believes AI will eventually help discover new drugs, materials, and room-temperature superconductors.

The AlphaZero Moment

The interview closes with the most provocative question: what happens when LLMs reach mastery of human knowledge, the way AlphaGo mastered human Go knowledge, and then you “let it loose” like AlphaZero?

Hassabis: “That’s what to me would be the AGI moment.” The system would discover room-temperature superconductors that are possible within the laws of physics but we haven’t found, new energy sources, optimal batteries. First it reaches human-level knowledge, then techniques (which “maybe it will have to help invent”) allow it to explore uncharted territory, just as AlphaZero did for Go.

AlphaFold was the proof of concept: viewed as a “root node problem,” it has now been used by 3 million researchers worldwide. Hassabis predicts “almost every single drug discovered from now on will probably have used AlphaFold at some point.”

Some Thoughts

  • Hassabis’s AGI definition is the most demanding in the industry. By requiring all human cognitive capabilities including creativity and physical intelligence, he sets a bar that makes 5-10 years feel ambitious rather than conservative.
  • The continual learning gap is underappreciated. Every other AI leader talks about scaling and reasoning. Hassabis is the one pointing out that a system that forgets everything between sessions is fundamentally not intelligent, no matter how well it reasons.
  • The ads answer reveals a strategic tension at Google. The company’s core business is advertising, but the CEO of its most important AI division is essentially arguing that ads are incompatible with the trust needed for a personal AI assistant.
  • The Go analogy is more hopeful than it first appears. If the best human Go player ever emerged because of AlphaGo, that’s a genuine argument for AI making humans better rather than obsolete.
  • His information-as-fundamental-unit theory connects his scientific worldview directly to DeepMind’s research strategy: find the information topology, and intractable problems become tractable. AlphaFold was the proof of concept; everything else follows.
Watch original →