
January 15, 2026 · Interview · 53min

Demis Hassabis: Google's AI Engine Room and the Race to AGI

#Google DeepMind#AGI Timeline#AI for Science#AI Bubble#US-China AI Race

The CEO of Google DeepMind believes we are entering the most transformative period in human history, and that his team is the engine making it happen. But what makes this interview distinctive is not the AGI optimism; it’s the rare candor about competitive positioning, the limits of China’s AI ambitions, and why Google’s boring financial strength might be the decisive advantage in a race that could bankrupt its rivals.

The Interview

CNBC’s debut episode of The Tech Download features Demis Hassabis in a sit-down interview with Arjun Kharpal and Steve Kovach in London. The conversation covers DeepMind’s research direction, Google’s internal reorganization around AI, the competitive landscape, China’s catch-up, and Hassabis’s vision for AI-driven scientific discovery. The hosts intercut the interview with their own analysis segments, adding market and geopolitical context.

“10x the Industrial Revolution, 10x Faster”

Hassabis’s framing of AGI’s timeline and impact is striking in its specificity. He places AGI at 5-10 years out and describes the coming transformation as “like the industrial revolution, but maybe 10 times bigger, 10 times faster.”

He distinguishes between what current AI can do and what’s missing. Today’s large language models excel at language and reasoning but lack understanding of the physical world. The next frontier is what he calls “world models,” systems that understand physics, can plan, and can interact with physical reality. This is the bridge to robotics and to AI that can operate autonomously in the real world.

“I think there it’s going to be like the industrial revolution, but maybe 10 times bigger, 10 times faster. So, it’s incredible amount of transformation, but also disruption that’s going to happen.”

On the risks, Hassabis identifies two specific concerns: bad actors repurposing general-purpose AI for harmful ends, and the challenge of keeping increasingly autonomous agent-based systems within intended guardrails. He expresses confidence in DeepMind’s safety work, noting they’ve been thinking about these problems since the lab’s founding in 2010, when “almost no one was working on AI.”

How AI Learns Physical Reality

The most technically substantive part of the interview addresses how AI systems might come to understand the physical world. Hassabis explains DeepMind’s approach through their video generation work (Veo) as a stepping stone. Training on millions of hours of video teaches the system implicit physics: objects fall, liquids flow, light behaves consistently.

But Hassabis is clear this is insufficient. Video generation captures surface-level patterns without deeper understanding. The goal is to develop genuine world models that can simulate and predict physical interactions at a level useful for robotics and scientific simulation. He sees this as the critical missing piece between today’s language-based AI and systems capable of operating in the real world.

He’s particularly excited about making world models efficient enough for practical planning: rather than just generating video, these models need to be fast enough to run inside a robot’s decision loop or a scientific simulation pipeline.
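To make the efficiency point concrete, here is a minimal illustrative sketch (not DeepMind's method; all names and numbers are hypothetical) of why a world model's cost dominates a robot's decision loop. It uses random-shooting model-predictive control in a toy 1-D task: the planner queries the model `horizon × candidates` times for every single real-world action, so a model that is accurate but slow cannot sit inside this loop.

```python
import random

def world_model(state, action):
    """Toy learned dynamics: predict the next 1-D state.
    Stand-in for an expensive learned predictor."""
    return state + action

def plan(state, goal, horizon=5, candidates=64):
    """Random-shooting planner: roll candidate action sequences
    through the model and return the first action of the cheapest
    rollout. Note the model-call count: horizon * candidates
    evaluations per single real-world decision."""
    best_cost, best_first = float("inf"), 0.0
    for _ in range(candidates):
        seq = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
        s, cost = state, 0.0
        for a in seq:
            s = world_model(s, a)
            cost += abs(goal - s)   # penalize deviation along the rollout
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

# Closed-loop control: re-plan at every step (the "decision loop").
random.seed(0)
state, goal = 0.0, 3.0
for _ in range(20):
    state = world_model(state, plan(state, goal))
print(round(abs(goal - state), 2))
```

With the toy numbers above, each real action costs 320 model evaluations; scale the model up to a video-sized network and the loop becomes infeasible unless the model is made dramatically cheaper, which is the engineering gap Hassabis is pointing at.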

Google’s Balance Sheet as Superpower

Perhaps the most strategically revealing moment comes when Hassabis addresses the AI bubble question. His answer is disarmingly blunt: Google can afford to be wrong.

The hosts extend this analysis pointedly. If the AI investment cycle contracts, OpenAI and Anthropic face existential risk because they depend on continuous fundraising. Google, Microsoft, and Meta have high-margin existing businesses that can absorb AI losses indefinitely. Meta has already demonstrated this pivot capability with the metaverse.

This isn’t just posturing. Hassabis frames Google’s financial position as enabling longer-term bets that pure-play AI companies cannot afford. Google can fund moonshot research like AlphaFold and Gemini robotics without needing immediate revenue justification, while startups must constantly demonstrate revenue growth to secure their next round.

“I don’t really worry about bubbles. My point of view is I’ve got to make sure that whichever way it goes, we’re in the right position to win either way.”

China: Months Behind, Not Years

Hassabis’s assessment of China’s AI capabilities is more nuanced than the typical Western narrative. He acknowledges Chinese AI models are “just months behind” frontier US models, a significant revision from the “years behind” consensus that prevailed until recently.

But he draws a crucial distinction: China has demonstrated the ability to catch up quickly and execute world-class engineering, but hasn’t yet shown the capacity for fundamental innovation beyond the frontier. DeepSeek and Alibaba’s Qwen models prove China can replicate and optimize existing approaches. The open question is whether Chinese labs can invent something genuinely new, like the Transformer architecture.

“To invent something is about 100 times harder than it is to copy it.”

Hassabis attributes this partly to a “mentality issue” rather than a hardware limitation. He positions DeepMind as a “modern-day Bell Labs” that cultivates exploratory innovation, suggesting this culture of fundamental research is harder to build than engineering capacity. Notably, he doesn’t see chip export controls as the primary bottleneck for Chinese AI, given DeepSeek’s demonstrated ability to achieve near-frontier results without cutting-edge Nvidia hardware.

DeepMind as Google’s Engine Room

The interview reveals how deeply integrated DeepMind has become within Google since the 2023 reorganization that merged Google Brain, Google Research, and DeepMind under Hassabis’s leadership.

He describes speaking with Sundar Pichai daily about strategic direction and roadmaps. New Gemini models now ship simultaneously across Google’s product surfaces (Search, Android, Chrome, Gmail). The process has become “really smooth” with the Gemini 2.5 generation, a stark contrast to Google’s previous reputation for having multiple AI teams working at cross-purposes.

On hardware, Hassabis explains the TPU vs. GPU division of labor: TPUs are purpose-built for scaling known model architectures to maximum efficiency, while GPUs provide flexibility for exploratory research like AlphaFold. Google’s advantage is having both, whereas competitors must rely entirely on Nvidia.

The Samsung partnership is also noteworthy. Samsung has made Gemini its default AI instead of building its own, and Apple is expected to use Gemini to power the next version of Siri. Combined with Android’s 70% global market share, Google has a distribution advantage in getting AI to edge devices that no pure-play AI company can match.

The Golden Age of Scientific Discovery

Hassabis’s deepest enthusiasm is reserved for AI’s potential in science. AlphaFold, which solved the 50-year-old protein folding problem, serves as his proof of concept. Over 3 million researchers worldwide now use it.

His vision is explicit: a dozen AlphaFold-level breakthroughs across materials science, physics, mathematics, and weather prediction within the next decade. DeepMind currently has “half a dozen” such projects in progress. Through Isomorphic Labs (his drug discovery spinout), he’s pursuing the direct application of these capabilities to pharmaceutical development.

For 2026 specifically, Hassabis highlights three priorities: agentic systems becoming reliable enough for practical use, significant progress in Gemini-powered robotics within 12-18 months, and AI on edge devices (smart glasses, phones) becoming genuinely useful.

The DeepMind Acquisition in Retrospect

One revealing personal moment: Hassabis recalls telling Alan Eustace (then Google’s head of search) that DeepMind would become “the most important acquisition Google ever made,” a bold claim given YouTube and AdWords. He notes the acquisition is now worth perhaps “100x, 1,000x” its original price.

His relationship with Jensen Huang gets a mention too. They discuss AI for science most often, and Hassabis notes the irony that he first used Nvidia GPUs in the 1990s for game development, writing graphics and physics engines. That gaming hardware pipeline is now powering AI research.

Closing Notes

  • The “balance sheet as superpower” framing is the most strategically significant insight here. It reframes the AI competition not as a technology race but as a financial endurance contest, one where Google’s incumbency is an advantage, not a liability.

  • Hassabis’s China assessment is refreshingly specific. “Months behind, not years” combined with “invention is 100x harder than copying” gives a precise mental model: China is a formidable fast-follower but hasn’t yet proven it can set the frontier.

  • The Bell Labs comparison reveals Hassabis’s self-image for DeepMind: a place where fundamental research flourishes under corporate shelter. Whether Google’s product pressures will allow that culture to persist is the unspoken tension.

  • His focus on world models as the next frontier (beyond language) aligns with a growing consensus, but his emphasis on making them efficient enough for real-time use is more specific than most. The gap between “understands physics” and “can plan fast enough for a robot” is where the real engineering challenge lies.

  • The daily conversations with Pichai signal a Google that has finally committed to a unified AI strategy. Whether this centralization helps or eventually stifles the exploratory culture Hassabis prizes remains to be seen.
