
January 20, 2026 · Interview · 26min

Hassabis on an AI Shift Bigger Than the Industrial Age

#AGI Timeline#Google DeepMind#Physical Intelligence#China AI#AI Governance

The Cautious Optimist at Davos

Demis Hassabis’s defining trait in this conversation is his willingness to disagree with almost everyone. He pushes back on Yann LeCun (“Transformers aren’t dead ends, that’s clearly wrong”), on Ilya Sutskever (“we never left the age of research”), on Elon Musk (“the singularity? very premature”), and on Dario Amodei’s five-year job displacement forecast (“my view on that would be a lot longer”). For a Davos interview, where consensus-building is the default mode, this four-front intellectual independence is striking.

Bloomberg’s Emily Chang sat down with the Google DeepMind CEO and Nobel laureate at Bloomberg House during the 2026 World Economic Forum. In 26 minutes, they covered Google’s competitive trajectory, the path to AGI, robotics, China, and what happens to human purpose when machines can do everything. What emerges is a portrait of someone who thinks in longer timelines than his peers but believes the destination is more radical than most imagine.

Google’s Full-Stack Bet

Hassabis frames Google’s advantage as structural, not incremental. Google DeepMind invented “about 90% of the breakthroughs” the modern AI industry relies on: Transformers, deep reinforcement learning, AlphaGo. But raw research isn’t the moat. The real edge is the full stack: TPU hardware, data centers, cloud business, a frontier lab, and billions-scale product surfaces from Search to Gmail to Chrome.

“It’s ferociously competitive out there. Maybe the most intense competition there has ever been in technology.”

The last three to four years have been “unbelievably intense,” with 100-hour weeks, 50 weeks a year as the norm. The internal culture shift matters: blending “startup energy” with big-company resources while protecting space for long-term exploratory research, not just work that ships in three months. Hassabis explicitly called it “a mistake” to only fund research with near-term product payoffs.

Google’s co-founders are deeply re-engaged. Larry Page focuses on strategy and board-level decisions. Sergey Brin is hands-on with the Gemini team, down to algorithmic coding details.

Jagged Intelligence and the 95% Problem

This is the most important concept in the interview. Hassabis describes current AI as having “jagged intelligence”: excellent at certain things, poor at others. The implication is precise: to delegate an entire task to an AI agent, it needs to handle 100% of it reliably. Being good at 95% isn’t enough.

“It’s no good for it to be good at 95% of that task. You need it to be good at the whole task for you to be able to actually just sort of fire and forget on it.”

This is why he pushed back on Amodei’s prediction that AI will wipe away 50% of entry-level white-collar jobs in five years. Today’s systems are “assistive programs,” not autonomous agents. The gap between assistance and delegation is where the real work remains.

The framing has implications beyond jobs. If AGI requires not further gains in any single capability but the elimination of this jaggedness, then the research agenda looks very different from “make models bigger.”

AGI by 2030: 50/50, with a High Bar

Hassabis maintains his 50% probability of AGI by 2030. But his definition is stricter than most: a system exhibiting all cognitive capabilities humans have. He called out specific missing pieces:

  • Scientific creativity is the big one. Not solving conjectures, but generating hypotheses. “Finding the right question is actually often way harder than finding the answer.” Current systems definitely can’t do this.
  • Continual learning: models are static after training; they need to learn on the fly, in the real world.
  • Consistency: the jagged intelligence problem again. A general system shouldn’t have these arbitrary edges.

On whether Transformers alone get us there, he puts it at 50/50. LLMs will be “a massively important component of the final system,” but possibly not the only one. He estimates fewer than five additional breakthroughs may be needed, including world models (Google DeepMind’s Genie system, which he works on directly) and better long-term planning.

“I could imagine there are one or two breakthroughs, maybe a small handful, less than five, that still need to happen from here.”

100x the Industrial Revolution

His disruption framing is specific. The AI transformation will be “ten times bigger and ten times faster” than the Industrial Revolution. The Industrial Revolution took about two generations. This one won’t give us that luxury.

But Hassabis is more interested in the far side of that disruption. With AGI, he sees a “post-scarcity world” unlocked by AI-driven breakthroughs in energy (potentially fusion), new materials, and scientific discovery. And then comes the part most tech leaders dodge:

“The thing I worry more about than the economics is what about purpose and meaning that a lot of us get from our jobs and scientific endeavors.”

He thinks we’ll need “some new great philosophers” to navigate that transition, not just economists and regulators. Personally, he plans to use post-AGI capabilities to explore fundamental physics: the nature of reality, consciousness, the Fermi paradox, what time actually is.

Physical Intelligence: 18 Months Away

Hassabis spent much of 2025 investigating robotics and believes we’re “on the cusp of a breakthrough moment in physical intelligence,” though he puts it at 18 months to two years out. Three specific bottlenecks:

  1. Algorithms: they need more robustness and far greater data efficiency than today’s systems deliver.
  2. Robot hardware, especially hands. Studying robotics gave him “a newfound appreciation for the human hand” and how exquisitely evolution balanced reliability, strength, and dexterity.
  3. Data scarcity: synthetic data is far easier to generate for digital models than for physical tasks.

Google DeepMind’s collaboration with Boston Dynamics is starting with automotive manufacturing, with prototype deployments expected over the coming year. Gemini was designed multimodal from the start, both for a universal assistant (on glasses or phone) and for robotics applications.

China: Six Months Behind, but Not Yet Innovating

Hassabis dismissed the “cataclysmic” framing of DeepSeek’s emergence as “a massive overreaction in the West.” His assessment is more measured than most frontier lab leaders’:

Chinese companies, particularly ByteDance (which he singled out as the most capable), are “maybe only six months behind, not one or two years behind the frontier.” But he flagged two important caveats:

  • Some of DeepSeek’s claims about minimal compute usage were exaggerated because they relied on fine-tuning against the outputs of leading Western models. “It wasn’t sort of de novo.”
  • Chinese companies have yet to demonstrate they can “innovate beyond the frontier” rather than catching up to it. They’re gaining fast, but original breakthroughs haven’t appeared.

A CERN for AI

His most ambitious proposal: an international, CERN-like institution for the final steps toward AGI. Not just technologists, but “philosophers, social scientists, and economists” collaborating on what humanity actually wants from this technology.

He acknowledged the obvious obstacle: “international cooperation is a little bit tricky at the moment.” Without that, he falls back on peer-based cooperation between lab leaders. He claimed to be “on pretty good terms with pretty much all the other leaders of all the leading labs,” citing close work with Anthropic on safety and security protocols.

Why Google Deserves Trust

When pressed on trust, Hassabis pointed to Google’s origins as a PhD project and the scientific culture embedded from day one. The board composition is his evidence: John Hennessy (Turing Award winner) as chair, Frances Arnold (Nobel laureate) as a member. “These are unusual people to have on a corporate board.”

On whether AI should get the Nobel Prize for a discovery, he’s firm: humans, for now. AI is “the ultimate scientific tool,” a better telescope or microscope, not an autonomous discoverer. The near future is collaboration: human scientists providing hypotheses, AI providing superhuman pattern matching.

Some Thoughts

This interview is most revealing for the gap between Hassabis and his peers on timelines and disruption speed. Where Amodei talks about massive near-term job displacement, Hassabis consistently bets on longer timelines and harder unsolved problems.

  • The “jagged intelligence” concept deserves more attention than it gets. The gap between 95% task completion and 100% is where most real-world economic value lives, and Hassabis thinks we’re further from closing it than the hype suggests.
  • His 100x Industrial Revolution claim isn’t hyperbole by his own math: 10x magnitude times 10x speed. If even directionally correct, every institution should be treating this as an existential-level strategic shift.
  • The meaning crisis callout is rare among tech leaders. Most default to “new jobs will emerge” or “people will find hobbies.” Hassabis is honest that this is an unsolved philosophical problem, and that honesty itself is informative.
  • His China assessment is the most calibrated take from a frontier lab leader: serious and competitive, but the innovation gap is real and underappreciated in both directions.