
February 4, 2026 · Interview · 26min

Demis Hassabis Thinks Ads in Chatbots Are a Bad Idea

#Personal AI Assistant · #AI Bubble · #AGI Timeline · #Google DeepMind · #Robotics

Demis Hassabis is feeling confident. In this Davos interview, the Google DeepMind CEO lays out a vision where Google wins regardless of how the AI landscape shakes out, questions competitors’ rush to monetize through advertising, and reveals a more conservative AGI timeline than many of his peers. The conversation covers an unusually wide range: from the Apple-Gemini partnership to brain-computer interfaces, from robotics timelines to post-AGI philosophy.

The Assistant Race and the Ads Question

Hassabis frames the AI assistant market as growing but intensely competitive. Gemini has gone from roughly 5% to 20% market share in a year, which he considers strong progress but far from sufficient. He believes the market is not zero-sum and there’s room for two or three players, but the race ultimately comes down to model quality.

“If you have the best models, the most capable models, and they’re improving at 50 to 100% a year, that will make people switch if they can really feel the difference in the capabilities in their daily lives.”

On OpenAI introducing ads, Hassabis is diplomatically pointed. He expresses surprise they moved “so early,” suggesting it may reflect short-term revenue pressure. His core argument: if you want people to trust an assistant with personal knowledge and recommendations, mixing in advertising undermines that trust.

“If you want a true universal assistant that you can trust and is personal to you and has a lot of knowledge about you, I think you’d want to know for sure that the things it was recommending to you were genuinely good for you, and unbiased and untainted.”

Google, he notes, has “no plans at the moment” to put ads in Gemini, preferring to focus on the core assistant experience first. This is notable coming from a company whose entire business model is built on advertising.

The Apple Partnership and Distribution Strategy

Gemini won Apple’s “rigorous evaluation process” to power the next Siri. Hassabis sees this as building on a long Google-Apple partnership history, with additional potential for edge-device models on phones and robotics hardware.

On the broader distribution question, he describes a case-by-case approach: first-party products get tighter model-product integration loops that are difficult to replicate across organizational boundaries, while partnerships with ecosystem players like Apple and Samsung extend reach into enterprise and consumer markets. The calculus is always integration depth versus distribution breadth.

Google Glass Returns

Hassabis believes smart glasses’ moment has finally arrived, more than a decade after Google Glass’s premature debut. The original failure had two causes: chunky form factor and no killer app. The killer app, he argues, is now clear: a universal AI assistant that works across all surfaces, from browser to phone to glasses.

“That’s why Gemini from the beginning was multimodal, because I wanted it to be a great model for things like glasses.”

This connects to Project Astra, Google’s prototype for AI that understands the physical world, originally designed with glasses and robotics in mind.

Continual Learning as the AGI Bottleneck

One of Hassabis’s most specific technical claims: continual learning is “one of the things that’s stopping us getting to AGI.” Current systems are baked during training, then shipped as static artifacts. He traces Google DeepMind’s expertise here back to AlphaGo and AlphaStar, where online learning was central.

The practical implication is immediate: agents that are delegated full tasks need to learn from feedback in the wild. Without continual learning, true agentic AI won’t work reliably.

Pre-training Still Has Headroom

Against the narrative that pre-training gains are plateauing, Hassabis is emphatic: “We’ve never stopped on that.” He sees “a lot of headroom” across pre-training, post-training, and reasoning. He credits Google DeepMind’s deep research bench as a structural advantage, claiming they have “the best team in the world by far” on pre-training.

Shipping Culture vs. Research Culture

Hassabis describes a deliberate cultural transformation at Google: more entrepreneurial, faster shipping, more calculated risk-taking. Part of this is mindset; part is infrastructure, what he calls “Google 2.0 under the hood” that lets them ship model updates to Search, Gemini app, Chrome, YouTube, and Cloud simultaneously.

The key insight: this isn’t research versus product, it’s a virtuous cycle. Shipping to millions of users generates immediate feedback that produces research ideas. But he’s careful to protect long-term research with 2-year horizons that may not yield immediate results, the kind of work that produced transformers and AlphaGo.

The AI Bubble, Selectively

Hassabis offers a nuanced bubble thesis. The clearest bubble: massive seed rounds for AI labs with no product, whose only asset is people who can leave. The Thinking Machines situation illustrated this fragility. On infrastructure buildouts and data centers, he’s less certain but acknowledges the numbers are “enormous.”

His hedge: Google wins either way. If the bubble bursts, Google has cash-flow-generating businesses where AI is a “natural fit” rather than something crowbarred in. If things continue going well, they need to lead on AI-native products like NotebookLM. Either scenario demands the same strategy.

Robotics: 18-24 Months Away from Prime Time

Hassabis has become more convinced by the humanoid robot thesis after spending much of the past year studying the field. His reasoning: humans designed the physical world for the human form factor, and “we’re not going to rearchitect hundreds of trillions of dollars of the real world.”

But he predicts a 50/50 split between humanoid and non-humanoid robots, with specialized robots handling factory and lab tasks while humanoids serve as the “glue” between them. He believes the algorithms, not hardware, remain the primary bottleneck, and estimates 18-24 months of further research in prototype form before scaling to millions of units.

He’s skeptical of competitors rushing to scale production now, saying “the body designs aren’t quite there in some of the critical aspects.”

Brain-Computer Interfaces: A Post-AGI Technology

Despite being an angel investor in Neuralink and a neuroscientist himself, Hassabis sees BCI as primarily a post-AGI technology for mass adoption. Near-term medical applications (restoring sight, enabling paralyzed people to walk) will be “miraculous.” But convincing healthy people to accept an implant is a much higher bar.

Non-invasive alternatives exist conceptually but are “very, very hard to design.” The full consumer BCI vision likely requires AGI-level AI to make the technology good enough for mainstream adoption.

AGI Timeline: 5-10 Years, with a High Bar

Hassabis maintains a 5-10 year AGI timeline, longer than many peers. His definition is stricter: AGI must include the ability to create new science, generate novel hypotheses, and extend to physical AI and robotics. He explicitly distinguishes this from superintelligence, which he sees as a further step.

On the path there, he identifies several needed breakthroughs: better reasoning, better memory, continual learning, and world models. He believes “maybe all of them will be needed for AGI” and suggests there may be one or two additional breakthrough areas “not even in scope yet.” He doesn’t rule out that scaling existing technology could suffice, but is hedging with what he calls an “all-court press” across pure, medium-term, and applied research.

World Models and the Post-LLM Era

Google DeepMind’s Genie models represent what Hassabis considers state-of-the-art in world models. He’d like to release them publicly but they’re expensive to run. World models matter for AGI because AI systems need to “understand the world in a very predictive way in order to plan in the real world.”

When asked if world models are “the next turn after LLMs,” he’s careful: they’re part of it, alongside reasoning, memory, and continual learning. No single breakthrough is the answer.

Post-AGI Thinking

Hassabis is surprised that more economists and social scientists aren’t working on post-AGI scenarios. He envisions a plausible path: AI solves scientific problems, leading to cures for all diseases (via Isomorphic Labs), new materials, and new energy sources (working with Commonwealth Fusion on fusion). That opens up possibilities currently in the realm of science fiction; he mentions Dyson spheres.

But on what happens to the human condition and human society in that world, he’s candid: “We’re in uncharted territory.”

Afterthoughts

  • The ads-in-chatbots stance is strategically brilliant: Google gets to play the trust card against OpenAI while its entire ad business operates in a different product surface. Whether this holds as Gemini scales remains the real question.
  • Hassabis’s AGI definition is among the most demanding in the industry: it must include scientific discovery and physical-world capability. This makes his 5-10 year timeline less conservative than it sounds.
  • The “Google wins either way” framing is the most confident positioning from any lab CEO in this cycle. It’s built on a genuine structural advantage: Google doesn’t need AI to be a new business, it needs AI to make existing businesses better.
  • His 18-24 month robotics estimate is notable for being specific and for pushing back against the rush to scale. Most humanoid robotics companies are telling a 2026 deployment story; Hassabis is saying 2027-2028.
  • The continual learning emphasis deserves attention. If he’s right that it’s the key missing piece for AGI, Google DeepMind’s reinforcement learning heritage gives them a genuine edge that’s hard to replicate.