
February 19, 2026 · Interview · 39min

Demis Hassabis on AGI, Scientific Taste, and Why Games Still Matter

#AGI · #Scientific Discovery · #Game Development · #AI in India

Demis Hassabis sat down with host Varun Mayya and IISc Director Govindan Rangarajan at the Indian Institute of Science in Bangalore. The conversation ranged from AlphaFold’s implications for Indian pharmaceutical manufacturing to why forgetting might be the missing piece in AI memory systems. What emerged was a portrait of Hassabis as someone who still thinks like a game designer: obsessed with finding the right level of abstraction, the right problem to solve, and the right balance between exploration and exploitation.

The AlphaFold Pipeline: From Protein Structure to Indian Pharmacies

Hassabis framed AlphaFold not as a finished product but as “the beginning.” The system solved one component of drug discovery, the protein structure prediction problem, which had been a 50-year grand challenge. But structure prediction is only one piece. Isomorphic Labs, the spinoff built on AlphaFold’s foundation, is now tackling the harder adjacent problems: compound binding, toxicity prediction, absorption properties, and side effect modeling.

The ambition is staggering in its specificity: reduce the average drug development timeline from 10 years to months, possibly weeks. Hassabis acknowledged this sounds like science fiction but pointed to AlphaFold itself as precedent. Predicting the structures of all 200 million known proteins “would have seemed impossible 10 years ago.”

For India specifically, Isomorphic Labs already collaborates with contract research organizations in the country. Combined with Gemini’s potential to democratize healthcare information access, the path from DeepMind research to Indian patients runs through multiple channels.

Scientific Taste: The Hardest Thing for Machines to Learn

The most philosophically rich segment centered on “scientific taste,” the ability to choose the right problems. Rangarajan and Hassabis offered complementary perspectives that together paint a nuanced picture.

Rangarajan proposed an apprenticeship model: train a custom LLM under a master scientist’s direct feedback, the way graduate students learn taste from their PhD advisors. He noted that standard RLHF “tends to average things out and you revert to the mean,” which is precisely the opposite of taste. His vision was provocative: future AI systems might trace their intellectual lineage to specific human mentors, creating “schools” of AI thinking, “the Demis school” or “the Rangarajan school,” each with distinct intuitions about which problems matter.
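The "revert to the mean" point can be made concrete with a toy numerical sketch (my illustration, not anyone's actual training setup): when preference scores are averaged across a large pool of raters, the aggregate signal flattens toward the middle, while a single decisive mentor's scores retain the sharp distinctions that constitute taste.

```python
# Toy illustration of why averaging feedback across many raters
# "reverts to the mean": one mentor's decisive, idiosyncratic scores
# get washed out when aggregated with a lukewarm crowd.
# All names and numbers here are made up for illustration.
import random

random.seed(0)

N_PROBLEMS = 50   # candidate research problems to rate
N_RATERS = 100    # size of the rating pool

# One mentor has strong opinions: each problem is either worth doing or not.
mentor = [random.choice([0.0, 1.0]) for _ in range(N_PROBLEMS)]

# The crowd's scores cluster around indifference (mean 0.5).
crowd = [[random.gauss(0.5, 0.15) for _ in range(N_PROBLEMS)]
         for _ in range(N_RATERS)]

# Crowd-averaged reward signal, as in standard aggregate preference tuning.
avg = [sum(rater[i] for rater in crowd) / N_RATERS
       for i in range(N_PROBLEMS)]

def spread(scores):
    """Range of the score distribution: a crude proxy for decisiveness."""
    return max(scores) - min(scores)

print(f"mentor spread: {spread(mentor):.2f}")  # decisive, near 1.0
print(f"crowd spread:  {spread(avg):.2f}")     # flattened toward 0.5
```

The averaged signal barely distinguishes any problem from any other, which is exactly the failure mode Rangarajan's apprenticeship model is meant to avoid.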

Hassabis broke taste down into intuition and creativity, calling it “probably the hardest thing for machines to be able to mimic.” His key insight was that taste cannot be passively learned. It requires active experimentation, the same kind of hands-on, trial-and-error process that graduate students go through. He drew a parallel to AI development itself: machines will need to do active experimentation at the frontier of knowledge, not just consume training data, to develop anything resembling scientific judgment.

“Every one of you here is of course great technically, but then to really discover something new, ask the right question, formulate the right hypothesis, requires good taste.”

Gaming’s New Golden Era

Hassabis traced his AI career back to working at Bullfrog Productions at age 16, where he saw how much people enjoyed interacting with game AI in titles like Theme Park. The full circle is striking: GPUs were invented for games, games pushed the frontier of graphics and AI, and now AI is mature enough to help build games.

On Genie 3, DeepMind’s world model, he was characteristically precise about its limitations. You can type a prompt and get a playable world, but it only stays coherent for about one minute, “like a dream, and then it disappears.” He estimated 4-5 years before the coherence window extends meaningfully. But he was clear that world models alone do not make games. Game design, game mechanics, the accumulated craft knowledge of the industry, all remain essential.

What excited him most was not the technical capability but the structural consequence: AI tools could bring back the conditions of the early 1990s, when small teams could experiment with creative ideas because prototyping was fast and cheap. This could enable entirely new genres, particularly AI-populated massive multiplayer worlds where NPCs actually advance storylines intelligently.

How to Actually Become a Polymath

The polymath discussion went beyond platitudes. Rangarajan attributed it simply to curiosity, the drive to explore different disciplines. But Hassabis offered a practical framework:

  1. Become world-class in at least one domain. This is non-negotiable. Without deep expertise somewhere, cross-disciplinary work becomes superficial.

  2. Develop techniques to rapidly learn other fields to graduate level. Transfer your learning methods, find connection points, understand new domains from first principles. This is the meta-skill.

  3. Look for intersections. DeepMind itself was born at the intersection of neuroscience, engineering, and machine learning. Isomorphic Labs sits at machine learning, chemistry, and biology. The fastest progress happens at these boundaries.

  4. Cultivate both confidence and humility. You need the confidence to be a beginner again in a new field when you are already an expert elsewhere, and the humility to actually learn from the experts there.

Hassabis pointed to a systemic barrier: academic reward structures do not encourage this. Departments silo, and the incentives push toward narrow specialization. He gently suggested that the Indian education system, with its separated specialized institutions, has made this problem worse. Rangarajan acknowledged this, noting that IISc is trying to remedy the situation by bringing medicine back into the institute, but admitted “it’s going to be a bigger issue with AI coming in.”

AGI: The Einstein Test

Hassabis’s AGI definition has remained unchanged for 20-30 years: a system that can exhibit all cognitive capabilities that humans can. He studied neuroscience specifically because the human brain is “the only existence proof we have” of general intelligence.

His proposed test was vivid and specific: train an AI system with a knowledge cutoff of 1911 and see if it could independently derive general relativity by 1915, as Einstein did. This is dramatically harder than current benchmarks. Today’s systems can win gold medals at the International Mathematical Olympiad but “fall over on relatively simple math problems if you pose them in a certain way.” This inconsistency, what he called “jagged intelligence,” is the tell that true generality is still missing.

He listed the concrete gaps: true creativity, continual learning, long-term planning, and general consistency across capabilities. Building foundation models like Gemini is, in his view, “on the shortest path to AGI,” but he estimated we are “still a few years away.”

The economic logic reinforces the scientific one: a general system that can transfer to specialized domains is more efficient than building hundreds of separate specialized systems. This is why the industry is converging on the same path.

The Missing Piece: Forgetting

The audience Q&A surfaced a fascinating thread on memory. Hassabis compared current context windows to a crude approximation of the hippocampus. Human working memory holds about seven items; Gemini’s context window holds a million tokens. But this brute-force approach has a fundamental problem: most tokens are irrelevant.

Human memory works differently. We remember emotional events, both positive and negative, better than neutral ones. This selective encoding serves a purpose: it filters for what is useful for future learning or future behavior. AI systems lack this. Project Astra, DeepMind’s glasses-based assistant, can record about 20 minutes of video (roughly a million tokens), but searching through all of it is expensive.

“Ironically, one of the things we maybe are missing is forgetting, or if you want to talk about it in computer science language, garbage collection.”

The implication: AI memory systems need not just more capacity but better curation, some equivalent of the amygdala’s value judgment that decides what is worth retaining. Hassabis confirmed this is an active research direction at DeepMind.
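The curation idea can be sketched in a few lines (a minimal illustration, not DeepMind's design): a bounded memory that retains items by a salience score rather than by recency alone, evicting the lowest-valued item whenever capacity is exceeded, i.e. "garbage collection" driven by a value judgment.

```python
# A minimal sketch of "forgetting as garbage collection": a bounded
# memory keyed on salience (a stand-in for the amygdala's value
# judgment). When full, the least salient memory is evicted.
# Class name, scores, and items are all hypothetical.
import heapq

class SalienceMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []      # min-heap of (salience, counter, item)
        self._counter = 0    # insertion order breaks salience ties

    def store(self, item, salience):
        """Add an item; if over capacity, forget the least salient one."""
        heapq.heappush(self._heap, (salience, self._counter, item))
        self._counter += 1
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)  # evict the lowest-salience entry

    def recall(self):
        """Return retained items, most salient first."""
        return [item for _, _, item in sorted(self._heap, reverse=True)]

mem = SalienceMemory(capacity=3)
mem.store("weather was mild", salience=0.10)
mem.store("deadline moved up", salience=0.90)
mem.store("coffee order", salience=0.20)
mem.store("paper got accepted", salience=0.95)  # evicts "weather was mild"
print(mem.recall())
```

The contrast with a raw context window is the point: instead of keeping the last N tokens and searching all of them, a curated store keeps only what a value function judged worth retaining.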

Balancing Research and Revenue at Google DeepMind

On the tension between Google’s commercial pressure and DeepMind’s research mission, Hassabis described a roughly 50/50 split. Half his team works on immediate priorities, supporting Google products and Gemini development. The other half works on “the next frontier” with an 18-month to two-year horizon.

His job, as he framed it, is to “protect the blue sky research and make sure it has room to flourish.” The commercial work is not merely a tax; building foundation models like Gemini is itself on the path to AGI. But the balance requires active management, ensuring the organization is “not overly focused on the near-term.”

What’s Coming Next

Both speakers offered predictions. Rangarajan pointed to AI in mathematics: the public will be surprised when AI makes “tremendous progress” in a field perceived as populated by geniuses, because mathematics is built on axioms and definitions that make predictions verifiable.

Hassabis pointed to AI in the physical world. Robotics will “come of age in the next 2-3 years.” Gemini’s multimodal capabilities are being developed for glasses-based assistants that understand physical context. Self-driving cars are about to become a global reality. And automated labs could speed up not just theoretical scientific discovery but practical experimentation.

Closing Notes

  • Scientific taste as the ultimate human differentiator is a more hopeful framing than most discussions about AI and human value. If taste requires active experimentation and cannot be passively learned, then the value of human experience does not diminish with better AI; it becomes the bottleneck.
  • The apprenticeship model for AI taste is deeply counterintuitive. Training AI systems under individual mentors rather than averaging across all human knowledge suggests that the path to machine creativity might run through idiosyncrasy, not consensus.
  • Hassabis’s evolution metaphor, from assembler to C++ to Python to natural language, reframes AI coding tools not as job-killers but as the next step in a decades-long abstraction ladder. The craft changes; it does not disappear.
  • The forgetting insight connects to a broader pattern: the most important AI advances may come not from adding more capabilities but from building better filters, teaching machines what to ignore.