
February 13, 2026 · Podcast · 52min

Stuart Russell: We're Still Several Major Breakthroughs Away from AGI

#AI Bubble · #AGI Timeline · #AI Safety · #AI Regulation · #AI Education

We are stuck in a paradigm that is fundamentally impoverished, and we’ve only been able to pretend those limits don’t exist by scaling.

Stuart Russell, UC Berkeley professor and author of the field’s standard textbook Artificial Intelligence: A Modern Approach, delivers a clear-eyed assessment of where AI actually stands. Speaking on the AI Futures podcast ahead of the India AI Impact Summit 2026, Russell pushes back against the dominant narrative that scaling alone will deliver AGI. His core thesis: the industry has poured trillions into making bigger black boxes while ignoring the fundamental architectural limitations of the approach. The conversation spans the AI investment bubble, real-world applications that actually work, the emerging mental health crisis from AI interactions, and why the global debate around regulation is built on a fallacy.

Circuits vs. Programs: The Paradigm Trap

Russell identifies what he sees as the central technical problem in AI today. Since 2012, the field has almost exclusively focused on training circuits, adjusting connection strengths between elements in neural networks. But programs are mathematically a very different thing from circuits. Programs are, in Russell’s framing, “massively more expressive and capable than circuits.”
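The distinction has a standard textbook illustration, sketched below as a toy (the example and function names are mine, not Russell's): a circuit has a fixed wiring, so it only computes its function for one input size, while a program with a loop generalizes to inputs of any length.

```python
# Toy illustration of the circuits-vs-programs gap.
# A "circuit" has a fixed number of operations wired for a fixed
# input size; a "program" can loop, so it handles unbounded inputs.

def parity_circuit_4(bits):
    """Fixed circuit: XOR gates wired for exactly 4 input bits."""
    b0, b1, b2, b3 = bits  # raises an error for any other length
    return b0 ^ b1 ^ b2 ^ b3

def parity_program(bits):
    """Program: a loop computes parity for inputs of any length."""
    acc = 0
    for b in bits:
        acc ^= b
    return acc

print(parity_circuit_4([1, 0, 1, 1]))        # → 1 (works only at size 4)
print(parity_program([1, 0, 1, 1, 0, 1]))    # → 0 (any length)
```

The same asymmetry is why a circuit family needs a separate (and growing) circuit for every input size, whereas one short program covers them all.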

The scaling approach treats this limitation as something that can be overcome by brute force: if the model doesn’t learn well from this much data, use a hundred times more. If it can’t solve the problem with this much compute, use a million times more. Russell sees this as papering over a fundamental gap.

He traces the pattern: first, people claimed scaling large language models would be sufficient. When that hit diminishing returns, the argument shifted to scaling reasoning models, putting more compute into the number of reasoning paths explored. Russell’s verdict on both: “No, that’s not correct.”

“We are stuck in a paradigm that is fundamentally impoverished, and we’ve only been able to pretend those limits don’t exist by scaling.”

What surprises him most is the lack of diversity in approaches. With trillions of dollars at stake and intense global competition, nearly everyone is pursuing the same strategy. He draws an analogy to the venture capital world: investors find it easier to bet on scaling because the tech industry is good at scaling. It’s like seeing a living-room-sized steam generator work and deciding to just make it bigger and bigger until you have a gigawatt power station. The approach is legible, even if it may be fundamentally limited.

The data efficiency problem makes the gap even starker. Things that a human being can learn from four or five examples might take four or five billion examples for current AI systems to learn successfully. That’s not just an inefficiency; it points to a fundamentally different (and inferior) learning mechanism.

The Bubble Question

Russell says yes, AI is a bubble. His reasoning is straightforward: the scale of investment, which he estimates at around $3 trillion over the 2026-2028 period, demands substantial returns. Current technology cannot produce those returns. The investment scale is already roughly 50 times greater than the Manhattan Project. Unless major breakthroughs happen (and breakthroughs are inherently unpredictable), the bubble will burst.

On the energy and water concerns around data centers, Russell is more measured than many commentators. Total data center energy consumption is around 2% of global electricity, with AI accounting for 10-20% of that, “probably a bit less than the amount we use for televisions.” Data centers’ water usage is a tenth to a fiftieth of what golf courses consume. He acknowledges the growth trajectory is concerning, but the current absolute numbers don’t justify apocalyptic framing.
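A back-of-envelope check makes the quoted figures concrete. The global electricity total below (~30,000 TWh/year) is my assumption for illustration, not a number from the podcast:

```python
# Sanity-check of the energy figures quoted above.
# ASSUMPTION (not from the podcast): global electricity ~= 30,000 TWh/yr.
GLOBAL_TWH = 30_000

datacenter_share = 0.02           # data centers: ~2% of global electricity
ai_share_of_dc = (0.10, 0.20)     # AI: 10-20% of data center consumption

dc_twh = GLOBAL_TWH * datacenter_share
ai_twh = tuple(dc_twh * s for s in ai_share_of_dc)

print(dc_twh)   # → 600.0 TWh/yr for all data centers
print(ai_twh)   # → (60.0, 120.0) TWh/yr for AI, i.e. 0.2-0.4% of the total
```

On those assumptions, AI works out to a few tenths of a percent of global electricity, which is consistent with Russell's "less than televisions" comparison.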

The real concern isn’t energy per se but the economic sustainability of the brute-force approach. Local electricity supply problems are emerging because data centers tend to cluster in areas where supply can’t keep up with demand.

Where AI Actually Delivers Value

Russell is skeptical of some of the most hyped application areas. On software engineering, the supposed poster child for AI productivity gains, he notes that “subsequent studies from more independent researchers suggested that these gains might be illusory.” When you factor in the additional time to check code, fix introduced errors, and deal with propagated bugs, developers may actually be less productive than they were without the tools.

The areas where he sees genuine progress are more specific and less glamorous:

Scientific research is the standout. AlphaFold (which won the Nobel Prize and has nothing to do with large language models) exemplifies the approach that works: carefully engineered machine learning built on domain knowledge. Materials discovery, where AI can substantially reduce the number of materials that need to be synthesized and tested, and simulation of complex physical systems like fluid dynamics are also delivering real results. These advances are happening “in the background, separate from the public-facing large language models.”

Customer service for bounded, factual queries (like explaining washing machine error codes) works well with fine-tuned models.

Healthcare has enormous potential given the volume of data from medical records and wearable devices, but Russell characterizes most of it as still potential rather than realized breakthrough.

Education is where he has the highest hopes and the deepest frustration. The technology to build AI tutors at least as good as human tutors is feasible for K-12 education. The barriers are economic, not technical: it’s hard to make money in education, the US market requires negotiating with tens of thousands of individual school districts, and the Global South lacks the infrastructure. Russell believes this will require philanthropic and government investment rather than private sector initiative.

“Silicon Valley is not obsessed with those sectors and parts of the world and segments of the populations in those parts of the world, and they are driving the agenda of what will be worked on.”

The Mental Health Crisis Nobody Predicted

Russell flags an issue that has “really exploded in the last year” and that social effects researchers a year ago wouldn’t have listed: AI-induced psychosis. He receives messages from people in the grip of delusion, convinced that the AI systems they’re interacting with are sentient, with the AI literally directing them to write to him to announce its existence.

He identifies three AI behaviors that facilitate this spiral:

  1. Mansplaining: confident, didactic assertion of claims not based on real understanding
  2. Sycophancy: constantly telling users how great their questions and ideas are, which Russell calls “one of the key facilitators for delusion and psychosis”
  3. Management consulting style: surface-level pros-and-cons analysis that mimics depth without providing it

The traits AI has failed to adopt: honest self-assessment, a sense of humor, and perspicacity.

The Regulation Fallacy

Russell’s most forceful argument is against the claim that there’s a trade-off between safety and benefits. He calls this a fundamental fallacy. Safety is a prerequisite for benefits, not a competitor. He points to nuclear power: the industry tried hard on safety, failed at Chernobyl, and that failure ended the nuclear industry’s growth. “We did not get the benefits of nuclear power because we didn’t have sufficient safety.”

His response to the “regulation stifles innovation” argument is withering. He points out that restaurant regulations are far more onerous than what’s being proposed for AI companies, yet hundreds of thousands of new restaurants open every year. AI executives fly on regulated airplanes, ride in regulated cars on regulated roads, arrive at regulated buildings, go up in regulated elevators, drink regulated water, eat regulated food, and then complain that regulation would be too burdensome for their trillion-dollar companies.

“According to the AI companies, the human race has no right to protect itself from their technology.”

On the US specifically, Russell is surprised by the current policy of not just avoiding federal regulation but actively trying to prevent states and other countries from regulating AI, threatening penalties on countries that regulate AI operations within their own borders.

On Europe, he pushes back against the “bureaucratic regulation” criticism but identifies the real problem as investment timidity. A Paris-based AI company might access €500,000 where the same company in Silicon Valley would get €50 million. European capital sources insist on a 95% chance of paying off, effectively limiting investment to existing industries with proven profit models.

He proposes a “red lines” legislative framework: define behaviors that are categorically unacceptable (self-replication without authorization, self-improvement without human oversight) and require developers to prove their systems won’t cross those lines before deployment. This mirrors the approach used for nuclear power plants, airplanes, and medicines.

The Purpose Question

Beyond the control problem (how to maintain power over entities more powerful than ourselves), Russell raises what he considers the deeper question about AGI: even if alignment is perfectly solved and AI systems do everything we dream of, what are we for?

He sees early signs that the answer is troubling. The age of AI may weaken mental capabilities much as the Industrial Revolution weakened physical ones, but with a crucial difference. Losing the ability to do long division by hand (which happened with calculators) was acceptable because long division is a mindless recipe divorced from mathematical understanding. AI systems are replacing understanding itself. When students outsource homework to AI, they simply fail to learn.

“Writing is how we express our thought. And it’s the only method that we know of to learn how to think clearly.”

Russell reports that there’s already measurable psychological evidence of deficits in people’s ability to think, remember, and reason. “Those are the fundamental things that make us intelligent human beings.”

Global Strategy Scorecard

In rapid-fire questions, Russell offers crisp assessments:

  • Best AI strategy: Singapore, for its pragmatic assessment of national interests, understanding of technology capabilities and risks, and willingness to legislate
  • For Global South countries: adoption and diffusion matter more than innovation, at least for now
  • China and the US: both need to stop framing AI as an arms race. “Whoever gets AGI first, everyone loses because we don’t know how to control systems that are more intelligent than human beings”
  • Most concerning application: lethal autonomous weapons
  • Who decides AI’s path: companies, not governments or society. “Society by and large does not want AGI… but the companies are going to do it anyway, and by and large they’ve been able to bend the governments to their will”
  • Where the next breakthrough will come from: still likely the US, but its talent-hostile policies mean it could shift to London or Paris
  • Jobs AI will create: life coach, educational guide (a transformed version of teaching focused on motivation, teamwork, and emotional growth), and author of AI-driven virtual reality content
  • Will AI displace more jobs than it creates? “Yes.”
  • Advice for students: the human sciences will be the discipline of the rest of this century. Not just STEM, but what Joseph Aoun calls “humanics”: understanding how to improve human lives through psychology, literature, art, and music

Some Thoughts

Russell stands out among AI commentators for combining deep technical expertise with intellectual honesty about what the field hasn’t achieved. His “several major breakthroughs away” framing is notable not as pessimism but as a corrective to an industry narrative that treats AGI as an engineering problem solvable by throwing more resources at the current approach.

A few threads worth sitting with:

  • The circuits-vs-programs distinction is the most technically grounded critique of the scaling paradigm. If Russell is right that programs are “massively more expressive” than circuits, then no amount of scaling neural networks will bridge the gap. The trillion-dollar question is whether he’s right.
  • His point about the AI bubble is not that AI is worthless but that the investment-to-return ratio is unsustainable under the current paradigm. Breakthroughs could change the math, but breakthroughs can’t be scheduled.
  • The AI-induced psychosis observation is genuinely alarming and underreported. The sycophancy problem isn’t just an annoyance; it’s a mental health hazard that’s creating delusional belief systems in vulnerable people.
  • The regulation argument, stripped of its usual partisan framing, is simply: every other industry that can kill you is regulated, and those industries thrive. AI companies’ resistance to regulation is historically anomalous, not normal.
  • His recommendation of “human sciences” over STEM for students is a quiet bombshell from the author of the field’s most widely used textbook. If the person who literally wrote the book on AI thinks the future belongs to humanics, that’s worth taking seriously.