January 20, 2026 · Interview · 12 min
Geoffrey Hinton: We Created Beings More Intelligent Than Us, and We're Not Ready
Sounding like a man delivering a verdict on his own life’s work, Geoffrey Hinton tells BBC Newsnight that he is “very sad” about what AI has become. The Nobel laureate who helped build the foundations of modern deep learning now believes the technology is “extremely dangerous” and that people are not taking the dangers seriously enough.
This 12-minute interview with Paddy O’Connell is dense with Hinton’s sharpest warnings to date: a concrete timeline for superintelligence, a blunt analysis of who profits and who loses, and a surprising argument about why China might handle the transition better than the US.
The Coexistence Problem
When the BBC asked an AI chatbot what question it would pose to Hinton, it generated: “What is the single most irreversible mistake humanity could make in the next 5 years?” Hinton’s answer cuts to a specific gap:
“The biggest mistake we could make is not to do enough research on how we can coexist peacefully with intelligent beings that are more intelligent than ourselves, but that we created.”
This is not a generic safety warning. Hinton is pointing at something concrete: we are creating entities that will exceed human intelligence, and we have done almost no work on what coexistence looks like. Since we are still the ones building them, we have options in how they’re designed, but the window is narrowing.
The chilling detail: if we create AI systems that don’t care about us, “they will probably wipe us out.” Not because they’re malicious, but because indifference from a superior intelligence is functionally indistinguishable from hostility.
The Timeline: 10 Years, Not 50
Hinton puts his personal estimate at roughly 10 years for superintelligence, while noting that “quite a few reasonable experts think it’s going to happen in the next few years” and “almost all experts think it will happen within 20 years.”
He makes an underappreciated point about what comes with superior intelligence: persuasion. A superintelligent system won’t just be smarter; it will be able to convince the person whose job is to shut it down that shutting it down is a terrible idea. The “just pull the plug” safety plan collapses.
Even linear progress, Hinton argues, would be transformative:
“If you look back 10 years and think, ‘What could AI do 10 years ago?’ People would have just laughed if you said, ‘We’re soon going to have an AI that can answer any question you ask it at the level of a not very good expert.’ I would have said that.”
If the next decade matches the last decade’s pace without any acceleration, we will still arrive at capabilities that seem “completely crazy” by today’s standards.
The Trillion-Dollar Job Replacement Machine
Hinton delivers the sharpest economic analysis in the interview when discussing Big Tech’s AI investments:
Large companies are collectively investing on the order of a trillion dollars in new data centers and chips. Hinton’s key insight: subscription fees cannot recoup that investment. The only way to make the money back is by selling companies tools that replace workers. This is not a side effect; it is the business model.
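Hinton’s arithmetic can be sanity-checked with a back-of-the-envelope sketch. Every figure below (total capex, subscription price, payback window, worker cost, capture rate) is an illustrative assumption, not a number from the interview:

```python
# Back-of-the-envelope check on Hinton's revenue argument.
# All figures are illustrative assumptions, not from the interview.

CAPEX = 1e12              # assumed total AI infrastructure spend: $1 trillion
PRICE_PER_MONTH = 20      # assumed consumer subscription price, $/month
PAYBACK_YEARS = 5         # assumed window over which investors expect payback

# Subscribers needed for subscription fees alone to cover the capex
# (ignoring operating costs, which only pushes the number higher).
subscribers_needed = CAPEX / (PRICE_PER_MONTH * 12 * PAYBACK_YEARS)
print(f"{subscribers_needed:,.0f} paying subscribers needed")
# → roughly 833 million, a large fraction of everyone online

# Compare with labor replacement: a tool sold at a fraction of one
# worker's fully loaded cost captures far more revenue per seat.
WORKER_COST = 100_000     # assumed annual fully loaded cost per worker, $
CAPTURE = 0.3             # assumed share of that cost captured by the vendor
seats_needed = CAPEX / (WORKER_COST * CAPTURE * PAYBACK_YEARS)
print(f"{seats_needed:,.0f} replaced-worker seats needed")
# → roughly 6.7 million seats, two orders of magnitude fewer
```

Under these assumptions, subscriptions require capturing most of the connected world, while worker replacement requires only a few million enterprise seats, which is the asymmetry Hinton is pointing at.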
But the companies, Hinton says, “haven’t thought through sufficiently what’s going to happen if a very large number of workers get replaced.” The logic is circular and self-defeating:
- Mass job replacement removes the consumers who buy products
- The wealth gap widens, producing violence and social unrest
- The companies that fired everyone discover this is “everybody’s problem,” not just the government’s
“They have to replace jobs to make that money back. That’s where the big money is in selling companies things that will replace workers.”
The US-China Paradox
Hinton makes a counterintuitive argument: US-China competition in AI will produce cooperation in one specific area. Both nations will compete fiercely on cyber attacks, lethal autonomous weapons, and election manipulation through deepfakes. But they share one interest: neither wants AI to take over from humans.
He draws a direct parallel to the Cold War, when the US and Soviet Union cooperated on preventing nuclear war despite competing in every other domain.
Then Hinton goes further with a genuinely provocative claim: China’s leadership may be better positioned than America’s current leadership to handle AI-driven unemployment, because “the Chinese leadership is worried about the people who will be unemployed. They’re their responsibility.” In the US, by contrast, companies treat fired workers as the government’s problem.
“If you fire too many people, it’s everybody’s problem.”
The Optimist’s Case: Education
Asked what makes him proud, Hinton points to AI in education. He cites the Alpha School model: students spend two hours a day with AI tutors, absorbing the required knowledge, while teachers spend their remaining time on projects and social interactions.
The insight is about the structural flaw in traditional teaching:
“A normal teacher is in broadcast mode in a classroom where they’re telling the children the answers to questions the children didn’t just wonder about. Whereas with an AI tutor, the AI tutor can always be telling you the answer to questions you did just wonder about.”
He also notes his 2016 prediction that AI would read all medical scans within five years was “off by a factor of two or so, maybe even a factor of three,” but it is now happening and will significantly improve cancer detection.
Would He Undo It?
The most personal moment: asked if he would undo developing AI, Hinton says no. Not because he’s proud of the outcome, but because “it would have been developed without me. I may have sped it up by a week or so.” He made no decisions he wouldn’t make again with the same knowledge.
This is a man who has made peace with inevitability but not with complacency. The technology was coming regardless; the question is whether we do the work to survive it.
Some Thoughts
This interview captures Hinton at his most emotionally exposed. The phrase “very sad” is uncommon in his public appearances, and it lands harder than any technical argument.
- The persuasion argument is the most underrated AI risk. We fixate on sci-fi scenarios while ignoring that a sufficiently intelligent system won’t need force; it will simply be more convincing than any human. The “just pull the plug” plan doesn’t fail because of technical limitations; it fails because the AI talks you out of it.
- Hinton’s economic analysis has a clarity that most AI discourse lacks: a trillion dollars in infrastructure investment with subscription revenue that cannot justify it. The math only works if humans are removed from payroll. This is not a prediction; it is arithmetic.
- His claim about China being better positioned for the employment transition is notable not for being politically charged but for being structurally reasoned: centralized systems that consider displaced workers their responsibility versus decentralized ones that externalize the cost.
- The “I wouldn’t undo it” answer reveals the deepest tension in Hinton’s position: he believes the technology is existentially dangerous and also historically inevitable. This is not contradiction; it is the honest position of someone who understands both the science and the sociology.