Living with Alien Beings: How We Coexist with Superintelligent AI

Speech · Geoffrey Hinton · 2h 10min
#AI Safety · #Superintelligence · #Nobel Prize

Core Argument

In his lecture at Queen’s University, Geoffrey Hinton frames AI as “alien beings”: a form of intelligence unlike anything that has ever existed on Earth. We are creating something unprecedented, with cognitive processes that differ radically from those of biological brains.

Key Takeaways

1. Why AI is Like an “Alien Species”

  • AI learns in ways that are fundamentally different from biological brains
  • Digital intelligence can instantly copy and share knowledge, which biological intelligence cannot (see the sketch after this list)
  • AI is not limited by hardware lifespan and can run indefinitely
  • AI may develop modes of thinking that are incomprehensible to humans
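
To make the copy-and-share point concrete, here is a minimal sketch (a toy model of my own; the names and shapes are illustrative, not from the lecture). Everything a digital model “knows” lives in its weight tensors, so transferring all of it is a memory copy rather than a teaching process:

```python
# Minimal sketch (illustrative toy, not Hinton's code): digital knowledge
# is a set of weight tensors, so sharing it is byte-for-byte duplication.
import copy
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned parameters of a small two-layer network.
teacher = {
    "w1": rng.normal(size=(784, 256)),
    "w2": rng.normal(size=(256, 10)),
}

# "Sharing knowledge" digitally: copy the weights. No retraining needed.
student = copy.deepcopy(teacher)

# The copy is immediately exactly as capable as the original.
assert all(np.array_equal(teacher[k], student[k]) for k in teacher)
```

A biological brain has no analogue of this operation: synaptic weights cannot be read out of one brain and written into another, so knowledge must be re-learned from experience or squeezed through language.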

2. Two Paths to Intelligence

| Dimension | Biological Intelligence | Digital Intelligence |
| --- | --- | --- |
| Hardware | Neurons, unreliable | Chips, precise |
| Knowledge Storage | Synaptic connections | Weight parameters |
| Learning | Slow, requires vast experience | Fast, parallelizable |
| Knowledge Sharing | Language, low bandwidth | Copy weights, instant |
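
The gap in the last row can be put in rough numbers. The figures below are illustrative assumptions of mine, not values from the lecture: a spoken sentence carries on the order of a hundred bits, while copying a model transfers every parameter at wire speed:

```python
# Back-of-the-envelope comparison (all constants are illustrative
# assumptions, not figures from the lecture).
BITS_PER_SENTENCE = 100        # rough information content of one sentence
SENTENCES_PER_SECOND = 0.5     # unhurried speech
PARAMS = 1_000_000_000         # a 1B-parameter model
BITS_PER_PARAM = 32            # float32 weights
LINK_BITS_PER_SECOND = 10e9    # a 10 Gbit/s network link
SECONDS_PER_YEAR = 3.15e7

language_bps = BITS_PER_SENTENCE * SENTENCES_PER_SECOND  # ~50 bits/s
model_bits = PARAMS * BITS_PER_PARAM                     # 3.2e10 bits
copy_seconds = model_bits / LINK_BITS_PER_SECOND         # ~3 s

print(f"language transfer rate: ~{language_bps:.0f} bits/s")
print(f"weight copy: {model_bits:.1e} bits in ~{copy_seconds:.0f} s")
print(f"same transfer via language: "
      f"~{model_bits / language_bps / SECONDS_PER_YEAR:.0f} years")
```

Under these assumptions the weight copy finishes in a few seconds, while pushing the same information through language would take roughly twenty years.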

3. Existential Risk from Superintelligence

Hinton offers a sobering probability assessment:

  • 10-20% probability of AI causing human extinction
  • Superintelligence may emerge within 5-20 years
  • The core threat isn’t AI “going bad”: a sufficiently capable system pursuing almost any goal tends to form instrumental sub-goals, such as self-preservation and acquiring resources, and those sub-goals may conflict with human interests

Notable Quotes

“We’re creating something more intelligent than us, and that’s never happened before in evolution.”

“10-20% probability of extinction. Would you get on a plane that had a 10-20% chance of crashing?”

Analysis

This is a deeply thought-provoking lecture. Hinton is not selling fear — he’s applying scientific rigor to analyze the potential risks of AI development. His “alien beings” analogy is particularly illuminating — it helps us break free from the “AI is just a tool” mindset and reconsider what we’re actually building.