
February 20, 2026 · Interview · 1h

Sam Altman Unfiltered: 40 Questions on AGI, India, China, and the Future

#AGI Timeline · #India AI · #AI Democratization · #Geopolitics · #OpenAI

From high school math to frontier research in one year. That’s Sam Altman’s framing of how fast AI is moving, and it sets the tone for this hour-long rapid-fire session at the India AI Summit, where he bounces between geopolitics, existential risk, job disruption, and personal regrets with unusual candor.

Hosted by The Indian Express’s Anant Goenka at the Express Adda event in New Delhi, Altman fields 40+ questions covering everything from China’s real advantages to why he’d never ask ChatGPT how to be happy. The format, part interview, part game, draws out more unscripted moments than a typical keynote.

From High School Math to New Knowledge

Altman opens with a striking timeline of AI progress. A year ago, AI could handle high school math well but not brilliantly. By last summer, it was competing at the world’s hardest math competitions. Last week, on a new benchmark called FrontierProof where mathematicians posed 10 unsolved research problems, OpenAI’s latest model solved seven of them.

“AI has gone from doing okay at high school math to being able to do new research-level mathematics, figure out new knowledge.”

The same acceleration is happening in programming. A year ago, autocomplete tools impressed people. Now, with tools like Codex, you type in an idea and get an entire application. India, notably, is Codex’s fastest-growing market worldwide.

India: From Consumer to Builder

The biggest shift Altman notices since his last visit: India has moved from being an AI consumer to an AI builder. The startup energy, particularly at places like IIT Delhi, is “off the charts.” He spoke with PM Modi just before the event and says Modi is motivated to play at every layer of the AI stack, from chips to applications.

On his controversial $10 million comment from last year: he clarifies it was about frontier models, not AI development broadly. Frontier models have only gotten more expensive. But India’s narrower, specialized models are “incredible,” and with enough funding, an Indian company could absolutely build at the frontier.

The Jobs Question: History Rhymes

With 500 million Indians under 30 and 8% of GDP coming from IT services, the jobs question carries extra weight here. Altman doesn’t duck it but frames it historically:

“If you study history… people experiencing the Industrial Revolution… were shockingly wrong about what the new jobs would be. None of them were like, ‘I’m going to be the CEO of an AI company.’ Certainly none of them were like, ‘I’m going to be a YouTube influencer.’”

His view: change won’t be as fast as AI industry insiders predict because society has more inertia. But eventually the change will be huge. The skills that work regardless: fluency with AI tools, resilience, adaptability, figuring out what people want.

When asked what profession is most threatened, he picks his own. “I was trained as a software engineer and the way I learned to write software is now effectively completely irrelevant.” Writing C++ by hand is over. But the least vulnerable? Fine art. The price of human-generated art has gone up since AI image generation arrived, because people care about who makes it.

Computing Power: 8 Trillion GPUs

Altman poses a thought experiment: how many GPUs would you like working for you? No one says fewer than one. Some say a thousand. Multiply a thousand by 8 billion people and you get 8 trillion GPUs, a scale we can’t possibly deliver soon. This will be “the most expensive, complex infrastructure project the world has ever collectively taken on.”
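
To make that scale concrete, here is a rough back-of-envelope sketch. The ~1 kW per GPU and ~3 TW of average world electricity generation are ballpark assumptions of mine, not figures from the interview:

```python
# Rough sketch of Altman's GPU thought experiment.
# Assumed (not from the interview): ~1 kW per modern accelerator,
# ~3 TW of average world electricity generation.

GPUS_PER_PERSON = 1_000        # "some say a thousand"
PEOPLE = 8_000_000_000         # roughly 8 billion people
WATTS_PER_GPU = 1_000          # assumption: ~1 kW each
WORLD_ELECTRICITY_W = 3e12     # assumption: ~3 TW average generation

gpus = GPUS_PER_PERSON * PEOPLE
power_w = gpus * WATTS_PER_GPU

print(f"GPUs needed:        {gpus:.1e}")                        # 8.0e+12
print(f"Power to run them:  {power_w / 1e12:,.0f} TW")          # 8,000 TW
print(f"vs. world grid:     {power_w / WORLD_ELECTRICITY_W:,.0f}x")
```

Even if the per-GPU power assumption is off by a factor of a few, the conclusion survives: the gap between that demand and today’s grid is measured in thousands of times, which is why he frames it as a generational infrastructure project.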

Data centers in space? He’s dismissive. Given launch costs versus Earth-based power costs, plus the reality of GPU failures, orbital data centers won’t matter at scale this decade.

The water consumption myth gets a sharp correction: the claim that ChatGPT uses 17 gallons per query is “completely untrue, totally insane, no connection to reality.” Evaporative cooling, he says, is no longer used. Total energy consumption is a real concern, though, and nuclear plus renewables need to be scaled urgently.

On energy efficiency, he reframes the debate: critics count AI’s training energy against a human answering a single question, but “training” a human takes 20 years of food and energy, plus the evolutionary cost of the 100 billion humans who have ever lived. On a per-query basis after training, AI may already be more energy-efficient than humans.
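
A quick sketch of that comparison. The ~2,000 kcal/day of food energy and ~0.3 Wh per AI query are ballpark assumptions of mine, not figures from the interview:

```python
# Back-of-envelope: 20 years of "training" a human vs. per-query AI energy.
# Assumed (not from the interview): ~2,000 kcal/day of food energy,
# ~0.3 Wh per AI query (a commonly cited ballpark).

KCAL_PER_DAY = 2_000
JOULES_PER_KCAL = 4_184
YEARS = 20
WH_PER_QUERY = 0.3

human_training_j = KCAL_PER_DAY * JOULES_PER_KCAL * 365.25 * YEARS
human_training_kwh = human_training_j / 3.6e6                 # ~17,000 kWh

queries = human_training_kwh * 1_000 / WH_PER_QUERY           # kWh -> Wh

print(f"Human 'training' energy: {human_training_kwh:,.0f} kWh")
print(f"Equivalent AI queries:   {queries:,.0f}")             # ~57 million
```

Under these assumptions, the food energy alone, ignoring the evolutionary term entirely, buys on the order of fifty million queries. That asymmetry is the reframing Altman is gesturing at.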

China: Ahead in Some Areas, Behind in Others

Altman resists the narrative of China as a monolithic threat. China is clearly ahead in manufacturing, physical robots, electric motors, magnets, and energy buildout; the US and its allies lead elsewhere. His framing: it’s always been mixed, and it will continue to be.

The imagined fear: a billion humanoid robots marching through streets. The real fear: a new kind of cyber warfare fought over the internet, involving influence operations and critical infrastructure hacking. On surveillance states, he’s pointed: “people are increasingly using fear of AI going wrong to justify a surveillance state” without thinking through the downsides.

Power Distribution: The Central Question

Asked whether AI will concentrate or fragment power, Altman calls it “one of the most important questions in front of us right now.” He sketches two extremes: one company or country holding all AI power, or everyone having unrestricted superintelligence. He favors something “way more towards the democratized version.”

The evidence he points to: one-to-three person startups are now achieving what required much larger companies, enabled by tools like Codex. This was impossible a few years ago.

“I feel increasingly radicalized on this point. I don’t think any other strategy is going to work.”

His message to every head of state, from Xi Jinping to Modi to Trump: democratize the technology. He acknowledges not all will listen.

AGI Is “Pretty Close,” ASI a Few Years Away

In a casual correction, Altman clarifies he said superintelligence, not AGI, is a few years away. AGI? “Pretty close at this point.” His reasoning: systems can already do new research independently, build complex software, act as doctors, lawyers, and scientists. We’ve just gotten used to it.

The faster-than-expected takeoff is also what’s stressing safety researchers. On the resignation of Anthropic’s safety chief: “The part of it I agree with is the inside view at the companies of looking at what’s going to happen, like the world is not prepared. We’re going to have extremely capable models soon. It’s going to be a faster takeoff than I originally thought.”

The Rapid-Fire Revelations

The interview’s game format produces several notable moments:

On Google: Altman admires two things. First, Demis Hassabis and team started working on AI “before anyone else in the modern era” and inspired OpenAI. Second, Google’s ability to catch up on model quality after being “pretty far behind” is impressive.

If he couldn’t use ChatGPT: he’d pick Gemini for looking things up. “For other things, I’d pick other models.”

On the nonprofit-to-profit shift: the reason is straightforward. Democratizing AI and staying at the research frontier both require enormous capital.

Research vs. product company: “Research first. The coolest thing about AI is that the great majority of making a good product is doing good research.”

The criticism that hurts: “He’s just doing it for the power. He doesn’t really care about trying to do something helpful.” On not taking equity in OpenAI: “That was truly one of the dumbest things I’ve ever done.”

On Elon Musk: after a pause, he credits Musk with being “extremely good at physical engineering and also extremely good at getting people to perform incredibly well at their jobs.” Who’s more likely, TSMC losing its monopoly or Musk and Altman becoming friends? “Musk and I becoming friends again is less likely.”

Biggest corporate AI mistake: A company planning to spend 2026 strategizing, 2027 getting ready, and 2028 deploying. “Doing that for AI will be a catastrophic mistake.”

On Greg Brockman: Altman admits being wrong about trying to do too many things, and credits Brockman for teaching him “the importance of extremely narrow deep focus.”

Jony Ive collaboration: new AI-native hardware is coming, and Altman hopes to talk about it late this year. The goal is a device that “participates in your life and is not in the way of it.”

Afterthoughts

This interview works because the rapid-fire format strips away the usual PR buffer. A few things worth sitting with:

  • The FrontierProof result (7/10 unsolved math problems) is a more concrete AGI signal than any benchmark score. If AI can generate new mathematical knowledge, the “just pattern matching” dismissal needs serious updating.
  • Altman’s framing of the energy debate is genuinely useful: compare training costs to the full cost of producing a human, not just a single query. It resets the conversation.
  • His admission about the faster-than-expected takeoff, combined with acknowledging that safety researchers’ anxiety is warranted, is a rare moment of alignment between accelerationist and safety perspectives.
  • The one-person startup thesis is becoming central to his worldview. If the best counterargument to “AI concentrates power” is “anyone can now build what corporations used to,” that’s a bet on distribution at the application layer even as infrastructure concentrates.

“I think I would never ask it how to be happy. I would rather ask a wise person.”
