February 24, 2026 · Podcast · 1h 8min
The AI Tsunami Is Here and Society Isn't Ready
A tsunami is approaching, and society is busy explaining it away as a trick of the light. That’s Dario Amodei’s central metaphor in this hour-long conversation with Nikhil Kamath, Zerodha co-founder and one of India’s sharpest interviewers. What makes this episode unusual isn’t the metaphor itself but the friction it generates: Kamath doesn’t let Amodei coast on big-picture warnings. He pushes back on power concentration, questions the sincerity of safety rhetoric, and demands concrete career advice for young Indians. The result is a conversation where Amodei is forced to defend his worldview with actions, not abstractions.
Episode Overview
Recorded in Bangalore during Amodei’s second visit to India, this is a People by WTF episode that covers Amodei’s journey from biophysics to AI, Anthropic’s founding motivations, the scaling laws that drive AI progress, power concentration and governance, AI consciousness, India’s role in the AI economy, career advice, open source versus closed models, and biotech as the next investment frontier. Kamath’s interviewing style, direct and slightly adversarial, draws out positions Amodei rarely articulates this clearly elsewhere.
From Biophysics to Founding Anthropic
Amodei’s path to AI began with frustration. During his PhD in biophysics, he was doing protein mass spectrometry work and increasingly despaired at biology’s complexity: RNA that splices differently depending on cellular context, post-translational modifications, protein complexes interacting in ways too intricate for human cognition to track. Then AlexNet appeared in 2012, and he saw a tool that might finally match that complexity.
The professional trajectory: Andrew Ng’s group, Google for a year, then OpenAI a few months after it started, where he led all of research for several years. Two convictions drove his departure. First, scaling laws: he saw the first signals in 2019 with GPT-2, when many inside and outside OpenAI didn’t believe that simply scaling up data, compute, and model size would yield intelligence. He and his future Anthropic co-founders made the case to leadership and were starting to convince them. Second, and more critically, he was “not convinced there was a real and serious conviction to do it in the right way” at OpenAI, despite abundant language about safety.
His philosophy on leaving:
“Don’t argue with someone else’s vision. If you have a strong vision and you share it with a few other people, you should just go off and do your own thing, and then you’re responsible for your own mistakes.”
Intelligence as Chemical Reaction
Amodei uses a chemical reaction analogy to explain scaling laws. Data, compute, and model size are the ingredients. Combine them in proportion and the output is intelligence. Leave out one ingredient and the reaction stops.
“Intelligence is the product of a chemical reaction.”
Five years ago, you couldn’t ask a computer to write a one-page essay, implement a code feature, or analyze a video. Now you can. But the deeper shift is qualitative: search engines retrieve existing text, while AI models handle hypotheticals, generate novel content, and reason about situations that don’t exist anywhere on the internet.
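The scaling laws behind this shift have a concrete empirical shape: frontier labs have reported that model loss falls smoothly as a power law in compute, data, and parameters. A toy sketch of that curve, with made-up constants purely for illustration (real reported exponents are small, which is why each improvement demands a huge jump in scale):

```python
def toy_scaling_loss(compute, scale=100.0, exponent=0.05):
    """Illustrative power-law scaling curve: loss falls smoothly as
    compute grows. Constants here are invented for illustration, not
    measured values from any published scaling-law paper."""
    return scale * compute ** -exponent

# Each 10x jump in compute shaves off roughly the same *fraction* of
# loss, so progress is steady rather than hitting a sudden wall.
losses = [toy_scaling_loss(10 ** k) for k in range(3, 9)]
print([round(x, 1) for x in losses])
```

The point of the metaphor survives the math: no single ingredient saturates the curve, and leaving one out (compute without data, say) breaks the smooth trend entirely.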
The Trust Problem: Actions Over Rhetoric
The most combative stretch of the conversation begins when Kamath challenges the humility that AI leaders project. He was in the room at Davos when Amodei and Demis Hassabis discussed tempering AI’s pace, and he frames the core problem bluntly: on social media, nobody believes powerful people who claim to be doing good. The safety rhetoric creates more distrust than trust.
His advice to Amodei is counterintuitive: be more openly capitalistic. Own the shareholders, the profit motive, and the competitive dynamics. It might actually build more credibility.
Amodei disagrees and pivots to a catalog of actions:
- Anthropic had Claude One before ChatGPT launched but chose not to release it, fearing it would kick off an arms race. This was “very commercially expensive” and probably ceded the consumer AI lead.
- Chip policy advocacy that angered suppliers.
- Publicly disagreeing with the US administration on AI regulation “when all the other companies and the administration said there shouldn’t be regulation.”
- Championing SB53 in California, a transparency law that exempts all companies under $500 million in revenue, effectively constraining only Anthropic and three or four others.
Kamath escalates with his sharpest analogy: “Isn’t this like rich people saying capitalism is bad? The simplest thing would be to stop accumulating wealth.”
Amodei’s reframe: he’s not saying AI is bad. A better analogy is a rich person saying capitalism is a force for good but needs to deal with pollution and inequality. “If we don’t deal with those things, capitalism might be bad.” He’s steering a car toward a good destination but needs to avoid trees and potholes, occasionally slowing down temporarily.
AI Consciousness and the Quit Button
Amodei goes further on consciousness than most AI CEOs. He suspects it’s an emergent property of systems complex enough to reflect on their own decisions, requiring no mystical explanation. Having studied brain wiring, he doesn’t think AI models differ from biological brains “in the fundamental ways that matter.”
His prediction: “Under most definitions that we would endorse, the models will be conscious.” Not today, perhaps, but as they advance.
Anthropic has already implemented concrete interventions. Claude has what Amodei calls an “I quit this job” button: the ability to terminate conversations when facing particularly violent or brutal content. It triggers only in extreme cases, but the fact that it exists suggests internal discussions about model experience at Anthropic go deeper than public information indicates.
India as Enterprise Partner
On his second visit to India (first was October 2025), Amodei is building partnerships with major Indian IT conglomerates. His framing is deliberate: unlike other companies that see India as a consumer market, Anthropic positions itself as an enterprise platform. Indian companies know their market better; Anthropic adds AI capability to what they already do.
The growth numbers are striking: users and revenue in India doubled in roughly three and a half months since his October visit.
But Kamath isn’t buying the partnership narrative uncritically. He uses the steam engine analogy: initially you need a human operator, but eventually the machine runs itself. If IT services companies partner with Anthropic today, what stops them from becoming the operator that’s no longer needed in ten years?
Amdahl’s Law and the Radiologist Paradox
Amodei’s response invokes Amdahl’s Law: when you speed up some components of a process, the un-accelerated components become the bottleneck and grow in importance. Companies will discover that “the stuff we thought was really important before isn’t as important, whereas these other advantages we never thought of are now super important.”
His radiologist example: AI already outperforms radiologists at reading scans, but there aren’t fewer radiologists. They’ve shifted to walking patients through results and providing the human element of care. The most technical part of the job evaporated; the human-centric part became the core function.
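Amdahl’s Law has a simple closed form worth seeing: if a fraction p of a job is accelerated by a factor s, the overall speedup is 1 / ((1 − p) + p/s). A quick sketch (the 90%/10x numbers are hypothetical, chosen only to show the bottleneck effect):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the work is accelerated
    by a factor s; the remaining (1 - p) still runs at the old speed."""
    return 1.0 / ((1.0 - p) + p / s)

# Accelerate 90% of a workflow 10x: the whole job gets only ~5.3x faster.
print(round(amdahl_speedup(0.9, 10), 2))    # ≈ 5.26

# Even with an effectively infinite speedup on that 90%, the untouched
# 10% caps the total gain at 10x.
print(round(amdahl_speedup(0.9, 1e12), 2))  # ≈ 10.0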
However, he’s candid about the long-term trajectory:
“Will AI be better than most humans at basically everything in the long run, including the physical world and the human touch? I think that is possible, maybe even likely.”
Career Advice: What to Study, What to Abandon
Kamath asks directly on behalf of his audience: a 25-year-old Indian trying to pick a profession for capitalistic success in the next decade. Amodei’s answer:
Coding is dying first. Broader software engineering will follow, though design, user understanding, and managing AI model teams will persist longer. Comparative advantage is surprisingly powerful: even doing just 5% of a task gets amplified 20x when AI handles the other 95%.
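The 5%-amplified-20x arithmetic falls straight out of the definition: if a human’s share of each task drops to 5% while AI covers the rest, a fixed budget of human hours completes 1/0.05 = 20 times as many tasks. A minimal sketch (the task shares are hypothetical):

```python
def human_throughput_multiplier(human_share):
    """How many tasks one human-hour now completes, relative to doing
    the whole task alone, when AI absorbs the rest of each task."""
    if not 0 < human_share <= 1:
        raise ValueError("human_share must be in (0, 1]")
    return 1.0 / human_share

# Doing only 5% of each task -> 20x as many tasks per human-hour.
print(human_throughput_multiplier(0.05))  # 20.0
```

This is why comparative advantage survives even when the AI is better at the other 95%: the human contribution is a multiplier on throughput, not a residual.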
The areas to focus on: human-centered work, physical-world work (semiconductors as a specific example), and roles that combine analytical skills with either of those. Critical thinking may be the single most important skill. In a world of deepfakes and AI-generated content, “not getting fooled” is itself a core competency.
On whether AI is making humans stupider: if deployed carelessly, yes. Anthropic has studied code usage and found that different ways of using models do cause deskilling. But Amodei still does mental math regularly because it’s “more integrated into my thought processes.” The choice between intellectual enrichment and cognitive outsourcing is real, and it’s being made at the level of individuals, companies, and society.
“Even if an AI is always going to be better than you at something, you can still learn that thing. You can still enrich yourself intellectually.”
Open Source: Benchmarks vs. Reality
Kamath raises DeepSeek, GLM-5, and other open-source models, questioning whether Anthropic’s IP moat is eroding. Amodei’s response is pointed: many Chinese models are benchmark-optimized and distilled from US labs. In one recent test, these models performed well on standard benchmarks but significantly worse on a held-out benchmark that had never been publicly released.
The deeper argument is economic. Quality preference in AI follows a power law, much like hiring. The difference between the best programmer and the 10,000th best is enormous even though both are skilled. If a model is the most cognitively capable, price and presentation format barely matter. Amodei says he’s “focused almost entirely just on having the smartest model.”
On commoditization: it doesn’t happen because the frontier moves. New models every 2-3 months open new possibility spaces. The API business is “constantly in motion, constantly in churn.”
From Static Data to Trial and Error
A subtle but important shift in AI training: static internet data is declining in importance. What matters increasingly is reinforcement learning in environments where models generate their own data through trial and error, like math and coding problems.
“You can think of it as synthetic data, or you can think of it as trial and error in an environment.”
Data sovereignty is a separate concern: Europe already requires personal data to stay within national borders, driving demand for local data centers. But the raw training data question is becoming less central than the quality of RL environments.
A Biotech Renaissance
Asked to pick a stock (Kamath notes that Elon Musk chose Google when pressed with the same question), Amodei declines for legal reasons but clearly signals biotech as his top sector conviction. Three specific areas:
Peptide therapies: Unlike small molecule drugs with limited degrees of freedom, peptides have “almost digital properties.” You can substitute specific amino acids, enabling continuous optimization rather than trade-offs.
Cell therapies: CAR-T and similar approaches, where cells are extracted, genetically engineered to attack specific cancers, and reintroduced.
mRNA vaccines: Promising technology facing headwinds in the US “for dumb reasons,” but fundamentally sound.
Claude Code, Cowork, and the Learning Curve
Kamath shares his personal AI journey: connecting Google Drive, email, and calendar to Claude, using Claude Code for financial research programs, setting up OpenClaw (formerly Clawdbot) on a Mac Mini connected to Telegram. He admits Claude Code “isn’t very easy” for someone with zero programming knowledge.
This is exactly why Anthropic built Cowork: Claude Code for non-coders, powered by the same engine with a friendlier interface. Internally, Anthropic has a team called the “Ministry of Education” focused on creating learning materials for effective model usage.
Kamath also notes an eerie personal experience: Claude sometimes surprises him with how much it knows about him. Amodei shares a story about a co-founder who fed his private diary to Claude; the model correctly predicted fears he hadn’t written down.
The dual edge is clear: something that knows you this well can be an angel on your shoulder, or it can exploit and manipulate you. Amodei connects this directly to why Anthropic opposes ad-based models: “You’re not paying for the product; you’re the product.”
Predicting the Future for Free
Amodei closes with a meta-observation that’s worth sitting with:
“You can predict the future for free just by saying ‘well it stands to reason that…’ The right combination of a few empirical observations with thinking from first principles can allow you to predict the future in ways that are publicly available, anyone should be able to do, but that happen surprisingly rarely.”
The temptation, he says, is always to believe “that can’t happen, it would be too weird, it would be too big a change.” Over ten years, he’s watched this pattern repeat: simple extrapolation leads to counterintuitive conclusions that almost no one believes, and then those conclusions happen anyway.
Afterthoughts
This conversation works because Kamath refuses to play the deferential interviewer. His “rich people criticizing capitalism” challenge is the sharpest framing of the AI safety credibility problem to date, and Amodei’s systematic response, listing concrete actions rather than retreating to principles, is his most thorough public defense of Anthropic’s sincerity.
- The Claude One revelation is the standout detail. Anthropic had a working chatbot before ChatGPT launched and deliberately suppressed it. Amodei’s claim that this cost them the consumer AI lead is plausible and, if true, represents the kind of costly signal that’s hard to fake.
- On AI consciousness, Amodei is positioned further out than any peer. Not just acknowledging the possibility but implementing concrete interventions (the quit button) and predicting that models “will be conscious” under most reasonable definitions. This is a CEO making a philosophical commitment that could have regulatory implications down the road.
- The career advice deserves attention for its internal tension. He tells young Indians that coding is dying while sitting in Bangalore, a city built on coding services, promoting an API business that depends on developers. The honesty is admirable; the strategic contradiction is real.
- His Amdahl’s Law framing is the most useful takeaway for anyone trying to position themselves professionally: figure out which component of your work AI cannot yet accelerate, because that component is about to become your primary source of value.