February 5, 2026 · Podcast · 1h 52min
Amjad Masad: The AI Job Collapse Starts This Year
Amjad Masad is not optimistic in the polite, hand-wavy Silicon Valley way. He’s optimistic in the way someone is when they’ve already seen the future arrive at his company and decided the rest of the world just hasn’t caught up yet. At Replit, they replaced five employees with one. He doesn’t say this with regret. He says it as evidence.
The shape of this conversation
Tom Bilyeu, host of Impact Theory, opens with what he calls “staring nakedly at the things that are concerning.” The result is nearly two hours of philosophical combat between a techno-optimist CEO and a host who genuinely believes 98% of adults never change. They clash on free will, intelligence, government control, and whether AI will liberate or enslave, and arrive at a surprisingly convergent place: the era of the builder is coming, but only for those who choose to build.
What makes this conversation worth unpacking is the tension. Masad isn’t naive about disruption; he just draws fundamentally different conclusions from the same data everyone else is panicking about.
The coordinated fear machine
Masad identifies three distinct groups driving AI fear:
True believers who think superintelligent AI will exterminate humanity. He’s skeptical of this crowd’s reasoning: “They put such a big prize in pure intellectual horsepower. They had these charts where gorilla is here, human is here, AI is at the ceiling.” His counterargument is that consciousness creates a threshold for value that raw intelligence alone doesn’t cross.
Status-seeking communities that formed around AI safety as an identity. “It becomes somewhat of a status thing,” Masad says. The safety community, in his view, shifted from genuine concern to social positioning, with VC money, books, nonprofits, and billionaire funding creating a self-sustaining economic ecosystem.
Big companies weaponizing regulation. This is where Masad is most pointed. He describes the classic regulatory capture playbook: Google supported GDPR because Google can handle that level of complexity; a startup cannot. When OpenAI proposed AI licensing to the government, Masad saw it as pulling up the ladder. The counter-movement came from Marc Andreessen’s “techno-optimist” manifesto and the cultural shift around the Trump election. Masad’s key insight: “You have to make it high status to believe in certain things.”
After OpenAI’s board fired Sam Altman, the EA community’s issues (SBF fraud, cult-like behavior, psychological harm among members) became public, and AI companies began decoupling from the doom narrative.
LLMs are plateauing, but coding agents are exploding
Masad makes a claim that will annoy many in the industry: LLMs are hitting an asymptote. GPT-3 to GPT-4 was the last qualitative leap in general intelligence, and nothing comparable has happened in three years. Progress continues rapidly in coding, science, and domains with binary outcomes (correct/incorrect), because synthetic data can be generated and verified through tests. But general conversational ability has stalled.
He coins the term “Functional AGI”: systems that will look like AGI but are actually a patchwork of specialized intelligences that cannot generalize learning across domains. The problem, as he sees it, is that LLMs are so profitable that fundamental research is underinvested. An entire generation of AI scientists only knows LLMs; a new generation may be needed for genuine breakthroughs.
But in coding, the acceleration is breathtaking. Between October and December 2025, plugging in new models produced a 50% overnight improvement in Replit’s automated test pass rates. And coding agents turned out to be far more general than expected: beyond writing code, they create slide decks (automatically crawling websites and bypassing bot protection), analyze health data, automate marketing, and handle personal tasks. “Coding agents are actually general-purpose automation platforms, not just programming tools.”
A behavioral shift he finds telling: Replit users no longer look at code. The platform used to expose a full IDE for code review; now users skip it entirely because “it’s working.”
Five employees replaced by one
The headline number from this conversation is Replit’s internal transformation. They replaced teams of five with single individuals augmented by AI. But Masad’s framing matters more than the statistic.
He doesn’t describe this as layoffs. He describes it as capability amplification. The one remaining person isn’t doing the work of five in the old sense; they’re operating at a fundamentally different level because AI handles the grunt work.
The displacement pattern he describes is specific: middle-skill knowledge work is most vulnerable. Not the plumber, not the electrician, not the surgeon. The “laptop class,” borrowing Marc Andreessen’s term: consultants, analysts, junior developers, copywriters. “If your work is mostly email and documents, you should be worried.”
Masad explicitly states that mass job displacement will happen in 2026, surprising Bilyeu, who had estimated 3-5 years. His internal case study is an employee named Luca, a “business journalist vibe coder” with no programming background, who roams the company finding inefficiencies and builds software to fix them. He produced an internal HR tool better than any market solution because it was custom-built for Replit’s specific needs.
A counter-intuitive finding: some professional engineers are worse vibe coders than non-engineers, because they instinctively want to inspect code and micromanage the agent. The 10x engineer concept now extends to all roles: the best RevOps person can be 10x their peers by building automations, dashboards, and training tools.
The builder class thesis
Masad’s core argument is that AI doesn’t kill jobs so much as it kills the monopoly on software creation. For decades, if you wanted custom software, you needed engineers. Now you need a prompt.
“We make it so that anyone can make software to improve their business, to improve their lives, for artistic expression.”
He started Replit as a project in 2009, founded the company in 2016, and didn’t achieve financial success until 2024. “Our revenue just shot up when the world headed in our direction.” The bet was always that programming would be democratized. AI just accelerated the timeline by a decade.
Instead of a shrinking economy where AI takes jobs, Masad envisions an expanding economy where millions of new micro-businesses emerge. Replit already sees micro-entrepreneurs earning millions individually on tasks that previously required venture capital and teams. “In the same way that YouTube created a whole generation of creators, AI coding is going to create a whole generation of builders.”
The UBI trap and the meaning crisis
Bilyeu raises Universal Basic Income as a potential solution to displacement. Masad pushes back hard, not on the economics but on the psychology.
The problem with UBI isn’t the money. It’s meaning. Bilyeu frames this through what he calls evolution’s “internal algorithm”: humans are wired to need hard work, skill acquisition, and progress toward an honorable goal, and without those, deep discomfort follows. It’s why, he argues, even billionaires die by suicide: security isn’t the key; earning your own respect is.
Both give Ted Kaczynski’s “power process” theory serious treatment: technology makes life too easy, robbing humans of the natural challenge-achievement cycle, producing depression and pathological social activism. UBI has multiple traps beyond the meaning deficit: inflationary effects, incentivizing gambling and financialization rather than value creation, and inviting organized fraud.
Masad believes Japan’s hikikomori phenomenon will spread globally. Without a social safety net, the alternative is more drug addiction and homelessness.
Bilyeu proposes four future paths for humanity: New Amish (return to nature), Mars colonization (a real-life survival game), Brave New World (pure hedonism), and virtual worlds (his personal choice, since video games effectively tap evolutionary psychology). Masad’s response: this isn’t a utopia vs. dystopia choice. “We have no choice but to move forward.” In his view, Europe’s attempt at a middle path has produced lost competitiveness and GDP stagnation.
His alternative to UBI: make the tools of creation so accessible that the barrier to entrepreneurship disappears. Instead of giving people money to not work, give them the power to create their own work.
The sovereign individual and government control
The conversation takes a political turn when they discuss what happens to power structures in an AI-disrupted world. Masad references “The Sovereign Individual,” a book he calls “fantastic” that predicted Bitcoin and remote work.
His framework: modern welfare states are companies that serve the employee (politicians, bureaucrats), not the customer (citizens). “They actually prefer an underclass. They prefer the rich elite. They prefer to create the chasm between them because the underclass they can rely on for votes.”
He draws a direct line to AI: “The people at the top are going to have access to the most power, most money because they can tax technology. They can legislate their own control.” When governments call for AI safety regulation, he hears the same playbook: more control disguised as protection.
But he’s not a doomer on governance either. He points to American structural advantages: free speech, interstate competition (Texas and Florida actively recruiting tech companies from California), and the self-correcting nature of democratic systems.
He adds an unexpected comparison: China’s free market system is more robust than America’s at this point. China has no welfare state. Their version of wealth distribution is seeding so many companies that competition drives margins to near zero. “There’s probably a lot of fraud, but also there’s tons of really competitive Chinese electric vehicles.” It’s a rare stance for a Silicon Valley founder to take publicly.
Masad also cites Tim Wu’s “The Master Switch”: technology oscillates between centralization and decentralization (ham radio to government regulation to early decentralized internet to social platform centralization to crypto decentralization). AI, he argues, is simultaneously centralizing and decentralizing. Governments can use it for mass surveillance, but individuals can use it for entrepreneurship, information verification, and countering that same surveillance.
Intelligence is overrated
The deepest philosophical clash comes when Bilyeu argues that unevenly distributed intelligence gives elites permanent control over the masses. Masad’s response is unexpectedly personal:
“Being smart is a core of my identity and that’s why I’m here. So it’s not like I don’t believe in intelligence. But over time, I’ve come to realize that there’s wild diminishing returns to it.”
He invokes the “midwit meme”: the people running society aren’t the 150 IQ Einsteins. They’re the 110 IQ consultants, the laptop class. Pure intelligence has never been what determines who holds power.
“I grew up with a lot of kids that were a lot more book smart than me. But one thing I always had a good talent for is knowing where the world is headed technology-wise.” He calls this a different kind of intelligence: intuition about technological direction, the ability to sense shifts before they become obvious.
This extends to his view of ordinary people. “The whole MAGA thing, the whole Trump revolution is because everyday people just realized they’re getting screwed. They just figured out there’s a deep injustice. They may not have the vocabulary to articulate it in an intellectual way, but people are smart.”
Bilyeu pushes on consciousness and free will. He references Robert Sapolsky’s “Determined,” which argues against free will all the way down to quantum collapse. Masad pushes back with Steven Pinker’s “The Blank Slate,” arguing that treating people as programmable blank slates led to the worst human tragedies of the last century. But even here they converge: regardless of whether free will is real, it doesn’t change “the physics of how I live my life.”
Builder ethics and social cost
Near the end, Masad makes a point unusual for a Silicon Valley CEO: there should be a social cost for building harmful AI applications.
“If your friend is building a sex bot, you shouldn’t be affirmed. We should have standards for entrepreneurs.”
He notes that OnlyFans, one of the most profitable companies in the world, wasn’t started in Silicon Valley, and sees this as evidence that the Valley’s social norms, however imperfect, do exert some positive pressure. “It’s incumbent on people building this technology to try to nudge the world in a better direction.” Not through regulation but through culture, social pressure, and choosing what to work on.
Some Thoughts
This is a conversation where the headline (“CEO replacing 5 with 1”) underrepresents the substance. The real value is in the framework Masad offers for thinking about AI disruption, and the honest places where he breaks from Silicon Valley orthodoxy (LLMs plateauing, China’s system being more competitive, the need for social norms around what gets built).
- Fear of AI is being amplified by three distinct groups with their own incentives, and none of them are primarily motivated by public safety.
- Job displacement is real and imminent for knowledge workers, but it’s paired with an unprecedented expansion of who can create software and start businesses. The meaningful divide won’t be between employed and unemployed, but between builders and consumers.
- UBI treats the symptom (no income) while poisoning the cure (meaning through contribution). Accessible tools beat handouts.
- “Functional AGI,” Masad’s term for AI that looks general but is actually a patchwork of specialists, is a useful framework that pours cold water on both the hype and the doom.
- Masad started his project in 2009, founded the company in 2016, and hit financial success in 2024. His main advice: “Don’t quit. The person who’s showing up every day and trying things, that’s a superpower because most people just quit.”