January 19, 2026 · Podcast · 1h 9min
The AI Opportunity That Goes Beyond Models
AI’s infrastructure layer gets all the headlines, but the a16z Apps team is betting that the real wealth creation happens one layer up. In this investor presentation, four partners lay out a framework for where AI application value concentrates, why this cycle is fundamentally different from prior platform shifts, and how to tell which startups will build durable businesses versus which will get commoditized.
The Fastest Platform Shift in History
Alex Rampell opens by placing AI in the lineage of PC, internet, cloud, and mobile. Each cycle followed the same pattern: infrastructure companies build the substrate, then application companies capture the bulk of the economic value on top. The difference this time is speed. Cloud took roughly a decade to reach $500 billion in enterprise software revenue. Mobile took about seven years. AI is on pace to compress that timeline further, with enterprise AI adoption already generating real revenue, not just pilot projects.
The NASDAQ chart from 1977 to present tells the story. Every major product cycle initially looks like a bubble from the outside, but the companies that survive become generational. The current AI wave, Rampell argues, is no different. The infrastructure players (NVIDIA, the hyperscalers) have already been rewarded. The question is which application companies will emerge as the Salesforce or Shopify of the AI era.
Three Investment Theses
The a16z Apps Fund organizes its AI investment around three distinct categories, each with different competitive dynamics and defensibility profiles.
Thesis 1: Traditional Software Going AI-Native
Every existing software category will be rebuilt with AI at its core. This is not about bolting a chatbot onto legacy software. It is about rethinking the entire workflow from first principles.
Rampell uses the analogy of Expedia versus a travel agent. When the internet arrived, Expedia did not just digitize the travel agent’s phone book. It rebuilt the entire booking experience around what the internet made possible. The same thing is happening now: AI-native companies are not adding features to old paradigms; they are reimagining what the software should do when intelligence is a commodity input.
The competitive dynamic here is nuanced. In some categories, incumbents have massive distribution advantages and will successfully integrate AI. In others, the AI-native startup has a structural edge because the old architecture simply cannot accommodate the new paradigm. The team’s job is to figure out which is which, category by category.
Thesis 2: Software Eating Labor
This is the most transformative and most controversial thesis. For the first time, software can directly replace human labor, not just make humans more productive. The distinction matters enormously for market sizing: if your software replaces a $200K/year employee, your addressable market is measured in labor costs, not software budgets.
“There are 3,000 financial institutions in America that all have a bunch of people that are picking up the phone and making calls. If you can automate 80% of those calls, that’s not a software sale. That’s a labor replacement sale.”
David Haber illustrates this with Salient, a portfolio company doing AI-powered voice calls for financial services. When a borrower is 30 days delinquent on an auto loan, traditionally a human calls them. Salient’s AI handles that call end-to-end, including navigating complex conversations about payment plans, hardship programs, and regulatory requirements. The company is not selling a SaaS seat; it is selling the output of what a human used to do, priced against the labor cost.
The pricing model follows the value: these companies charge per call, per resolution, per outcome rather than per seat. This fundamentally changes the economics of software businesses, potentially creating much larger revenue pools than traditional SaaS.
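The scale of that shift is easiest to see with a back-of-the-envelope comparison. The sketch below contrasts per-seat revenue with outcome-based revenue priced against displaced labor; every number in it is invented for illustration, not taken from the presentation.

```python
# Hypothetical illustration of seat-based vs. outcome-based pricing.
# All figures are invented for this sketch, not from the episode.

def seat_revenue(seats: int, price_per_seat_month: float) -> float:
    """Annual revenue under a traditional per-seat SaaS model."""
    return seats * price_per_seat_month * 12

def outcome_revenue(calls_per_year: int, human_cost_per_call: float,
                    discount: float) -> float:
    """Annual revenue when charging per call against displaced labor cost.

    The vendor charges a fraction (1 - discount) of what the same call
    would cost a human agent to handle.
    """
    return calls_per_year * human_cost_per_call * (1 - discount)

# Selling software to a 50-agent collections team at $100/seat/month...
saas = seat_revenue(seats=50, price_per_seat_month=100)

# ...versus automating the 500,000 calls a year that team handles,
# priced at half an assumed $5 fully loaded human cost per call.
labor = outcome_revenue(calls_per_year=500_000,
                        human_cost_per_call=5.0, discount=0.5)

print(f"per-seat: ${saas:,.0f}   per-outcome: ${labor:,.0f}")
# The same customer, priced against labor, is roughly 20x larger.
```

Even with these made-up inputs, the point survives: pricing against labor costs rather than software budgets changes the revenue pool by an order of magnitude or more.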
Thesis 3: The Walled Garden
The most defensible AI companies will be those that accumulate proprietary data through their workflows, creating a flywheel that competitors cannot replicate.
Anish Acharya lays out the logic: if you are using commodity models (GPT, Claude, Gemini) with commodity data, you have no moat. The moment a better model drops, your product gets commoditized. But if your application generates unique data through customer usage, and that data makes your product measurably better, you have built a walled garden.
“When we first saw a wave of AI companies, many of them were just better prompting on top of the same model, on the same data. That’s a feature, not a company.”
The test is simple: does the company get better the more it is used? If a competitor launched tomorrow with the same model, would they need years of accumulated data to match your performance? If yes, you have a moat. If no, you are a thin wrapper.
Case Study: Eve and Legal AI
Eve represents the “software eating labor” thesis in the legal industry. The company automates legal work that traditionally required junior associates billing $500 to $800 an hour. What makes Eve interesting is not just the AI capability but the go-to-market insight.
David Haber explains that Eve has not needed an outbound sales motion at all, which is unusual for enterprise software. Law firms are approaching Eve because the economics are immediately obvious: if AI can handle document review, contract analysis, and legal research at a fraction of the cost of a first-year associate, the ROI calculation is trivial.
The defensibility comes from the feedback loop. Every legal task Eve handles generates training data about how real law firms work: what judges care about, how opposing counsel argues. This accumulated institutional knowledge becomes harder to replicate over time.
Case Study: Salient and Voice AI
Salient exemplifies the “walled garden” thesis applied to financial services. The company handles outbound voice calls for auto lenders, mortgage servicers, and other financial institutions, managing conversations that require navigating regulatory requirements, payment negotiations, and customer emotions.
What struck the a16z team was the data moat. Every call Salient handles generates labeled data about what works: which phrases lead to payment arrangements, which tones reduce hang-ups, which approaches satisfy compliance requirements. After processing millions of calls, Salient’s models are tuned to a degree that a new entrant using the same base LLM simply cannot match.
The company serves over 60 financial institutions, and its AI agents handle calls that would traditionally require hundreds of human agents. The unit economics are compelling: the cost per call is a fraction of a human agent’s, while resolution rates for routine interactions are comparable or better.
Incumbents vs. Startups: The Real Calculus
The team dedicates significant time to the question investors always ask: “Won’t the incumbent just add AI and crush the startup?”
Rampell’s framework is more nuanced than the usual startup-booster narrative. He breaks it into three scenarios:
Incumbents win when the AI capability is an incremental improvement to an existing workflow and the incumbent has massive distribution. If you are already paying for Salesforce and it adds good-enough AI, you probably will not switch to an AI-native CRM.
Startups win when the AI requires a fundamentally different architecture or user experience. If the old product was built around humans doing work and the new product eliminates that work entirely, the incumbent’s existing codebase and business model become liabilities, not assets.
It is a toss-up when the category is large enough for multiple winners and the AI capability is genuinely transformative but can be integrated into existing workflows. Here, execution and speed matter more than structural advantages.
The team spends considerable diligence on this question for every deal, mapping the competitive landscape and stress-testing whether the startup’s advantage is structural or temporary.
AI Roll-ups: A New Playbook
One emerging pattern the team highlights is “AI roll-ups,” where a company uses AI to consolidate fragmented industries that were previously too labor-intensive to scale.
The logic: many industries have thousands of small players doing similar work (accounting firms, insurance agencies, marketing agencies). Previously, rolling these up required hiring proportionally more people, limiting scalability. With AI handling the core labor, a single platform can serve the combined customer base without linearly scaling headcount.
This creates a new M&A playbook: acquire the customer relationships, then deploy AI to handle the work that the acquired company’s employees used to do. The acquirer’s value is not the technology per se, but the combination of AI capability plus the distribution and customer trust of the acquired entities.
Consumer AI: Not Dead, Just Different
Anish Acharya pushes back on the narrative that AI is purely an enterprise story. Consumer AI applications are real, but the dynamics are different from enterprise.
The key insight: consumer AI products succeed when they create new behaviors rather than just automating old ones. A chatbot that helps you do something you were already doing is incremental. An AI companion, creative tool, or decision-making aid that enables something previously impossible can be category-defining.
The retention data has been encouraging. Unlike the initial wave of consumer AI apps (which saw high churn as the novelty wore off), the current generation of products deeply integrated into user workflows is showing strong retention curves comparable to the best consumer apps.
Model Aggregation: Betting on the Field
On model strategy, the team has a clear perspective: the best AI application companies are model-agnostic. They build on multiple foundation models and switch between them based on cost, performance, and task requirements.
“If you’re building a company that only works on one model, you’re making a bet that that model will always be the best. History suggests that’s a bad bet.”
This has practical implications. Companies that built tight integrations with a single model provider in 2024 are now scrambling to add alternatives. The most successful portfolio companies treat models as interchangeable infrastructure, the way web companies treat cloud providers. They build their differentiation in the application layer, the workflow, the data, and the user experience, not in prompt engineering for a specific model.
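In code, this pattern usually shows up as a thin routing layer that hides vendor-specific SDKs behind a common interface. The sketch below is a minimal illustration of that idea; the model names, prices, and routing heuristic are all placeholders, and a real implementation would wrap each provider’s actual SDK behind the adapter.

```python
# Minimal sketch of a model-agnostic routing layer. Model names and
# prices are placeholders; real adapters would wrap each vendor's SDK.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float   # assumed blended price, illustrative only
    call: Callable[[str], str]  # adapter hiding the vendor-specific API

def route(models: list[Model], budget_sensitive: bool) -> Model:
    """Pick a model per request instead of hard-wiring one vendor.

    Cheap heuristic: budget-sensitive tasks go to the cheapest model;
    everything else goes to the quality default (first in the list).
    A production router would also weigh latency, eval scores, and
    rate limits.
    """
    if budget_sensitive:
        return min(models, key=lambda m: m.cost_per_1k_tokens)
    return models[0]

# Stub adapters stand in for real API calls.
models = [
    Model("frontier-model", 0.015, lambda p: f"[frontier] {p}"),
    Model("cheap-model", 0.001, lambda p: f"[cheap] {p}"),
]

chosen = route(models, budget_sensitive=True)
print(chosen.name)  # routine work lands on the cheaper model
```

Because every model sits behind the same `call` signature, swapping providers or adding a new one is a configuration change rather than a rewrite, which is exactly the flexibility the single-vendor integrations of 2024 lacked.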
Closing Notes
The a16z Apps team’s presentation crystallizes a shift in AI investing sentiment. The infrastructure gold rush is maturing; the application era is beginning. A few things worth sitting with:
- The “software eating labor” framing changes market sizing from billions to trillions. If AI companies price against labor costs rather than software budgets, the TAM for enterprise AI is an order of magnitude larger than traditional SaaS.
- Defensibility in AI applications comes from data flywheels, not model access. Every company has access to the same foundation models. The winners will be those whose products generate proprietary data that compounds over time.
- The incumbent vs. startup question has no universal answer. It depends entirely on whether AI is an incremental improvement or a paradigm shift for a given category. Investors who default to either “incumbents always win” or “startups always win” will get it wrong roughly half the time.
- The speed of this cycle means errors of omission are more costly than errors of commission. Missing the next Salesforce-scale company because you were cautious is worse than making a few bad bets.