
January 14, 2026 · Podcast · 51min

Why AI Will Dwarf Every Tech Revolution Before It

#Future of Work · #AI Agents · #Manufacturing · #Robotics · #Enterprise AI

The enterprise is splitting in half

The most revealing number in this conversation doesn’t come from a startup founder or a VC pitch deck. It comes from Bob Sternfels, McKinsey’s Global Managing Partner, describing what’s happening inside his own firm: they’re simultaneously growing client-facing headcount by 25% while shrinking non-client-facing roles by 25%, with the smaller group producing 10% more output. This has never happened in McKinsey’s history. Growth was always synonymous with total headcount growth. Now the equation has split.

Jason Calacanis, hosting a live All-In Podcast panel at CES 2026 with Sternfels and General Catalyst CEO Hemant Taneja, frames this as the defining transformation of our lifetimes. Everything from the PC revolution to cloud computing to mobile will be dwarfed by AI’s impact. But the conversation quickly moves past the hype into the uncomfortable operational reality that enterprises are now navigating.

McKinsey’s 25-squared: a case study in AI bifurcation

Sternfels calls it “25 squared.” On the client-facing side, McKinsey is hiring at an unprecedented 25% rate because the work is changing. Consultants are moving up the stack, tackling more complex problems that AI can’t handle. On the back-office side, AI has already saved 1.5 million hours in search and synthesis. Agents have generated 2.5 million charts in six months. Sternfels wants to get rid of charts entirely.

The firm now has 40,000 human employees and 25,000 personalized AI agents. They expect to reach parity before the end of 2026. Each agent handles a “full 360-degree trusted job function.” Where it works best: structured problem solving, search and synthesis, communication. Where it doesn’t: anywhere requiring human judgment about values, societal norms, or setting the right parameters for what the models should optimize.

“Our model has always been synonymous that growth only occurs with total headcount growth. Now it’s actually splitting.”

This is probably the clearest real-world demonstration of how AI reshapes large organizations: not a simple story of replacement, but a simultaneous expansion at the top and compression at the bottom.
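The "25 squared" arithmetic is worth working through, because the headline rates hide a larger productivity shift. A minimal sketch, using assumed baseline headcounts (only the +25%, -25%, and +10% figures come from the conversation; the 20,000/20,000 split is purely illustrative):

```python
# Illustrative sketch of the "25 squared" split.
# ASSUMPTION: a 20k/20k client-facing vs. back-office split; only the
# rates (+25% headcount, -25% headcount, +10% output) are from the talk.

client_facing = 20_000       # assumed baseline
back_office = 20_000         # assumed baseline
back_office_output = 100.0   # index: 100 = pre-AI output level

client_facing_after = client_facing * 1.25            # grows 25%
back_office_after = back_office * 0.75                # shrinks 25%
back_office_output_after = back_office_output * 1.10  # +10% output

# Total headcount is flat under these assumptions (25,000 + 15,000),
# but the composition has inverted toward client-facing work.
total_after = client_facing_after + back_office_after

# Per-person output in the shrunken group: 10% more output from 25%
# fewer people compounds to roughly +47% productivity per employee.
productivity_gain = (back_office_output_after / back_office_after) / (
    back_office_output / back_office
)
print(int(total_after))            # 40000
print(round(productivity_gain, 2)) # 1.47
```

The point of the sketch: the "10% more output" figure understates the change, because it is delivered by a group that is a quarter smaller.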

The vanishing entry-level pipeline

Taneja describes the startup funding shift: teams that used to need $3 million and 18 months to hire 30 people and show a first product now accomplish the same with a fraction of the resources. But the downstream effect on employment is jarring. Jason paints the picture: ten years ago, every graduate from a decent school juggled offers from Uber, Coinbase, and Google at $150K. Now they send out 100-200 resumes and get nothing.

Sternfels shares a startling data point from the McKinsey Global Institute: the period over which an investment in employee skills pays off has shrunk by roughly half over the last 30 years, from about seven years to 3.6 years. The half-life of skills is collapsing.

But the more dangerous dynamic, both panelists agree, is what Sternfels calls “taking the bottom four rungs off the ladder.” Companies are eliminating entry-level positions to save money, but this destroys the pipeline to leadership. He directly challenges CEOs: “What’s the pathway to your job in the org of the future? You had a pathway. It’s not going to be the same pathway.”

“It’s literally like taking the bottom four rungs off the ladder to save money today. And then everybody’s jumping up trying to get in the organization.”

Three irreplaceable human skills

Sternfels identifies three capabilities that AI models cannot replicate, based on McKinsey’s research with large enterprises:

Aspiration setting. Do you go to low earth orbit, the moon, or Mars? Setting the right ambition level is uniquely human. Models can optimize toward a goal but cannot decide which goal matters.

Judgment. There’s no right and wrong in these models. Someone has to set the parameters, the architecture, based on firm values and societal norms. The eval problem is real, but it’s fundamentally a human responsibility.

True creativity. Models are inference machines predicting the next most likely step. Orthogonal thinking, the kind that creates entirely new categories, remains human territory.

The implication is counterintuitive: if these three skills matter most, then where you went to school matters far less. McKinsey is now exploring whether to evaluate candidates by their GitHub profile rather than their diploma, looking for raw intrinsics rather than credentials.

Education’s 700-year-old architecture is breaking

Taneja delivers the sharpest critique of education: the entire system is built on a model designed 700 years ago around high fixed costs (libraries, professors) that takes students offline for a finite period and then releases them into the workforce. That made sense when skills lasted decades. It doesn’t make sense when the half-life of skills is under four years.

His proposal: shift from four-year college to lifelong college. Your relationship with learning becomes perpetual skilling and reskilling. Some innovative college presidents are already exploring this, he notes, which also happens to be a better business model (lifetime value of a perpetual student versus a four-year customer).

Jason’s advice to young people is blunter: “There’s nobody coming for you. There’s no training program. You have to make that for yourself.” His practical suggestion is to skip the front door entirely. Email the CEO, redesign their landing page, show spec work. His reasoning is brutal but honest:

“Hiring somebody and training them is going to take longer than building an agent. I can build an agent. Young people coming into the workforce I have to train are annoying.”

The skill that actually matters now, both agree, is resilience. The educational system doesn’t build it. Taneja frames the shift as moving from learning to solve problems (which AI now does) to learning to ask the right questions, a return to Socratic dialogue.

Physical AI: self-driving now, robotics next

Jason dubs CES 2026 as “self-driving CES” and predicts CES 2027 will be “robotics CES.” The self-driving race is global: Waymo leads, but Nuro, Lucid, Tesla’s Robotaxi, and Chinese players like BYD, Alibaba, and Pony AI are all competing.

Taneja identifies the structural tension in automotive: the U.S. has the technology innovation (self-driving), but China has the manufacturing cost advantage. European automakers are “very dejected” because they don’t know how to compete with Chinese industry on either front. The only path for the U.S. to win globally is to use AI to solve the manufacturing cost problem itself, so that innovation and production can both happen domestically.

On robotics, Sternfels brings hard numbers: Korea leads in robots per worker at roughly 1:10. Germany and China are tied for second. The U.S. is a distant third. A CEO of a large U.S. contract manufacturer has 50,000 open manufacturing jobs she can’t fill, and demographics aren’t improving. Robotics isn’t optional; it’s the only way to build resilient supply chains at competitive cost points.

But Taneja tempers the enthusiasm with a structural observation: unlike LLMs, which could be deployed through ChatGPT’s cloud API and go viral overnight, robotics models lack equivalent hardware distribution infrastructure. There’s no API for physical deployment. Robotics will be slower than people think.

Tesla’s Optimus: “Nobody will remember they ever made a car”

Jason visited Tesla’s Optimus lab with Elon Musk two Sundays before the panel and found a large number of engineers at work at 10 a.m. on a Sunday. He saw Optimus 3 and makes an extraordinary claim: “Nobody will remember that Tesla ever made a car. They will only remember the Optimus.” Musk’s stated goal is to produce a billion units. Jason predicts a one-to-one ratio of humans to Optimus robots.

“What LLMs are going to enable those products to do is understand the world and then do things in the world that we don’t want to do.”

The tech time capsule

The final segment is a nostalgic tour through CES innovations: the brick cell phone ($4/minute, 30-minute battery), Google Glass (Jason tells a story about telling Larry Page to take them off on a dance floor), BlackBerry (Taneja was told “you were judged in this meeting” for having one in Silicon Valley in 2011), Palm Pilot, Theranos’s miniature blood testing device, and the Sony Discman.

Two observations land with genuine insight. Taneja compares today’s AR glasses to Google Glass: the form factor is better, but the utility still isn’t there. We might be in the same transition phase. Sternfels compares today’s health wearables (Eight Sleep, Oura, Whoop, blood work services) to the Discman: transition technologies that will converge into something fundamentally better. Taneja offers a sharper analogy: LLM hallucinations are like CD skipping. The intelligence is there but unreliable, just as the music was there but kept skipping. The question is whether that changes fundamentally.

Some Thoughts

This panel works because the guests bring operational receipts, not predictions. Sternfels isn’t theorizing about AI replacing jobs; he’s describing the math inside a 40,000-person firm that is simultaneously hiring and firing at equal rates. Taneja isn’t speculating about startups needing fewer people; he’s watching his portfolio companies build agents instead of onboarding junior hires.

  • The “25 squared” framework is the most honest articulation of enterprise AI’s impact: not a clean replacement story, but a bifurcation where the top expands and the bottom compresses
  • The skills half-life data (7 years to 3.6 years) makes the case for lifelong learning more convincingly than any policy paper
  • The ladder problem (eliminating entry-level positions destroys the pipeline to leadership) is the question almost no one in the AI conversation is asking
  • Taneja’s point about robotics lacking an API-equivalent distribution mechanism explains why the physical AI timeline is longer than the software AI timeline, even with better models
  • The Discman-to-hallucinations analogy is unexpectedly sharp: both represent transition technologies where the core value proposition is present but unreliable