January 28, 2026 · Interview · 29 min
Yann LeCun: The LLM Revolution Is Over, Physical AI Is Coming
The LLM era is a stepping stone, not a destination. The next revolution won’t come from scaling language models bigger; it will come from machines that understand the physical world.
The Interview
At the AI Summit in Davos, hosted by Imagination In Action’s John Werner, Yann LeCun laid out a sweeping critique of the current AI paradigm and his vision for what comes next. Fresh from leaving Meta, where he spent 12 years leading AI research, LeCun was characteristically blunt: agentic systems built on LLMs are “a recipe for disaster,” open research is being strangled by corporate secrecy, and the path to human-level intelligence requires a fundamental architectural shift. This wasn’t a farewell tour but a manifesto for his next chapter.
Why LLMs Can’t Get Us to Real Intelligence
LeCun’s core argument is architectural. LLMs predict the next token in a sequence, but they cannot predict the consequences of actions in the real world. Without that capability, genuine planning is impossible.
“How can a system possibly plan a sequence of actions if it can’t predict the consequences of its actions?”
He drew a sharp contrast with human learning: a 17-year-old can learn to drive a car with about 10 hours of practice, while autonomous driving systems have consumed millions of hours of training data and still haven’t reached Level 5. The gap isn’t a matter of scale; it’s a matter of architecture.
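To make the planning argument concrete, here is a minimal sketch of model-based planning, the loop that requires exactly the capability LeCun says LLMs lack. Everything in it (the toy dynamics, the cost function, the random-shooting search) is an illustrative assumption, not anything described in the talk.

```python
import random

def world_model(state, action):
    """Predicts the consequence of an action (toy stand-in for a learned model)."""
    return state + action

def cost(state, goal):
    """Distance between the predicted outcome and the desired one."""
    return abs(goal - state)

def plan(state, goal, horizon=5, candidates=200):
    """Random-shooting search: simulate candidate action sequences, keep the cheapest."""
    best_seq, best_cost = None, float("inf")
    for _ in range(candidates):
        seq = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
        s = state
        for a in seq:
            s = world_model(s, a)  # planning hinges on this prediction step
        c = cost(s, goal)
        if c < best_cost:
            best_seq, best_cost = seq, c
    return best_seq

print(plan(state=0.0, goal=3.0))
```

The point of the sketch is the inner line: without a model that predicts consequences, there is nothing to search over, and the planner collapses.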
The real world, LeCun emphasized, is fundamentally different from the world of language. Sensory data is high-dimensional, continuous, and noisy. Generative architectures that work brilliantly for text simply don’t transfer. This is counterintuitive because humans experience language as the pinnacle of intelligence, but LeCun insists predicting the next word “is not that complicated.” The real challenge is modeling physical reality.
He also pushed back on the term “AGI” itself, not because he doubts machines will surpass human intelligence, but because human intelligence isn’t general. Calling human-level AI “artificial general intelligence” is, in his view, a misnomer.
AMI and the JEPA Architecture
LeCun’s new company, Advanced Machine Intelligence (pronounced “ami,” French for “friend”), is the external continuation of the research program he led at Meta’s FAIR lab. He described the working culture there fondly: he was “the manager of nobody,” and people joined the project voluntarily, bottom-up rather than top-down. “That’s the way research should take place.”
The technical blueprint centers on JEPA (Joint Embedding Predictive Architecture), a non-generative approach that makes predictions in representation space rather than pixel space. The key insight: instead of trying to generate exact future frames of video (which is intractable for real-world complexity), the system learns abstract representations that capture the essential dynamics.
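As a rough illustration of that insight, here is a minimal sketch in which prediction and loss both live in representation space. This is not AMI’s or Meta’s actual code; the module shapes are invented, and the frozen target encoder (an EMA copy in published JEPA variants) is borrowed from those papers as an assumption.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps raw observations (here, flattened frame features) to representations."""
    def __init__(self, obs_dim=1024, rep_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 512), nn.ReLU(), nn.Linear(512, rep_dim)
        )

    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    """Predicts the representation of a masked or future segment from context."""
    def __init__(self, rep_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(rep_dim, 256), nn.ReLU(), nn.Linear(256, rep_dim)
        )

    def forward(self, z):
        return self.net(z)

encoder, predictor = Encoder(), Predictor()
target_encoder = Encoder()  # in published JEPA variants, an EMA copy of encoder

def jepa_loss(context_frames, target_frames):
    """Prediction and loss both live in representation space, never pixels."""
    z_context = encoder(context_frames)
    with torch.no_grad():  # targets flow through the frozen target encoder
        z_target = target_encoder(target_frames)
    z_pred = predictor(z_context)
    return ((z_pred - z_target) ** 2).mean()

# Toy usage: random vectors stand in for encoded video clips.
context, target = torch.randn(8, 1024), torch.randn(8, 1024)
jepa_loss(context, target).backward()  # updates encoder and predictor only
```

Note what is absent: there is no decoder and no pixel reconstruction anywhere, which is what makes the approach non-generative.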
AMI already has working prototypes: systems trained entirely self-supervised on unlabeled video that can:
- Understand and represent video content
- Predict missing parts of a video
- Detect physically impossible events (a ball stopping mid-air or disappearing triggers high prediction error; see the sketch below)
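Reusing the toy encoder and predictor from the sketch above, the third capability reduces to thresholding prediction error; the threshold value and the stand-in inputs are again assumptions for illustration.

```python
def implausibility(context_frames, observed_frames):
    """Prediction error in representation space; it spikes on impossible events."""
    with torch.no_grad():
        z_pred = predictor(encoder(context_frames))
        z_obs = target_encoder(observed_frames)
        return ((z_pred - z_obs) ** 2).mean().item()

THRESHOLD = 0.5  # illustrative; would be calibrated on ordinary held-out video
if implausibility(context, target) > THRESHOLD:
    print("physically implausible event detected")
```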
The ambition extends far beyond video understanding. LeCun wants to generalize JEPA to any modality and any sensor data, enabling “phenomenological models” of complex systems: industrial processes, chemical plants, turbine engines, even living cells.
The Digital Twin Problem
LeCun offered a subtle critique of the digital twin concept: push simulation fidelity too far and the model becomes as unwieldy as the system it mirrors, leaving you unable to predict anything useful. He illustrated this with a thought experiment: you could in principle describe everything happening in a room, down to everyone’s thought processes, in terms of quantum field theory. But that level of detail is completely impractical.
“The way we can understand what’s taking place right now in this room is through psychology, maybe a little bit of science… not at the level of quantum field theory.”
This is why abstract representation matters. Intelligence requires the right level of abstraction to make useful predictions, and generative models, which try to reconstruct raw data, fundamentally miss this point.
Open AI as Infrastructure, Not Charity
LeCun framed AI openness not as idealism but as historical inevitability. His analogy: in the 1990s, internet infrastructure relied on proprietary servers from Sun Microsystems and HP running proprietary operating systems. All of that was “completely wiped out.” The entire internet now runs on Linux, with an open-source stack from low-level protocols to web applications.
“If it’s not open source, it will just not be adopted.”
But his argument went deeper than market dynamics. In a near future where every person’s “entire digital diet will be mediated by AI systems,” concentration of that power among a handful of companies on the US West Coast or in China would be catastrophic for democracy, cultural diversity, and linguistic diversity. Open-source AI is essential for the same reason press diversity is essential.
He proposed a global consortium in which different regions contribute to training a shared open-source model, creating “the repository of all human knowledge.” This is especially urgent for countries outside the US and China, which need access to multilingual and culturally local data that no single private company can provide.
On the current state: his former colleagues at Meta are working on a successor to Llama, but whether it will remain open is “not entirely clear.” Meanwhile, the best open-source models now come from China. LeCun called the trend toward closed research in Western labs “disastrous” for progress.
The Real Risks: Concentration, Not Extinction
When asked about AI risks, LeCun dismissed existential scenarios as “BS, if you pardon my French.” The real danger is concentration of power, specifically that a small number of companies could control the AI systems that mediate all human information.
On economic displacement, he cited the economists Philippe Aghion (a Nobel laureate) and Erik Brynjolfsson (Stanford), who predict AI will improve productivity by roughly 6% per year, which compounds to roughly 80% over a decade. That’s significant but not catastrophic. Mass unemployment is unlikely because the limiting factor is how fast people can learn to use new technology, a “built-in regulatory mechanism.”
On alignment, LeCun argued the entire framing is wrong when applied to LLMs. You can never guarantee an LLM’s behavior because its training data covers only a tiny subset of possible prompts. But this is a problem with the architecture, not with intelligence itself. His proposed “objective-driven AI” systems would be given specific objectives and constrained by guardrails enforced at inference time, a fundamentally different and more controllable approach.
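A minimal sketch of what objectives plus inference-time guardrails could look like, assuming a learned world model as in the planning sketch earlier; the specific cost functions here are invented for illustration, not taken from the talk.

```python
import random

def world_model(state, action):
    """Placeholder for a learned dynamics model, as in the planning sketch."""
    return state + action

def task_cost(next_state, objective):
    """How far the predicted outcome falls from the assigned objective."""
    return abs(objective - next_state)

def guardrail_cost(next_state):
    """A hard constraint checked at inference time on predicted outcomes."""
    return float("inf") if next_state < 0.0 else 0.0  # e.g., an unsafe region

def choose_action(state, objective, candidates=200):
    """Pick the action whose predicted consequence minimizes both costs."""
    best_a, best_c = None, float("inf")
    for _ in range(candidates):
        a = random.uniform(-1.0, 1.0)
        s_next = world_model(state, a)
        c = task_cost(s_next, objective) + guardrail_cost(s_next)
        if c < best_c:
            best_a, best_c = a, c
    return best_a

print(choose_action(state=1.0, objective=3.0))
```

The design point is that the guardrail is a hard constraint evaluated against predicted consequences at decision time, rather than a behavior hoped for from training data, which is why the approach presupposes a world model at all.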
Learn Quantum Mechanics, Not Mobile App Programming
LeCun’s advice to students was characteristically contrarian: if you’re choosing between a course in mobile app programming and quantum mechanics, take quantum mechanics, even if you’re a computer scientist.
His reasoning: technology evolves so quickly that today’s students will inevitably change careers. What endures are fundamentals, the mathematical and conceptual tools with long shelf lives. He pointed out that the underlying mathematics of machine learning largely comes from statistical physics, which is why so many physicists now work in AI. Nobody could have predicted that connection in advance.
The 2035 View
LeCun placed human-level AI within a 10-year horizon but firmly rejected the “next year” timelines of more optimistic colleagues. Progress will come through multiple conceptual breakthroughs, published in “obscure research papers that nobody is going to pay attention to until five years later when someone demonstrates how powerful they are.” That’s exactly how deep learning, transformers, and LLMs each unfolded.
His vision for 2035: AI assistants embedded in smart glasses or other wearables, constantly amplifying human intelligence and helping us make more rational decisions. The relationship between humans and superintelligent systems will mirror that of leaders with their staff.
“Politicians certainly are surrounded by staff of people who are smarter than them, right? Certainly true for professors too, actually.”
Increasing the total amount of intelligence on the planet, LeCun concluded, is “intrinsically good.”
A Few Observations
This is a conversation that covers enormous ground in 30 minutes, from a speaker with a rare combination of deep technical authority and willingness to make bold, specific claims.
- The JEPA vs. generative architecture distinction is the technical heart of LeCun’s worldview. If he’s right that abstract representation (not pixel-level generation) is the key to world models, the current generative AI boom is building on the wrong foundation.
- His analogy between AI platforms and Linux is historically apt but incomplete. Linux won because it was free and good enough. Open-source AI needs to be not just available but competitive with frontier closed models, a much harder bar when training costs are in the billions.
- The “6% productivity per year” framing from Aghion and Brynjolfsson is a useful anchor in a debate dominated by either utopian or apocalyptic extremes. It suggests transformation at the pace of adoption, not the pace of capability.
- LeCun’s departure from Meta and founding of AMI is itself a data point. One of the most credentialed researchers in AI history is betting his next chapter on a paradigm that most of the industry isn’t pursuing. Whether that’s visionary or contrarian will be the defining question of the next decade.