February 8, 2026 · Speech · 24min
Sam Altman on the Biggest Capability Overhang in AI History
The gap between what AI can do and what people actually use it for has never been wider. That’s Sam Altman’s core message at the Cisco AI Summit, and it’s a more alarming claim than it first sounds.
The Conversation
Jeetu Patel, Cisco’s Chief Product Officer, opens with a reveal: Cisco was the first design partner for OpenAI’s Codex, and within weeks, 100% of the code in their AI Defense product will be written by Codex. Altman responds by calling Codex his “biggest update on AI in a while,” the first product since ChatGPT that gave him a genuine ChatGPT moment. What follows is a 24-minute fireside chat that covers the shape of AI’s near future, from full AI companies to Von Neumann probes, with a surprising amount of honesty about what’s not working.
The Codex Moment and Full AI Companies
Altman is unusually specific about why Codex matters. It’s not just a coding tool. The models have crossed a threshold, and the interface has finally caught up. When Patel asks about the upper limit, Altman goes straight to the end state:
“The upper limit I think is like full AI companies. The idea that a coding model can create a full complex piece of software but also interact with the rest of the real world to build a company around it is a very big deal.”
This isn’t a five-year prediction. He’s describing a trajectory he can already see from the current capability curve.
OpenClaw and the Future of Computer Use
When Patel raises the Moldbook/OpenClaw phenomenon, Altman draws a sharp distinction. Moldbook might be a fad. OpenClaw is not.
The core insight: code alone is powerful, but code plus generalized computer use is transformative. Altman admits he initially refused to give Codex full control of his computer. That lasted about two hours. He now uses two laptops because he was “persuaded by people that I really shouldn’t have it running like that” on his primary machine, but couldn’t stop using it.
The vision is a Codex-style workflow expanded to all knowledge work: using your computer, browsing the web, editing documents. A future where agents operate in shared spaces, interacting on behalf of people, forming what Altman describes as “a totally new kind of social network where everybody makes an agent or many agents.”
The Three Non-Obvious Constraints
Patel pushes on what’s actually holding AI back beyond the obvious (energy, compute, hardware). Altman identifies three:
Security and data access have no good paradigm yet. The current model of permissions, access control, and data governance wasn’t designed for AI agents that need broad access to be useful. Nobody has solved this.
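One direction people have floated for this problem is replacing all-or-nothing user credentials with narrow, per-agent capability scopes. The sketch below is purely illustrative of that idea; none of the names correspond to a real product’s API, and nothing in the talk suggests this is OpenAI’s design.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an agent carries a token listing exactly which
# resources and actions it may touch, instead of inheriting its user's
# full credentials. All names here are invented for illustration.

@dataclass(frozen=True)
class AgentScope:
    resource: str            # e.g. "email", "calendar"
    actions: frozenset[str]  # e.g. frozenset({"read"})

@dataclass
class AgentToken:
    agent_id: str
    scopes: list[AgentScope] = field(default_factory=list)

    def allows(self, resource: str, action: str) -> bool:
        """Deny by default; permit only explicitly granted actions."""
        return any(
            s.resource == resource and action in s.actions
            for s in self.scopes
        )

token = AgentToken("inbox-triage-agent",
                   [AgentScope("email", frozenset({"read"}))])
print(token.allows("email", "read"))   # True
print(token.allows("email", "send"))   # False
```

The hard part Altman is pointing at is not the mechanism (scoped tokens are well understood) but deciding grants: a useful agent wants broad access, and no one yet has a good paradigm for granting it safely.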
Software needs to be rewritten for dual human-AI use. Altman offers a telling example: he wants an agent to handle Slack for him, but when his agent reads his threads, it marks everything as read and breaks his workflow. Software assumes a single user type. We may need different user accounts, different interfaces, or entirely rebuilt applications that work for both humans and AI.
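Altman’s Slack example can be made concrete: the bug is that read-state is global per account, so any reader mutates it. A minimal sketch of the fix, tracking read-state per principal (human vs. agent), might look like this. All names are illustrative, not any real messaging API.

```python
# Hypothetical sketch: read-state stored per principal, so an agent
# skimming a thread does not clear the human's unread markers.

class Thread:
    def __init__(self, messages):
        self.messages = list(messages)
        self._read_upto = {}  # principal -> index of last message read

    def fetch(self, principal):
        """Return all messages, advancing only this principal's cursor."""
        self._read_upto[principal] = len(self.messages)
        return self.messages

    def unread_count(self, principal):
        return len(self.messages) - self._read_upto.get(principal, 0)

t = Thread(["standup notes", "incident update", "deploy plan"])
t.fetch("agent")                 # agent reads everything on my behalf
print(t.unread_count("human"))   # 3 -- my unread state is untouched
print(t.unread_count("agent"))   # 0
```

The change is small in isolation; Altman’s point is that it has to be made everywhere, across software that was all written assuming one kind of user.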
Hardware and legal systems don’t support always-on AI. The most powerful use case is ambient AI that monitors your meetings, watches your screen, and acts on your behalf. But current hardware isn’t built for this, permissioning systems can’t handle it, and the legal framework for recording a meeting, learning from it, and deleting the recording doesn’t exist.
The Biggest Capability Overhang Ever
Altman’s most striking claim: the gap between what AI can do and what’s being deployed is now bigger than it was right before ChatGPT launched. And it’s growing.
“I used to say like a few months ago maybe I would have said it feels bigger than any time except right when ChatGPT launched. It now feels even bigger than that even though people are using AI for a lot of stuff.”
He’s genuinely surprised by how slow absorption has been. In retrospect he calls himself “naive” for not anticipating it, but the disconnect is real. AI can make scientific discoveries. AI can write full software. AI can do generalized knowledge work. And yet most organizations are still figuring out basic adoption.
His advice to enterprises is blunt: companies that can’t adopt AI co-workers quickly “will be at a huge disadvantage.” This requires accepting risk and organizational change, not just purchasing tools.
Teammate, Not Tool
One of the conversation’s best moments comes from an anecdote about Cisco’s own engineers. For the first two or three months of using Codex, Cisco’s team thought of it as a tool. Then one of their engineers had a realization:
“You folks are thinking about this wrong because you have to think about it like a teammate rather than a tool.”
Altman says Codex is the first AI product that genuinely feels like interacting with a collaborator, not a tool. The lesson extends beyond coding: even with amazing technology, how you package it and how users interact with it determines everything.
Infrastructure: The Demand Curve, Not a Number
On the infrastructure question, Altman frames AI demand like electricity demand: you can’t talk about total market demand as a single number. Demand depends on price and quality levels. Models will get cheaper and more efficient. But every efficiency gain drives more usage.
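The electricity analogy is essentially a claim about price elasticity: if demand for tokens is elastic enough, cheaper tokens mean not just more tokens consumed but more total spend. A toy constant-elasticity curve illustrates the mechanics; the elasticity value is an assumption chosen for illustration, not a number from the talk.

```python
# Toy illustration: constant-elasticity demand for tokens.
# elasticity = -1.5 is an assumed value; any elasticity below -1
# makes total spend rise as the per-token price falls.

def demand(price, elasticity=-1.5, ref_price=1.0, ref_qty=1.0):
    """Quantity demanded scales as price ** elasticity."""
    return ref_qty * (price / ref_price) ** elasticity

for price in (1.0, 0.5, 0.1):
    qty = demand(price)
    print(f"price {price:.2f} -> tokens {qty:5.1f}, spend {price * qty:.2f}")
```

Under these assumed numbers, a 10x price drop yields roughly a 31x jump in tokens consumed and about 3x the total spend, which is the shape of Altman’s argument that efficiency gains drive more usage rather than less infrastructure.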
He’s comfortable with the $5 trillion infrastructure buildout number but hedges: there may be temporary supply gluts along the way. Over decades, the world will need far more tokens, even as each token becomes more efficient. The vision includes devices running powerful models locally off a battery, alongside massive cloud infrastructure.
The Open Source Gap
Patel asks about US open source AI versus China. Altman’s answer is notably candid: “I do worry about that.” When asked what’s stopping OpenAI from doing more, he says simply: “Focus and time.”
His nuance: frontier models matter most and will be accessed via APIs. But open source matters too, especially for a future where personal AI monitors your entire life. For that use case, locally running models under user control aren’t just nice to have:
“I at least would really like that running on inference I control.”
Business Model Evolution
Altman outlines OpenAI’s current and future revenue streams:
- Subscriptions: “many many tens of millions” of paying consumers, a result that surprised even him
- Codex: growing fast, users willing to pay significantly more than ChatGPT subscriptions
- Enterprise: demand for an “AI cloud subscription” combining security, context linking, multi-agent support, and platform capabilities
- Advertising: planned for consumer products, but “we’ll have to be very careful”
- Scientific discovery partnerships: the most speculative but potentially largest. If OpenAI can spend billions on inference to cure a disease, they’d explore royalty-based partnerships with pharmaceutical companies, positioning themselves as investors in discovery rather than just compute providers
The 10x Prediction
Patel pushes for a number on how much models will improve in 2026. Altman is cautious about metrics but commits to a gut feeling:
“I would bet it subjectively feels something like 10x the end of this year.”
And on the further horizon, his imagination maxes out at humanoid robots building data centers, mining materials, constructing power plants, and Von Neumann probes launching. “Beyond that, I got no idea.”
Some Thoughts
This is a compact conversation, but Altman is unusually unguarded in it. A few things worth sitting with:
- The capability overhang claim is genuinely significant. If the gap is larger than pre-ChatGPT even after two years of massive adoption, it means the technology is outrunning institutions at an accelerating rate, not a stable one.
- The “teammate not tool” insight, coming from an engineer on the ground rather than from leadership, is the kind of bottom-up signal that tends to be predictive.
- Altman’s two-laptop setup is a perfect microcosm of the trust problem with AI agents. He knows full computer access is transformative. He also can’t responsibly do it on his primary machine. This tension will define the next phase of AI adoption.
- The most honest moment: when asked if absorption is slower than expected, he doesn’t spin it. “Yes. But I think I was just naive.”