February 8, 2026 · Speech · 43min
Jensen Huang on the AI Factory: Why Your Questions Are More Valuable Than Answers
From Pre-Recorded to Generative: The Once-in-60-Years Reset
Jensen Huang opened his fireside chat at Cisco’s AI Summit with a framing he’s been refining for years, but delivered here with unusual directness (and, by his own count, five glasses of wine): computing is being reinvented for the first time in 60 years. The shift from explicit programming to implicit programming, where you tell a computer your intent rather than writing step-by-step instructions, changes everything about how software is created, deployed, and maintained.
The conversation with Cisco CEO Chuck Robbins was loose, funny, and surprisingly candid. Huang had just flown in from Taiwan via Houston, was visibly hungry, and clearly in a mood to riff rather than present. What emerged was less a polished keynote and more an unguarded tour of how he actually thinks about AI strategy, both for NVIDIA and for the enterprises he wants to convert.
The entire computing stack is being rebuilt: processing, storage, networking, security. NVIDIA’s partnership with Cisco puts AI networking technology into Cisco’s Nexus control plane, combining raw AI performance with enterprise-grade manageability. The new computing platform (Vera Rubin) represents the next generation, but Huang kept pulling the conversation back to what sits on top of the infrastructure: applications.
The Five-Layer Cake and Why Only the Top Matters
Huang laid out a five-layer stack: energy, chips, infrastructure (hardware and software), AI models, and applications. Then he immediately dismissed the bottom four.
“Every single country, every single company, all that layer underneath is just infrastructural stuff. What you need to do is apply the technology. For God’s sakes, apply the technology.”
His punch line for the enterprise audience was characteristically blunt:
“You’re not going to lose your job to AI. You’re going to lose your job to someone who uses AI.”
"Let a Thousand Flowers Bloom, Then Curate”
When asked what enterprises should do first, Huang pushed back hard against the instinct to demand ROI before experimenting.
“Too many companies I hear, they want it explicit. They want it specific. They want demonstrable ROI. Showing the value of something worth doing in the beginning is hard.”
His advice was counterintuitive for risk-averse organizations: say yes first, ask why later. He drew a parenting analogy. When your kids want to try something, you say yes and then ask how come. You don’t demand they prove it will lead to financial success before letting them try. Yet that’s exactly what most companies do with AI.
At NVIDIA, they use Anthropic, Codex, Gemini, and everything else. When a team says they want to try an AI tool, Huang’s default answer is yes. He hasn’t started curating yet; a thousand flowers are still blooming. But he’s crystal clear on what matters most: chip design, software engineering, and system engineering. That’s where NVIDIA concentrates its serious AI investment, partnering with Synopsys, Cadence, Siemens, and Dassault to infuse AI into the design tools that build NVIDIA’s next-generation chips.
The sequencing matters: experiment broadly, but know where your core is. Don’t curate too early (you’ll pick the wrong arrow), but put all your wood behind one arrow when the time comes.
Apply Infinity to Your Hardest Problem
Huang introduced a mental model he called “AI sensibility”: imagine your technology is infinitely fast, has zero mass, operates at the speed of light. Then ask what you’d do differently.
The math behind it: Moore’s Law delivered 2x every 18 months, which compounds to roughly 10x every 5 years and 100x every 10 years. AI computing, by contrast, has advanced a million times in the last 10 years.
“A million times every 10 years. Moore’s law? Goodness gracious, that was slow. That’s like snails.”
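The ratios hold up to a quick back-of-the-envelope check (the million-fold AI figure is Huang's claim, not a derivation; the script just shows what doubling rate it implies):

```python
import math

# Moore's Law: 2x every 18 months, compounded over 5 and 10 years.
moore_5yr = 2 ** (5 * 12 / 18)    # ~10x over 5 years
moore_10yr = 2 ** (10 * 12 / 18)  # ~100x over 10 years

# Huang's claim: ~1,000,000x for AI compute over the same 10 years.
# Solve 2^(120 / d) = 1e6 for the implied doubling period d (in months).
ai_doubling_months = 10 * 12 / math.log2(1_000_000)

print(f"Moore's Law, 5 years:   {moore_5yr:.1f}x")   # ~10.1x
print(f"Moore's Law, 10 years:  {moore_10yr:.1f}x")  # ~101.6x
print(f"Implied AI doubling:    {ai_doubling_months:.1f} months")  # ~6.0
```

In other words, a million-fold gain in a decade is equivalent to doubling roughly every six months, about three times Moore's pace.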
This isn’t just motivational framing. When engineers trained foundation models on “all of the world’s data,” they weren’t merely being ambitious; they were applying abundance thinking. When medical researchers say they’ll tackle “all human suffering” instead of just cancer, that’s the same logic.
His challenge to the audience: if you’re not applying this sensibility to your hardest problems, you’re doing it wrong. And if that doesn’t motivate you, imagine your competitor is thinking this way. Or a startup that’s about to be founded.
Software Is Now Generative, Not Pre-Recorded
Huang made a distinction that has deep infrastructure implications. All past software was “pre-recorded”: algorithms were coded, shipped on CD-ROMs, and user interactions were fundamentally retrieval-based. You touched your phone and it went and retrieved some files, some images, and brought them to you.
Future software is generative. Every context is different, every user is different, every prompt is different. Every single instance of the software is different. This means compute demand explodes, because every interaction requires real-time generation rather than simple retrieval.
Why Software Tools Won’t Die
Huang addressed head-on the narrative that AI will replace enterprise software companies (whose stock prices have been under pressure). He called it “the most illogical thing in the world.”
His thought experiment: if you had artificial general robotics, a humanoid robot that could solve any problem, would it invent a new screwdriver or just use one? Obviously use one. The digital version is the same: an AGI would use ServiceNow, SAP, Cadence, and Synopsys rather than reinvent a calculator.
His reasoning: many problems have precise, principled algorithms. F=MA is not “kind of” MA. V=IR is not “approximately” IR, not “statistically” IR. These explicit tools encode knowledge that doesn’t need to be re-derived. AI’s breakthrough is tool use, not tool replacement.
Physical AI and the 100x TAM
The IT industry has always been in the tool business: screwdrivers and hammers. Physical AI enables the creation of “augmented labor” for the first time.
A self-driving car is a digital chauffeur. The lifetime economics of the digital chauffeur vastly exceed the value of the car itself. The IT industry is roughly a trillion dollars; the global economy is roughly a hundred trillion. For the first time, the tech industry is exposed to the entire economic pie.
But physical AI requires something current language models lack: understanding of causality. A child understands that tipping one domino will cascade through all the others, integrating concepts of contact, gravity, mass, and cause-effect.
“A large language model will have no idea.”
Huang was unusually direct in acknowledging this gap. The domino example is deceptively simple, but it encodes exactly the kind of causal reasoning that statistical pattern matching can’t replicate.
Every Company Should Become a Technology Company
Huang made three observations that double as strategic provocations:
- Disney would rather be Netflix
- Mercedes would rather be Tesla
- Walmart would rather be Amazon
The common thread: technology-first companies deal in electrons, not atoms. Electrons are abundant; atoms are limited by mass. When companies shifted from CD-ROMs to electrons, their value exploded by a thousand times.
The AI era democratizes this transformation. Since coding is now “just typing” and typing is “a commodity,” domain expertise becomes the scarce resource.
“Coding, as it turns out, is just typing. And typing, as it turns out, is a commodity.”
The people who understand customers and problems, not the people who can write code, hold the ultimate value.
Your Questions Are Your Most Valuable IP
One of Huang’s most striking claims came near the end. NVIDIA builds its AI systems on-premises not primarily for data security, but because the most valuable intellectual property isn’t in the answers.
“My questions are the most valuable IP to me. What I’m thinking about are my questions. The answers are a commodity. If I simply knew what to ask.”
He compared it to therapy: you don’t want your questions online. The things you’re uncertain about, the problems you’re trying to figure out, reveal your strategic priorities. Sharing those questions with cloud providers means revealing what you think is important.
His advice: don’t choose between cloud and on-prem. Use both. But build something yourself. Lift the hood, change the oil, understand the components. You might discover you’re good at it. You might discover some things should stay in a small room.
Flip the Loop: AI in Every Employee’s Workflow
Huang’s closing argument inverted a widely accepted principle.
“There was an idea that AI should always have human in the loop. It’s exactly the wrong idea. It’s backwards. Every company should have AI in the loop.”
His reasoning: companies should get better and more knowledgeable every single day. They should never go backwards, never go flat, never start from the beginning. AI in the loop captures institutional experience. Every employee will have AIs working alongside them, and those AIs will become the company’s intellectual property.
This reframes AI from a tool that needs supervision to a memory system that accumulates value.
Afterthoughts
This was Jensen Huang at his most relaxed and least rehearsed, which made it one of his more revealing appearances. A few things worth sitting with:
- The question-as-IP insight is genuinely novel in enterprise AI discourse. Most conversations focus on protecting data or model weights. Huang is saying the strategic reveal isn’t your answers but what you’re asking about. That’s a different security model entirely.
- “Say yes, then ask why” is easy advice to give and terrifying to follow. Most enterprises are structured around justification before action. Huang’s parenting frame makes it emotionally resonant but organizationally radical.
- The tool-use argument against SaaS disruption is logically tight but may be temporally wrong. AGI would use existing tools, sure. But the current generation of AI might be good enough to replace specific tools before AGI arrives. The equilibrium Huang describes may be correct in the long run while being cold comfort for companies losing revenue now.
- Physical AI’s causality gap is underappreciated. The leap from statistical pattern matching to genuine causal understanding remains one of AI’s hardest unsolved problems, and it was striking to hear Huang concede it so plainly.
- “AI in the loop” vs. “human in the loop” is a framing war worth watching. It changes the default from “AI needs oversight” to “humans need augmentation.” The institutional implications are enormous.