February 5, 2026 · Speech · 37min
Jensen Huang on the $85 Trillion Reindustrialization and Why Every Designer Will Manage a Team of AIs
The largest industrial infrastructure buildout in human history is underway. Not a metaphor, not a hedged forecast: Jensen Huang puts a number on it. $85 trillion over the next decade. And the tools that design, simulate, and validate all of it are about to be completely reinvented.
A Quarter Century, Three Computing Revolutions
Jensen Huang and Pascal Daloz, CEO of Dassault Systèmes, go back more than 25 years. Their collaboration began during the Unix-to-Windows workstation migration, built on OpenGL and a technology called CGFX. OpenGL evolved into RTX; CGFX evolved into CUDA. Now the two companies are announcing the largest partnership they have ever had.
The deal integrates NVIDIA's full computing stack into Dassault's product suite: CUDA-X acceleration libraries, NVIDIA AI for physical and agentic AI, and Omniverse digital twin technologies. All of Dassault's product lines (CATIA, SIMULIA, BIOVIA, DELMIA, ENOVIA) will be built on top of NVIDIA's platform.
The promise: 100x to 1,000x to eventually 1,000,000x the computational scale of what engineers could do before. What used to be offline simulations or pre-rendered visualizations will become real-time operations inside virtual twins.
The Three Factory Stack
Huang frames the current AI buildout as three interlinked industries scaling simultaneously:
Chip factories produce the semiconductors. Packaging factories assemble them. The number of these facilities is growing rapidly as every country races to build domestic capacity.
Computer factories take those chips and assemble supercomputers. What comes out is not a server rack but a purpose-built AI machine.
AI factories house those supercomputers and run the intelligence production lines. A single gigawatt AI factory costs approximately $50 billion. Tens of gigawatts are being built worldwide right now.
NVIDIA itself is the partnership's first customer, using the joint tools to design these AI factories. It uses Dassault's model-based systems engineering to design, plan, and simulate every aspect of a data center before breaking ground, down to the complete bill of materials. The virtual twin runs the network, runs the supercomputers, and validates that everything fits together, all before a single physical component is placed.
“A gigawatt AI factory is about $50 billion dollars. And now we’re building tens of gigawatts around the world. It’s the largest industrial infrastructure buildout in human history.”
From Language Models to World Models
The most technically substantive part of the conversation is Huang’s distinction between language models and world models.
A language model understands syntax, vocabulary, and structure, and has "taste": a sense of what makes a good paragraph. It has guardrails about what to discuss and what to avoid.
A world model is fundamentally different. Instead of taste and values, it must obey the laws of physics. It must understand causality: tip one domino, and all connected dominoes fall. It must grasp inertia, friction, gravity, contact dynamics, all the intuitive physics that engineers rely on but that no amount of language training can capture.
Huang uses a vivid analogy: dogs catch balls out of the air without solving physics equations. They watch, predict, and snatch. AI world models work the same way, learning to predict physical behavior from observation and simulation, not from first principles alone.
“Are dogs able to catch a ball out of the air? And yet they’re not doing physics simulations of balls bouncing or elastic nature of the ball. They’re just literally watching us and predicting where it’s going to go.”
NVIDIA's approach is PhysicsNeMo, a physics-aware AI framework whose models are either trained by principled simulators or run alongside them. The result is grounded in the laws of physics but roughly 10,000 times faster than traditional simulation. And if the traditional simulation already runs in real time, the AI version operates at 10,000 times greater scale.
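The dog-catching-a-ball idea can be made concrete with a toy sketch. Below, a "world model" in miniature: instead of integrating the equations of motion, we fit a model to observed trajectory samples and extrapolate the landing point. Everything here (launch conditions, noise level, the quadratic model) is illustrative and has nothing to do with PhysicsNeMo's actual API; it only shows the predict-from-observation idea.

```python
import numpy as np

# Ground-truth "simulator": projectile motion with assumed launch conditions.
g = 9.81
v0, angle = 20.0, np.deg2rad(45)

# "Observe" only the first second of flight, with sensor noise.
t_obs = np.linspace(0, 1.0, 30)
x_obs = v0 * np.cos(angle) * t_obs
y_obs = v0 * np.sin(angle) * t_obs - 0.5 * g * t_obs**2
y_obs += np.random.default_rng(0).normal(0, 0.05, t_obs.size)

# The "dog's trick": learn y(x) as a quadratic purely from observations,
# never touching the underlying equations of motion.
coeffs = np.polyfit(x_obs, y_obs, deg=2)

# Predict the catch point: the far root of y(x) = 0.
landing_x = max(np.roots(coeffs).real)

# The analytic range, v0^2 * sin(2*angle) / g, is about 40.8 m.
print(f"predicted landing: {landing_x:.2f} m")
```

A learned surrogate like this extrapolates from partial observation, which is exactly what makes physics grounding (training against a principled simulator) necessary at engineering scale.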
Shift Left Everything
A recurring theme: “shift left.” Move validation, compliance, and manufacturability upstream into the design process itself, rather than treating them as downstream checkpoints.
Design for manufacturability gets integrated from day one. Engineers don’t just design shapes; they design behaviors, with crash performance, aerodynamics, and vehicle dynamics validated in real time during the design phase. Lucid Motors is already doing this, embedding crash behavior and aerodynamic simulation upstream in their vehicle programs.
Compliance by design is another shift-left target. NIAR, the National Institute for Aviation Research, deals with 10,000+ requirements for aircraft certification, a process that typically takes 3 to 5 years. With AI companions that can ingest regulations automatically and verify conformity continuously, compliance transforms from a cost center into a competitive advantage.
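The "compliance by design" idea amounts to treating requirements as machine-checkable data that is evaluated continuously against the evolving design, rather than audited once downstream. A minimal sketch, with rule names and thresholds that are purely illustrative and not taken from any actual certification standard:

```python
# Each requirement is a (name, predicate) pair over the design's parameters.
requirements = [
    ("max_takeoff_weight_kg", lambda d: d["weight_kg"] <= 5700),
    ("min_wing_load_factor", lambda d: d["load_factor"] >= 3.8),
    ("fire_resistant_cabin", lambda d: d["cabin_material"] in {"titanium", "treated_al"}),
]

def check_conformity(design):
    """Return the names of all requirements the current design violates."""
    return [name for name, rule in requirements if not rule(design)]

# Run on every design revision, not at the end of a 3-to-5-year process.
design = {"weight_kg": 5400, "load_factor": 3.5, "cabin_material": "treated_al"}
violations = check_conformity(design)
print(violations)
```

The AI companion's job in this picture is the hard part this sketch skips: translating 10,000+ natural-language regulations into predicates like these and keeping them current.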
Software-defined factories extend the principle further. Omron doesn’t use virtual twins just for visualization; they engineer autonomy from day one. The autonomous components are designed alongside the production system, not retrofitted after it’s running. The result: factories that are more flexible, resilient, and adaptive.
Every Designer Gets a Team
Huang’s prediction about AI’s impact on the design workforce directly contradicts the common narrative:
“Whereas most people think that the number of designers therefore will be less than the past… it is exactly the opposite.”
Every designer, every SolidWorks user, will manage a team of AI companions. These companions are trained with different skills, coordinated to work with each other and with the human designer. The human becomes the manager, architect, and creator; the companions execute, explore, and optimize.
The workflow Huang describes: you work with your companions during the day, then before leaving, you assign exploration tasks. “I want you to explore this area, optimize for these parameters, give me three designs. Give me 10 designs for this other area.” When you come back, you choose from the options and fine-tune using structured 3D data.
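The overnight-exploration workflow can be sketched as a design-space search: hand each companion a bounded region and an objective, and review a ranked shortlist in the morning. The parameter names, the scoring function, and the random-search strategy below are all illustrative assumptions, not any Dassault or NVIDIA API:

```python
import random

def explore(bounds, score, n_candidates=200, top_k=10, seed=42):
    """Sample designs uniformly within bounds; return the top_k by score."""
    rng = random.Random(seed)
    designs = [
        {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        for _ in range(n_candidates)
    ]
    return sorted(designs, key=score, reverse=True)[:top_k]

# Toy objective: stiffness-to-weight ratio for a hypothetical bracket.
def stiffness_per_weight(d):
    stiffness = d["thickness_mm"] ** 3 * d["rib_count"]
    weight = d["thickness_mm"] * (1 + 0.1 * d["rib_count"])
    return stiffness / weight

# "Explore this area, optimize for these parameters, give me 10 designs."
bounds = {"thickness_mm": (1.0, 5.0), "rib_count": (1, 8)}
shortlist = explore(bounds, stiffness_per_weight)
print(len(shortlist), "candidate designs ready for morning review")
```

A real companion would swap the random sampler for a physics-grounded surrogate and the toy objective for validated simulation, but the manager-and-shortlist shape of the workflow is the same.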
The implication for the software industry: the number of tool users explodes from purely biological to biological plus AI-based. Every companion uses the design tools. Dassault’s user base doesn’t shrink; it multiplies.
The Biology Frontier
The conversation touches on what Huang considers one of the most impactful engineering areas of the next decade: understanding and generating biological systems.
The first challenge is learning the "language of life": DNA, proteins, cells, and their interactions. AI enables translation between human language and the language of biology, and crucially, enables generation: new proteins for drugs, new chemicals, new materials that are stronger, lighter, more heat-resistant, longer-lasting.
The Bel Group (the company behind Babybel cheese) provides a concrete example. Their mission: produce healthier food, consume less water, and progressively replace dairy proteins with non-dairy alternatives. Previously this required hundreds of physical tests per product. Now they generate protein candidates from virtual twins powered by biological world models, achieving faster innovation with certified decisions.
Knowledge as Moat
The closing thread is about knowledge protection. Huang argues that AI companions will codify each individual’s skills, preferences, habits, and domain expertise. This captured knowledge is personal, not public, not open-sourced.
“If you look at my inbox, in a lot of ways, it has captured 33 years of my knowledge, of my expertise. It’s not available for everybody.”
The AI companion sits with you, accumulates your engineering sensibility, and becomes a personal knowledge repository. This is distinct from cloud-based AI services; it’s individual, proprietary, and increasingly valuable over time.
Some Thoughts
The $85 trillion figure is the headline, but the structural argument underneath it is more interesting. Huang is describing a world where three distinct industries (semiconductors, computer manufacturing, and AI infrastructure) are scaling in lockstep, each dependent on the others. The bottleneck at any layer constrains the whole stack.
A few observations worth sitting with:
- The “shift left” philosophy, when taken to its logical conclusion, means the design tool becomes the certification tool, the compliance tool, and the manufacturing planning tool, all at once. This compresses entire organizational functions into the design phase.
- Huang’s argument that AI will increase, not decrease, the number of tool users is self-serving (NVIDIA sells to tool makers), but structurally sound. If AI companions use the same tools as humans, every productivity gain multiplies seat count rather than reducing it.
- The distinction between language models and world models is critical for understanding where AI in engineering actually adds value. The physics grounding isn’t a nice-to-have; it’s the difference between generating plausible text about engineering and generating validated engineering artifacts.
- NVIDIA being the first customer of its own partnership with Dassault, using their joint tools to design $50 billion AI factories, is a compelling proof point. They’re eating their own cooking at the highest possible stakes.