
February 5, 2026 · Podcast · 2h 49min

Elon Musk: In 36 Months, the Cheapest Place to Put AI Will Be Space

#Space Data Centers · #Optimus · #TeraFab · #xAI · #DOGE

The bottleneck is always physical. In a nearly three-hour conversation with Dwarkesh Patel and John Coogan, Elon Musk returns to this idea again and again: the future of AI won’t be decided by algorithmic breakthroughs alone, but by who can scale energy, chips, and manufacturing fastest. The conversation ranges from orbital data centers to humanoid robots to government fraud, but the throughline is relentlessly material. Musk is building toward a world where xAI produces its own chips, launches its own compute into orbit, and manufactures its own robots at a scale that makes current industrial output look quaint.

The Case for Space-Based AI

The argument begins with a simple observation: outside of China, global electricity output is essentially flat. Chip production is growing exponentially. The curves will cross. By the end of 2026, Musk predicts, chip output will exceed the ability to power those chips on Earth.

His solution is characteristically extreme: put the data centers in space. Solar energy in orbit is abundant and uninterrupted. On Earth, a roughly 25% solar capacity factor means you need about four terawatts of panels to deliver one terawatt continuously; in orbit there is no weather, no nighttime, no permitting. The cost objection, that GPUs are harder to service in space, is real but secondary: only 10-15% of a data center's total cost of ownership is energy, and most of the rest is the GPUs themselves.
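The panel-sizing arithmetic behind that claim can be sketched directly. This is an illustrative calculation using the episode's numbers; the orbital capacity factor of 1.0 is an idealization (a suitable orbit with continuous sunlight), not a measured figure.

```python
# Panel nameplate capacity needed to deliver a continuous power target.
TARGET_CONTINUOUS_TW = 1.0      # desired continuous power, terawatts
EARTH_CAPACITY_FACTOR = 0.25    # ground solar: night, weather, sun angle
ORBIT_CAPACITY_FACTOR = 1.0     # idealized: continuous sunlight in orbit

panels_on_earth_tw = TARGET_CONTINUOUS_TW / EARTH_CAPACITY_FACTOR
panels_in_orbit_tw = TARGET_CONTINUOUS_TW / ORBIT_CAPACITY_FACTOR

print(panels_on_earth_tw)  # 4.0 TW of nameplate panels on Earth
print(panels_in_orbit_tw)  # 1.0 TW in orbit
```

The 4x gap in required panel capacity is the core of the orbital argument, before counting permitting and grid-interconnect delays on the ground.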

“The availability of energy is the issue. The output of chips is growing pretty much exponentially, but the output of electricity is flat. So how are you going to turn them chips on?”

The latency question is addressable. Low Earth orbit adds about 5 milliseconds round trip, acceptable for inference workloads. For training, the clusters can communicate internally without needing ground connectivity. Musk estimates the total mass per GPU in orbit at around 5 kilograms (including solar panels, radiators, and structure), which, at Starship's projected launch cost of $200-300 per kilogram, makes the economics competitive once Earth-based power becomes the true bottleneck.
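Multiplying those two estimates out shows why launch cost stops being the dominant objection. A minimal sketch using only the figures quoted above:

```python
# Launch cost per GPU, from the episode's mass and price estimates.
MASS_PER_GPU_KG = 5.0            # GPU + share of panels, radiators, structure
COST_PER_KG_LOW = 200.0          # Starship projected $/kg to orbit, low end
COST_PER_KG_HIGH = 300.0         # high end

launch_cost_low = MASS_PER_GPU_KG * COST_PER_KG_LOW    # $1,000 per GPU
launch_cost_high = MASS_PER_GPU_KG * COST_PER_KG_HIGH  # $1,500 per GPU

print(launch_cost_low, launch_cost_high)
```

At roughly $1,000-1,500 of launch cost per GPU, transport is a small fraction of what the GPU itself costs, which is the sense in which the economics become competitive.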

Neural nets are naturally resilient to radiation-induced bit flips; a few random flips in a multi-trillion-parameter model are meaningless. The chips need to run hotter (raising the operating temperature by 20% on the Kelvin scale roughly halves the radiator mass), but the engineering is straightforward. Dojo 3, Tesla's custom chip, is being designed with space deployment in mind.
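The radiator claim follows from the Stefan-Boltzmann law: radiated power scales with the fourth power of absolute temperature, so for a fixed heat load the required radiator area (and hence mass, to first order) scales as 1/T^4. A quick check of the 20% figure:

```python
# Stefan-Boltzmann scaling: P_radiated ∝ A * T^4, so for fixed P,
# required radiator area A ∝ 1 / T^4.
def radiator_area_ratio(temp_scale: float) -> float:
    """Relative radiator area after multiplying absolute temperature by temp_scale."""
    return 1.0 / temp_scale ** 4

ratio = radiator_area_ratio(1.2)  # run chips 20% hotter on the Kelvin scale
print(round(ratio, 3))  # 0.482 -- roughly half the radiator area/mass
```

Since 1.2^4 ≈ 2.07, a 20% temperature increase cuts the radiator requirement almost exactly in half, consistent with the figure quoted in the conversation.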

TeraFab: Millions of Wafers Per Month

Perhaps the most ambitious reveal: Musk intends to build TeraFab, a semiconductor fabrication complex producing millions of wafers per month by 2030. The facility would handle logic, memory, and advanced packaging, vertically integrating the entire chip supply chain.

The math: 100 gigawatts of space-based compute requires roughly 100 million full-reticle chips, each running at about one kilowatt sustained. Current leading-edge chips yield only dozens per wafer. The scale is staggering.
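That chip count converts into wafer demand with one more assumed number. The 60 dies per wafer below is an assumption standing in for the article's "dozens per wafer"; the 100 GW and 1 kW figures are from the text.

```python
# From power budget to wafer demand (illustrative; dies/wafer is assumed).
TOTAL_POWER_W = 100e9       # 100 GW of space-based compute
POWER_PER_CHIP_W = 1_000    # ~1 kW sustained per full-reticle chip
DIES_PER_WAFER = 60         # assumed: "dozens" of full-reticle dies per wafer

chips_needed = TOTAL_POWER_W / POWER_PER_CHIP_W   # 100 million chips
wafers_needed = chips_needed / DIES_PER_WAFER     # ~1.7 million wafers

print(f"{chips_needed:.0f} chips, {wafers_needed:,.0f} wafers")
```

Well over a million wafers for a single 100 GW deployment is why TeraFab's target is stated in millions of wafers per month rather than the hundreds of thousands typical of today's largest fabs.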

Musk has told TSMC, Samsung, and Micron directly: build more fabs faster, and xAI will guarantee to buy the output. But the incumbent fab operators carry decades of boom-bust scar tissue. They’ve seen ten cycles of euphoria followed by near-bankruptcy, and they’re dispositionally reluctant to bet on exponential demand curves.

“I don’t know. We could just flounder in failure, to be clear. Success is not guaranteed.”

The approach mirrors SpaceX’s early days: build a small fab first, make mistakes at a small scale, then scale up. The construction will be visible in real time on X.

Optimus and the Manufacturing Problem

Humanoid robotics is entering its “getting the cost of an automobile down” phase. Musk’s framing: in about five years, Optimus should be a $20,000-25,000 consumer product. The current bottleneck is actuators, the joints and motors that give the robot its physical capabilities.

American manufacturing has a steel problem. The US produces about 80 million tons of steel annually; China produces over a billion. For robotics manufacturing at scale, you need pressed steel parts, and the entire industrial ecosystem for that has migrated to Asia. Musk wants to bring it back, but acknowledges this requires rebuilding capabilities that have atrophied over decades.

The Optimus design philosophy borrows from Tesla’s approach to self-driving: train on vast quantities of human behavior data. Instead of driving a car, the AI drives a humanoid body. Musk hints at a training methodology similar to Tesla’s but declines to elaborate on specifics.

The labor economics are striking. Musk estimates there could be 10 billion humanoid robots within roughly a decade, outnumbering humans. At $20,000-25,000 per unit with a useful life of 15-20 years, the effective labor cost approaches zero, a point where GDP-per-capita as a concept breaks down entirely.
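Amortizing the unit price over the stated lifetime makes the "approaches zero" claim concrete. This sketch assumes near-continuous operation (24/7) and counts only capital cost, ignoring electricity, maintenance, and financing, none of which the episode quantifies:

```python
# Effective hourly labor cost of a robot, capital cost only.
HOURS_PER_YEAR = 24 * 365  # 8,760; assumes near-continuous operation

def hourly_cost(unit_price: float, life_years: float) -> float:
    """Unit price amortized flat over lifetime operating hours."""
    return unit_price / (life_years * HOURS_PER_YEAR)

best = hourly_cost(20_000, 20)   # cheapest unit, longest life: ~$0.11/hr
worst = hourly_cost(25_000, 15)  # priciest unit, shortest life: ~$0.19/hr
print(round(best, 2), round(worst, 2))
```

Even the pessimistic end of the range lands around twenty cents per hour, orders of magnitude below any human wage, which is the sense in which GDP-per-capita stops being a meaningful denominator.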

xAI’s Theory of Victory

Musk’s competitive thesis for xAI is hardware, not algorithms. He argues that AI research ideas diffuse across labs within roughly six months; there’s rarely more than a brief window of algorithmic advantage. The sustainable moat is the ability to scale physical infrastructure faster than competitors.

xAI’s advantage, in Musk’s telling, is that it can leverage SpaceX for orbital deployment, Tesla for manufacturing expertise, and the broader industrial stack no other AI lab possesses. Whichever company can scale hardware the fastest will be the leader, and he believes xAI is best positioned.

On Grok specifically, Musk wants it to be maximally truth-seeking, even when truth is uncomfortable. He draws a distinction between an AI that filters reality for palatability and an AI that provides honest, unbiased analysis. He envisions Grok having a “moral constitution” that prioritizes truth over comfort.

China’s Structural Advantages

Musk is remarkably candid about China’s position. China produces 35-40% of global manufacturing output, more than the US, Japan, and Germany combined. Chinese factories routinely operate 24/7 with minimal friction; American manufacturing culture has become risk-averse and process-heavy.

The semiconductor supply chain concentration in Taiwan represents a serious geopolitical risk. Musk’s TeraFab ambition is partly a hedge against a Taiwan contingency that would be catastrophic for the entire industry.

On Chinese AI labs: Musk acknowledges they’re strong and getting stronger. Export controls on chips have been only partially effective; China’s response has been to accelerate domestic capability.

SpaceX’s Engineering DNA

The conversation reveals how deeply SpaceX’s operational philosophy informs everything Musk does. The Raptor engine development story is instructive: SpaceX went through multiple complete redesigns, each time ruthlessly simplifying. The result is an engine with dramatically fewer parts and higher performance than conventional rocket engines.

The carbon-fiber-to-stainless-steel switch is a case study in first-principles thinking. At cryogenic temperatures (liquid oxygen and methane), steel's strength-to-weight ratio approaches carbon fiber's, while costing roughly 1/50th as much. Musk describes discovering this during a late-night search through materials science references.

His five-step engineering process, applied across all ventures:

  1. Make the requirements less dumb (question every assumption)
  2. Delete the part or process (the most common error is optimizing something that shouldn’t exist)
  3. Simplify or optimize
  4. Accelerate cycle time
  5. Automate (only after the first four)

Most organizations start at step 5 and work backwards, automating processes that shouldn’t exist in the first place.

DOGE and Government Payment Systems

The DOGE section exposes the state of federal payment infrastructure. The main Treasury payment system, PAM (Payment Automation Manager), disburses roughly $5 trillion in payments per year. Musk's team discovered that appropriation codes, which link payments to congressional authorizations, were optional, and the comment field explaining what a payment was for was routinely left blank.

“You have to recalibrate how dumb things are.”

His analogy to PayPal’s fraud management: at PayPal, with high competence and high caring, they managed to drive fraud down to about 1% of payment volume, and that was extremely difficult. The federal government has lower competence, lower caring, and vastly larger scale. The implied fraud rate is substantial.

Musk estimates that simply making appropriation codes mandatory and requiring a non-empty comment field could save $100-200 billion annually, before touching any policy questions. The Department of Defense cannot pass an audit because the information to verify payments literally doesn’t exist in the system.
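The two fixes described are, at bottom, a pair of validation rules. A minimal illustrative sketch, with hypothetical field names (not the actual Treasury schema):

```python
# Illustrative sketch of the two checks described above: mandatory
# appropriation code and a non-empty comment. Field names are hypothetical.
def payment_errors(payment: dict) -> list[str]:
    """Return a list of validation failures for one payment record."""
    errors = []
    if not payment.get("appropriation_code"):
        errors.append("missing appropriation code")
    if not payment.get("comment", "").strip():
        errors.append("empty comment field")
    return errors

print(payment_errors({"appropriation_code": "", "comment": " "}))
# ['missing appropriation code', 'empty comment field']
```

The point of the anecdote is that nothing in the production system enforced even these two checks; payments with blank codes and blank comments went out anyway.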

On government as an entity: Musk frames it as “the biggest corporation with a monopoly on violence,” and finds it a strange inversion that people distrust corporations but trust government, when corporations demonstrably have better accountability mechanisms.

The Government-AI Risk

Asked about the risk of governments using AI and robotics to suppress populations, Musk identifies this as possibly the biggest danger from AI, more serious than corporate misuse. His answer is structural: limited government with constitutional checks between branches is the primary safeguard, not corporate policies.

The tension is unresolved. As the builder of increasingly powerful AI and robotics systems, Musk holds growing leverage over what governments can and cannot do with technology. He acknowledges this but defers to constitutional frameworks rather than committing to specific refusal policies for Optimus or Grok.

“It’s better to err on the side of optimism and be wrong than on the side of pessimism and be right, for quality of life.”

A Few Observations

This conversation is worth attention less for any single revelation than for the coherence of Musk’s industrial vision. The threads connect: space-based compute requires cheap launch (SpaceX), which requires mass manufacturing (TeraFab), which requires humanoid labor (Optimus), which requires massive training compute (xAI), which requires power that Earth may not provide (back to space). It’s a closed loop of ambition.

  • The most contrarian claim: within 36 months, space will be the cheapest place for AI compute. Not because space is cheap, but because Earth will run out of accessible power.
  • The most revealing admission: success is not guaranteed on TeraFab. Building a competitive semiconductor fab from scratch is perhaps the hardest industrial challenge Musk has attempted.
  • The most underappreciated point: AI research ideas diffuse within six months across labs. The sustainable advantage is entirely in physical infrastructure, not algorithms. This reframes the AI race as fundamentally an industrial competition.
  • The limiting factor framework ties everything together: in the 1-year horizon it’s electricity, in the 3-4 year horizon it’s chips, and beyond that it’s total industrial capacity to manufacture and deploy AI systems. Musk is building companies to address all three simultaneously.
Watch original →