February 13, 2026 · Interview · 25min
Is OpenAI Moving Too Fast? Sam Altman's Risky Strategy
The CEO of the most important AI company in the world keeps a stick of uranium in his office, doesn’t own a stake in his own company, and plans to eventually hand management over to an AI. Sam Altman’s contradictions are what make him fascinating, and possibly dangerous.
The Profile Behind the Profile
Forbes deputy editor Giacomo Tognini and senior writer Richard Nieva discuss their cover story on Sam Altman for Forbes’ Greatest American Innovators issue. Nieva conducted two in-person interviews with Altman at OpenAI’s offices, along with conversations with figures like Bob Iger and Paul Graham. What emerges is a portrait of a soft-spoken dealmaker whose low-key Midwestern demeanor belies the enormous power he wields.
The opening scene sets the tone: Altman eagerly presented his personal collection of historical artifacts in chronological order, from a 40,000-year-old Stone Age hand axe to a Bronze Age sword to the GPU chip that trained an early version of ChatGPT. As Nieva put it, “it was kind of like he was at show and tell.” The progression from primitive tools to AI hardware was clearly intentional, a narrative about humanity’s arc of innovation with himself at its latest frontier.
The Shiny Object Problem
The most persistent criticism of Altman, one he himself acknowledged as “well-deserved,” is his lack of focus. The pattern first surfaced at Y Combinator: after joining the legendary first batch with his location-sharing startup Loopt, Altman rose to president of YC. But when OpenAI started as a side project, it consumed him. YC partners noticed his neglect and forced a choice. He chose OpenAI.
That same pattern is now playing out at OpenAI itself. Beyond the core ChatGPT business, the company is pursuing Sora (video generation), a new browser, enterprise packages, and the Jony Ive hardware collaboration. The $1.4 trillion in data center commitments adds another layer of ambition. Nieva noted that “people are worried that they’re doing too much,” a criticism that maps directly onto the YC episode.
Paul Graham’s early impression offers context: upon meeting the young Altman, Graham said “this must be what Bill Gates was like when he was very young.” Whether that comparison extends to Gates’ legendary ability to focus remains an open question.
The $3 Billion Man Without a Stake
Perhaps the most paradoxical detail about Altman: despite leading one of the most valuable technology companies in the world, he owns no equity in it. Forbes estimates his net worth at $3 billion, all from over 400 personal investments, not from OpenAI.
When asked about this, Altman admitted he doesn’t have a good answer, joking that maybe he should take a stake just so he stops getting the question. He acknowledged that the arrangement fuels conspiracy theories. Nieva pointed to the 400-plus investments as another data point for the lack-of-focus criticism.
The contrast with Elon Musk is striking: Musk famously demanded a larger stake in Tesla to stay motivated. Altman runs OpenAI for free.
The Elon Musk Lawsuit
The relationship between Altman and Musk has gone from hero worship to courtroom warfare. Altman helped recruit Musk to invest $38 million in OpenAI’s early days, and described Musk as “a personal hero.” The fallout centers on OpenAI’s 2019 restructuring to add a for-profit arm.
The two sides tell different stories: Musk claims he opposed the restructuring. OpenAI says Musk wanted to run the company, couldn’t, and that’s why he’s angry. The trial is scheduled for April, and Altman tweeted that he’s “very excited to get Elon on the stand and on the record,” calling it “Christmas in April.”
“There is no love lost between them.”
Gradual Release as Safety Strategy
On AI risks, Altman has been “pretty upfront,” acknowledging threats like AI-enabled bioweapons, people forming unhealthy attachments to AI (with wrongful death lawsuits already filed against OpenAI), and job displacement. His response framework is iterative deployment: release AI capabilities gradually, let society adapt in real time, and course-correct based on how people actually use it.
This is also his most criticized position. The approach essentially treats society as a live testing ground, prioritizing reaction over prevention. As Nieva framed it, it’s “rather than sort of thinking about it ahead of time, just sort of see how people react to it.”
The Jony Ive Device
OpenAI’s collaboration with Jony Ive, the designer behind the iPhone, iPod, and Apple Watch, represents Altman’s bet on consumer hardware as a competitive differentiator. While details remain scarce, Altman described the concept as a “friendly little companion” that sits on your desk.
His example was revealing: the device could have observed his office, tracked where his eyes linger, understood his interests, and then proactively suggested which historical artifacts to bring in for the Forbes interview. Future versions might even physically pack and transport objects through robotics.
The ambition is real, but so is the track record of failure: Altman himself backed the Humane AI Pin, which flopped. As Nieva noted, “these things are really hard to just be able to get consumers to actually use.”
The AI Succession Plan
The most striking claim in the entire interview: Altman plans to eventually hand OpenAI’s management to an AI model. Nieva said “he seemed dead serious,” though acknowledged “that could also be hubris, which he has certainly shown in the past.”
This fits OpenAI’s stated vision of building AGI, AI systems as productive as human employees that could eventually run companies. Making OpenAI the first company managed by its own creation would be the ultimate proof of concept. Whether it’s visionary or reckless depends entirely on how much you trust the technology, and the man building it.
Working With Any President
On politics, Altman positioned himself as pragmatic: he’ll work with any president because “it’s the duty for anybody building powerful technology in the US.” He described Trump as easy to work with, contrasting the experience with the Biden administration, under which building out data centers was “very difficult.”
The tension point is internationalism versus nationalism. Altman sees OpenAI as “for the world, for all of humanity,” while Trump’s priority is ensuring America wins. Altman said he understands the president’s position, but the philosophical gap is real.
The Competitive Landscape
OpenAI faces intensifying competition on multiple fronts. Altman famously issued a “code red” over the quality of Google’s Gemini 3. Anthropic is gaining ground in the enterprise market, historically OpenAI’s weak spot as a consumer-first company. OpenAI has responded with a new enterprise package and plans to make enterprise a major focus this year.
The Jony Ive device and the push into hardware represent an attempt to compete on a different axis entirely, creating new form factors for AI interaction rather than fighting over model benchmarks.
Some Thoughts
The Forbes profile reveals a leader whose contradictions are becoming harder to reconcile. A few observations:
- Altman’s “release and react” safety philosophy is essentially the opposite of the precautionary principle, and it’s the foundation of OpenAI’s entire deployment strategy. Whether this is responsible innovation or reckless experimentation may be the defining question of the AI era.
- The no-equity paradox deserves more scrutiny. Running the most valuable AI company without financial upside either signals extraordinary mission alignment or suggests the real value is being accumulated elsewhere, across those 400+ personal investments.
- The succession-to-AI plan, if serious, creates a bizarre incentive: the better OpenAI’s models get, the closer Altman gets to making himself obsolete. It’s either the most selfless vision in tech or the most audacious branding exercise.
- The “shiny object syndrome” criticism at YC repeated itself at OpenAI within just a few years. Pattern recognition suggests this is a feature, not a bug, of how Altman operates.