January 27, 2026 · Speech · 59min

OpenAI Town Hall with Sam Altman

#AI Entrepreneurship · #Software Engineering Shift · #AI Safety · #OpenAI · #Vibe Coding

Sam Altman’s most revealing line in this town hall isn’t a prediction about AGI or a product announcement. It’s a confession: he gave Codex full unsupervised access to his computer within two hours of first using it, and never turned it off. That single behavioral data point tells you more about where we’re headed than any roadmap.

The Town Hall

OpenAI gathered builders, founders, and students for a live Q&A with Sam Altman in late January 2026. The format was open, the audience was technical, and the conversation ranged from Jevons paradox to kindergarten education policy. Altman was unusually candid, admitting mistakes (GPT-5’s writing quality), sharing internal numbers (100x cost reduction timeline), and offering genuinely uncertain answers about biosecurity and AI alignment. Several OpenAI team members also chimed in on agent capabilities.

The session had a clear subtext: OpenAI wants to know what to build for developers, and they’re willing to say “we screwed that up” about recent releases in exchange for honest feedback.

More Engineers, Not Fewer

The first question hit the Jevons paradox head-on: if AI makes code dramatically cheaper, does that kill software engineering jobs?

Altman’s answer was firm. What it means to be an engineer will “super change,” but far more people will be creating far more value. The historical pattern is consistent: every time engineering tools got better, more people joined and became productive. Demand for software isn’t slowing down at all.

The more interesting prediction: most of us will use software written for one person or a very small number of people, constantly customizing our own tools. If you count that as software engineering, a greater percentage of world GDP will be created and consumed this way.

“By the end of this year, for a hundred or a thousand dollars of inference and a good idea, you will be able to create a piece of software that would have taken teams of people a year to do.”

GTM Is the Real Bottleneck

Building got easy. Distribution didn’t. Altman, drawing on his Y Combinator years, noted that founders always assumed the hard part would be building the product, when in fact the hard part was getting anyone to care.

The delta is now even more pronounced. AI can automate sales and marketing to some degree, but human attention remains the scarce commodity. Even in a world of radical abundance, you’re still competing for people’s busy schedules.

His framework: all the old rules of building a successful startup still apply. GTM, sticky value, competitive advantage, network effects. None of that has changed. The fact that AI makes creation easier doesn’t make the rest easier.

The Agent Interface Is Wide Open

A solo developer building multi-agent orchestration asked Altman whether OpenAI would eat his market. Altman’s response was notably humble: “We don’t know what the right interface for all of this is going to be.”

The spectrum is wide. Some people want 30 screens of complex agent dashboards. Others want a calm voice conversation where they prompt the computer once per hour. The overhang of model capability relative to what people can actually extract from them is “huge and growing.” Someone will build the tool that bridges that gap, and OpenAI hasn’t figured it out either.

“We Just Screwed That Up”

Asked about GPT-5’s regression in writing quality compared to 4.5, Altman was blunt: “I think we just screwed that up.” OpenAI put most effort into 5.2’s intelligence, reasoning, and coding at the expense of writing. Limited bandwidth, hard tradeoffs.

His belief: the future is general-purpose models that excel across all dimensions. Intelligence is “a surprisingly fungible thing.” Good writing in the sense of clear thought, not beautiful prose, should be achievable alongside good coding. They plan to catch up on writing quality in future 5.x releases.

Intelligence Too Cheap to Meter

An always-on AI company asked about cost trajectories. Altman’s prediction: GPT-5.2-level intelligence will be available at “at least 100x less” cost by end of 2027.

But he flagged a dimension that’s been under-discussed: speed versus cost. Many users now want the same output at 100x the speed, even at a higher price. These are very different engineering problems, and OpenAI hasn’t invested as much in the speed axis yet.

Software Written Just for You

Altman described his own shift in how he uses computers: he no longer thinks of software as static. If he has a small problem, he expects the computer to write code and solve it immediately.

He doesn’t think every document editor will be rewritten on the fly, because we value familiar interfaces. But he does expect software to be increasingly customized for individual users. Internally at OpenAI, everyone using Codex has their own custom workflows, using things in completely different ways.

The broader prediction: our tools will constantly evolve and converge just for us. This seems “guaranteed” to happen.

The GPT-6 Test for Startups

Altman’s framework for startup durability: ask yourself whether your company would be happy or sad if GPT-6 is a wildly impressive update. Build things where you’re hoping the model gets wildly better. There are plenty of such opportunities, and they’re more sustainable than building patches around current model limitations.

Scientists and “Unlimited Postdocs”

Altman shared that with a special internal version of 5.2, OpenAI is hearing from scientists for the first time that “the scientific progress of these models is no longer super trivial.”

The best researchers using AI don’t run fully autonomous loops. They do something different from the model: providing intuition, judgment, and creative direction. Altman compared it to the chess period after Deep Blue beat Kasparov, where human-plus-AI briefly outperformed AI alone before AI pulled ahead entirely. He suspects a similar trajectory for scientific research.

The emerging workflow: researchers use AI as “unlimited postdocs” to do breadth-first search across 20 new problems simultaneously, rather than going deep on any single one. The human’s role is choosing which directions feel right.

“A model that can come up with new scientific insights is not also incapable, with a different harness and trained a little bit differently, of coming up with new insights about products to build.”

The Paul Graham Bot

When a GTM consultant pointed out that many AI products simply aren’t worth users’ attention, Altman pivoted to an interesting idea: tools that improve the quality of ideas, not just the speed of execution.

He described three or four people in his life who consistently generate good ideas through questioning and seeding. Paul Graham is “off the charts amazing at this.” If AI could replicate that brainstorming partnership, even if you reject 95 out of 100 suggestions, it would be “a very significant contribution to the amount of good stuff that gets built in the world.”

Biosecurity: From Blocking to Resilience

A Stanford biosecurity researcher got Altman’s most candid risk assessment. The models are “quite good at bio,” and the current strategy of restricting access and using classifiers to prevent pathogen design “isn’t going to work for much longer.”

The needed shift: from blocking to resilience, analogous to fire safety history. Co-founder Wojciech Zaremba’s analogy: fire brought enormous benefits, then started burning down cities. The word “curfew” literally comes from when cities banned fires. Then humanity got fire codes and flame-resistant materials instead.

Altman’s stark warning: “If something goes really wrong, like visibly really wrong for AI this year, I think bio would be a reasonable bet for what that could be.”

Sleepwalking Into Trust

Asked about the most underestimated failure mode for production agents, Altman told his Codex confession story. He was “so confident” he’d never give an agent unsupervised access. Two hours later, he’d turned on full access and never turned it off.

The pattern: the power and delight of these tools is so great that people get pulled into not thinking enough about security and complexity. Capability rises steeply, we decide we trust models at a certain level, and without building adequate security infrastructure, “we will sleepwalk into something.”

“The general worry I have is that capability is going to rise very steeply. We’re going to get used to how the models work at a certain level and decide we trust them, and without building very good big-picture security infrastructure around it, we will sleepwalk into something.”

Companies Must Adopt or Die

On hiring: OpenAI is “planning to dramatically slow down how quickly we grow” because they expect to do much more with fewer people. They won’t hire aggressively then have to do layoffs when AI takes over tasks.

But Altman made a more provocative point about the broader economy: if companies don’t adopt AI aggressively, they will eventually be outcompeted by “a fully AI company” that’s just a rack of GPUs and no people. He acknowledged this sounds self-serving coming from OpenAI, but called it genuinely important for societal stability.

The new engineering interview at OpenAI: sit someone down with a task that would have been impossible for one person in two weeks a year ago, and watch them do it in 10 to 20 minutes.

Education: Nuance Over Hype

Altman compared AI in education to Google in classrooms. His middle school teachers tried to make kids promise not to use Google, fearing it would make history class irrelevant. He thought that was insane then and feels the same about banning AI tools now.

But he drew a sharp line at young children. He’s “a fan of keeping computers out of kindergarten.” Kindergarteners should be running around outside and playing with physical things. His suspicion: technology’s impact on young children has been “even much worse” than social media’s impact on teenagers, and it’s still under-discussed.

For college-age builders: “If you are an AI builder, it is probably not the best use of your time to be in university right now.” His own parents took 10 years to stop asking when he’d go back.

The Human Premium in Creative Work

On AI-generated art, Altman cited a fascinating behavioral pattern: consumers report dramatically higher appreciation for images when told a human made them. Even people who claim they can spot AI art consistently rank AI images at the top in blind tests, then reverse their preference once told.

His takeaway: we care deeply about other people and very little about machines. Even a little bit of human direction seems to eliminate the negative reaction. This will be “a deep and durable trend” over the coming decades. He compared it to his own experience finishing a novel: the first thing he wants to do is look up the author and understand their life. If he learned a great novel was written by AI, he’d feel “kind of sad and crestfallen.”

Memory, Privacy, and the Whole Computer

Altman’s personal evolution on AI memory: he’s now ready for ChatGPT to “just look at my whole computer and my whole internet and just know everything.” The utility is too high to resist.

But he isn’t ready for always-on recording glasses, and he doesn’t want to manually categorize work versus personal memories. The ideal is for AI to have such a deep understanding of the complex rules and hierarchy of your life that it knows what to expose where, automatically.

A developer requested “sign in with my ChatGPT account” for third-party apps. Altman confirmed they’re building it. Token budgets and model access portability are coming first. Deeper information sharing (letting apps access your ChatGPT memories) is “very scary” because models aren’t yet good enough at understanding social nuances around when to share what.

Most Important Skill: High Agency

A Vietnamese high school student asked what skill matters most in the age of AI. Altman’s answer was entirely about soft skills: become high agency, get good at generating ideas, be resilient, be adaptable. None of these are “go learn to program,” which used to be the obvious answer.

He added a surprising observation from his YC days: these qualities are all learnable. A three-month boot camp can make people “extremely formidable” across all these axes. That was a big update to his worldview.

Some Thoughts

This town hall was notable less for product announcements and more for the candor about uncertainty. Altman admitted to screwing up GPT-5’s writing, confessed to immediately abandoning his own security protocols, and gave genuinely unsure answers about scientific autonomy and biosecurity timelines.

  • The Jevons paradox framing may be comforting, but Altman’s own company is slowing hiring. The argument that “more people will do engineering” is different from “current engineers keep their current jobs.”
  • The biosecurity answer was genuinely alarming. “Blocking won’t work much longer” plus “bio would be a reasonable bet for what goes wrong this year” is a specific, near-term risk assessment from someone with inside knowledge of model capabilities.
  • The sleepwalking-into-trust pattern Altman described about himself applies to the entire industry. The failure mode isn’t a dramatic AI rebellion; it’s the gradual, convenience-driven erosion of safeguards that nobody notices until something breaks.
  • “Intelligence too cheap to meter” at 100x cost reduction by 2027 implies a world where the bottleneck shifts entirely from computation to taste, judgment, and knowing what to ask for.