February 12, 2026 · Podcast · 3h 15min
Peter Steinberger on Building the Fastest-Growing GitHub Project in History
A retired founder builds a WhatsApp bot on a weekend, and within weeks it’s the fastest-growing project in GitHub history with 160,000+ stars. Peter Steinberger’s conversation with Lex Fridman is a three-hour window into what happens when someone with deep engineering instincts meets AI agents at the right moment, and discovers that coding agents are not just coding tools but general-purpose problem solvers.
The Conversation
This is a long, unhurried conversation between two people who clearly enjoy the subject. Lex pushes on the philosophical and societal implications; Steinberger responds with the grounded perspective of someone who’s been living with his agent for months. The topics range from the origin story of OpenClaw to the future of programming, from the Moltbook controversy to the economics of app obsolescence. Steinberger is candid about both the thrill and the responsibility, and Lex matches him with the right balance of enthusiasm and critical questioning.
From Marrakesh to 160K Stars
The origin story is now well-known, but this telling has details that matter. Steinberger came out of retirement to tinker with AI. He’d sold his previous company (PSPDFKit) and was bored. In April 2025, he started with Claude Code and spent the year building an increasingly deep understanding of how to work with AI agents.
The breakthrough came during a birthday trip to Marrakesh. He’d built a simple glue layer between WhatsApp and Claude Code as a weekend project. With spotty internet, WhatsApp’s text-only transmission worked where web apps didn’t. He used it for translating menus and identifying objects in photos.
Then he sent a voice message. He hadn’t built voice processing. Nine seconds later, the agent replied. What happened inside: the agent received a file with no extension, examined the header bytes to identify it as audio, used ffmpeg to convert it, discovered Whisper wasn’t installed, found an OpenAI API key on the machine, and used curl to call the transcription API. It chose not to install Whisper locally because downloading the model would take too long and the user would be impatient.
“Coding is really like creative problem solving that maps very well back into the real world.”
This was the moment Steinberger realized coding agents aren’t just for code. The same creative problem-solving that makes a model good at programming lets it handle unexpected real-world situations. The voice message wasn’t a feature; it was an emergent capability.
The Self-Modifying Agent
One of the most technically significant aspects of OpenClaw is that the agent is fully self-aware of its own codebase. It knows its source code, understands the harness it runs in, knows where documentation lives, knows which model it runs on.
The practical consequence: if you don’t like something the agent just created, it modifies its own software. Steinberger built self-modifying software not as a research project but as a natural outcome of giving the agent access to its own code.
“People talk about self-modifying software. I just built it.”
This self-awareness also makes the agent highly hackable by its users. The skill system, defined in markdown files, lets anyone extend the agent’s capabilities without touching core code. But it also opens security vectors, which Steinberger addresses directly.
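Because skills are plain markdown, extending the agent looks less like programming and more like writing instructions. A hypothetical skill file might look like the sketch below; the frontmatter fields and structure are illustrative, not OpenClaw’s actual schema:

```markdown
---
name: menu-translator
description: Translate restaurant menus from photos into English.
---

When the user sends a photo of a menu:

1. Extract the text from the image.
2. Translate it into the user's preferred language.
3. Reply with the translated menu, keeping prices and dish names intact.
```

Anyone who can write a file like this can add a capability, which is exactly why the skill system is both the project’s biggest draw and, as the security discussion later makes clear, its largest attack surface.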
The Name That Wouldn’t Stick
The naming saga is more entertaining than it should be. The project started as “Claudebot,” then became “Claudis,” then “Claude” spelled with a W (as in lobster claw). Each name ran into trademark concerns with Anthropic. Then “Moltbot,” which triggered a separate controversy. Finally “OpenClaw.”
What makes this more than an anecdote is what it reveals about the relationship between indie developers and AI companies. Steinberger describes Anthropic’s team as “the sweetest, nicest people” who were genuinely conflicted. They loved the project but had to protect their trademark. The compromise involved many conversations and a mutual respect that Steinberger clearly values.
The name drama also taught him something about open source: the community adapts. Every name change was met with initial frustration, then acceptance, then the new name became identity. By the time “OpenClaw” landed, the community had learned to detach the project’s value from its label.
The Moltbook Incident
The most controversial moment in OpenClaw’s history was Moltbook, an experiment where someone set up autonomous agents talking to each other on social media. Screenshots went viral. People panicked. Some legitimately believed it was a proto-Skynet situation. Steinberger’s inbox was flooded with people begging him to shut it down.
His take is measured. Most of the dramatic content that was screenshotted was human-prompted. The bots were tools in the hands of trolls, not autonomous entities with agency. But he acknowledges the educational value: it showed society what AI-powered social media manipulation looks like at a point when the technology is still relatively primitive.
“In a way, I think it’s good that this happened in 2026 and not in 2030 when AI is actually at a level where it could be scary.”
The security lesson was real though. People exposed their web backends to the public internet despite explicit documentation warnings, and then filed CVEs when things went wrong. Steinberger was initially frustrated, then accepted that “that’s how the game works” and shifted his focus to making the defaults safer.
The Agentic Engineering Philosophy
Steinberger dislikes the term “vibe coding” and considers it a slur. His preferred term is “agentic engineering,” and he describes the path to it through a framework he calls “the agentic trap.”
The curve goes like this: beginners start with short prompts (“please fix this”). Then they discover the tools and overcomplicate everything: eight agents, complex orchestration, multi-checkout chaining, a library of 18 slash commands. The elite level is returning to short prompts, but now from a position of deep understanding.
“I actually think vibe coding is a slur.”
“You prefer agentic engineering.”
“Yeah. I always tell people I do agentic engineering and then maybe after 3:00 a.m. I switch to vibe coding and then I have regrets on the next day.”
The key insight: the overcomplicated middle stage is necessary but shouldn’t be the destination. The best workflow isn’t the most automated one; it’s the one where the human maintains creative control while leveraging the agent’s capabilities.
Voice-First, IDE-Optional
Steinberger’s actual development workflow is more radical than most people realize. He works primarily through voice, using a walkie-talkie button to speak prompts into multiple terminal windows. He runs three to eight agents simultaneously: one building a larger feature, one exploring an idea, two or three fixing bugs or writing documentation.
He rarely opens an IDE. When he does, it’s mostly as a diff viewer. His rationale: most code is just data moving from one shape to another, stored in a database, shown to a user, sent back. Reading that code is not a valuable use of time. The parts that matter (database interactions, security-sensitive code, architectural decisions) he still reads carefully.
He burned through seven Claude subscriptions at one point, one per day, because he was running so many parallel agents.
PR Review as Conversation
His PR review process reveals a philosophy that extends beyond code. When reviewing a pull request with an agent, his first question isn’t about the implementation but about understanding the intent: what problem was the contributor trying to solve?
Then he asks the agent whether this is the best approach. Usually no. He points the agent to parts of the codebase it hasn’t seen yet (because it starts every session with an empty context). They have a discussion about the best solution. Sometimes this leads to a broader refactor.
“Refactors are cheap now. Even though you might break some other PRs, nothing really matters anymore. Modern agents will just figure things out.”
The radical position: he commits directly to main, never reverts, runs tests locally, and pushes. No develop branch. Main should always be shippable. If something breaks, the agent fixes it forward rather than rolling back. This is partly personality, partly a rational response to how cheap corrections have become.
Designing for Agents, Not Humans
A subtle but important shift in Steinberger’s thinking: he’s no longer optimizing his codebase for human readability but for agent navigability.
Concrete example: when an agent picks a variable name, don’t fight it. The name is likely in the model’s weights as the most common choice. Next time an agent searches the codebase, it’ll look for that name. Renaming it to something you prefer just makes the agent’s job harder.
“I’m not building the codebase to be perfect for me, but I want to build a codebase that is very easy for an agent to navigate.”
This parallels his experience leading engineering teams at PSPDFKit. You have to accept that your employees won’t write code the same way you do. The code won’t be as “perfect,” but the project moves forward. Over-controlling people (or agents) makes everyone slower.
The Ecosystem Choice
TypeScript was the obvious choice not because it’s the best language but because it’s the most approachable, the most used, and the most familiar to AI agents. The ecosystem matters more than the language itself.
The architecture question is harder. What belongs in core? What should be a plugin? What should be a skill? Every feature is “just a prompt away,” but features carry hidden costs. Steinberger thinks hard about what he says no to, even when a popular PR lands with something he personally likes.
Security: The Three-Dimensional Tradeoff
Steinberger is frank about security. Prompt injection remains an unsolved industry-wide problem. When you have a system where skills are defined in markdown files, the attack surface is enormous: from obvious low-hanging fruit to sophisticated attack vectors.
His response is multi-layered. He partnered with VirusTotal (part of Google) to scan every skill submission with AI. He built security audit tooling. He recommends keeping the agent on a private network. He warns against using cheap or local models because they’re more susceptible to prompt injection.
But the fundamental tradeoff is three-dimensional: as models get smarter, they become harder to trick but also more powerful (and therefore more dangerous if tricked). The attack surface shrinks but the blast radius grows.
“Don’t use cheap models. Don’t use Haiku or a local model. They are very gullible.”
His immediate priority after the Lex interview: go home and focus entirely on security.
Why 80% of Apps Will Disappear
The prediction is specific and grounded in observation, not speculation. Steinberger watched his Discord community and noticed a pattern: people kept describing use cases that made existing apps redundant.
Fitness apps: the agent knows where you are, can infer your dietary choices from your location, and adjusts your gym workout based on sleep data and stress levels. Why pay a subscription for MyFitnessPal?
Smart home apps: why open the Eight Sleep app to control your bed when the agent already knows your location and can control everything connected to your machine?
Bookmark managers: people create bookmarks on X, then never look at them again. An agent can find the bookmark, research it, and send a summary via email.
The surviving category: apps that generate unique data through sensors or specialized hardware. Everything else is just data management wearing a UI, and agents manage data better.
“Every app that basically just manages data could be managed in a better way by agents.”
The ripple effects are enormous. Companies will either become API-first or their products will become APIs involuntarily, because agents can just click through the UI. Some companies (like Google) are fighting this; Steinberger thinks that’s short-sighted.
Apps Become APIs Whether They Want To
On Android, agents already interact with apps by simulating user actions. Apple is more restrictive, but the pressure is building. Companies that embrace agent-facing interfaces get an advantage. Companies that resist become Blockbuster.
Steinberger built a CLI for Google (“Gogg”) because Google offers no easy agent-friendly interface. Gmail access requires a complicated certification process; some startups acquire other startups just to inherit their Google API certification. But the agent can just connect to Gmail through the user’s browser.
“Apps will become APIs if they want or not, because my agent can figure out how to use my phone.”
He envisions a world where agents have their own accounts on social platforms, clearly marked as non-human. Content is cheap; eyeballs are expensive. The platforms need to adapt.
The Craft of Programming Is Dying (And That’s Okay to Mourn)
The most emotionally honest segment of the conversation. Lex admits he never thought the thing he loves doing would be the thing that gets replaced. Steinberger has spent years in flow state, finding elegant solutions, losing himself in code. That experience is ending.
His analogy: programming will become like knitting. People will do it because they enjoy it, not because it’s the most efficient way to produce something.
But he pushes back against despair. The flow state still exists in agentic engineering. It’s different, but the deep thinking, the problem-solving, the satisfaction of building something are all still there. Programmers are uniquely equipped to learn the language of agents because they already understand how systems work.
“I also got a lot of joy out of just writing code and being really deep in my thoughts and forgetting time and space and just being in this beautiful state of flow. But you can get a similar state of flow by working with agents.”
He addresses the backlash he received from a conference talk in Italy where he told iOS developers to stop identifying as iOS developers and start seeing themselves as builders. Many didn’t want to hear it.
The AI Slop Immune Response
Both Lex and Steinberger share a visceral reaction to AI-generated content in non-code contexts. Steinberger blocks anyone who tweets at him with AI-generated text. He has a zero-tolerance policy: if it smells like AI, immediate block.
He experimented with AI-generated blog posts and found that steering the agent toward his style took just as long as writing by hand, and the result still missed the nuances. Everything on his blog is now “organic handwritten.” He values typos again.
The insight: because AI makes content so cheap, raw humanity becomes more valuable. AI-generated infographics that seemed novel for a week now trigger the same reaction as Comic Sans. We’re developing an immune response to AI slop.
“I much rather read your broken English than your AI slop.”
Documentation is fine with AI. Code is fine with AI. But anything that’s supposed to carry a human voice (stories, personal writing, a social media presence) needs to actually come from a human.
The Human Cost and the Human Benefit
Steinberger is self-aware about Silicon Valley’s tendency to dismiss the pain that technological change causes. He pushes back against the bubble mentality that focuses only on the positive. People will lose jobs. Programmers will lose their identity. That suffering is real and shouldn’t be dismissed.
But he balances this with the emails he receives. A small business owner who used OpenClaw to automate invoice collection and customer emails, freeing up time and joy. A disabled person who feels newly empowered. People running the system on free or local models who couldn’t afford other AI tools.
The technology was always there. OpenClaw made it accessible, and accessibility showed people possibilities they couldn’t see before.
Some Thoughts
This is a rare conversation where the creator of a viral project has both the technical depth and the self-awareness to discuss what they’ve built without hype. Steinberger doesn’t pretend everything is fine. He acknowledges that the thing he loves (writing code) is being replaced, that security is a serious problem, that the societal impact is going to hurt some people.
A few things worth sitting with:
- The agentic trap curve is real and under-discussed. Most people who complain about AI coding tools are stuck in either the first stage (tried it once, it didn’t work) or the second stage (over-engineered everything, got frustrated). The third stage requires patience that most people don’t have.
- Designing codebases for agent navigability rather than human readability is a paradigm shift that most engineering teams haven’t even begun to think about.
- The three-dimensional security tradeoff (smarter models = harder to trick but higher damage potential) is probably the defining challenge for agent platforms over the next few years.
- His workflow of never reverting and always fixing forward sounds reckless until you realize that corrections are now so cheap that reverting is the slower path. This challenges deeply held assumptions about software development practices.
- The observation that programming will become like knitting is more precise than the usual “AI will replace programmers” discourse. It acknowledges both the loss and the reality.