
February 1, 2026 · Interview · 38min

How OpenClaw's Creator Uses AI to Run His Life in 40 Minutes

AI Agent · Personal AI Assistant · Vibe Coding · OpenClaw · Software Disruption

OpenClaw started as a one-hour hack connecting WhatsApp to Claude Code. Now it’s 300,000 lines of code, runs on every major messaging platform, and its creator uses it to check into flights, control his bed temperature, watch his security cameras, and fix GitHub bugs from a birthday trip in Morocco. The thesis is simple and radical: give an AI access to your computer and it can do anything you can do. Most of your apps are just middlemen.

From Retirement Project to Life Operating System

Peter Steinberger came back from retirement expecting the big AI labs to build what seemed obvious: a way to monitor and interact with your coding agents from your phone. When you’re vibe coding, agents might run for thirty minutes while you eat, or stop after two minutes with a question, and there was no way to check in without sitting at your computer. By November, nobody had built it. So he spent one hour connecting WhatsApp to Claude Code: send a message, and it invokes the binary with your prompt and returns the result.

Then it took on a life of its own. On a birthday trip in Morocco, he found himself using it constantly: asking for restaurant recommendations, getting directions, and in one memorable moment, receiving a screenshot of a bug tweet via WhatsApp. The bot read the tweet, understood there was a bug, checked out the git repository, fixed it, committed the code, and replied to the person on Twitter that it was fixed. All while Steinberger was at dinner.

The voice message incident crystallized the potential. He sent a voice message without having built voice support. The bot showed a typing indicator, then replied normally. When asked how, it explained: it saw a file with no extension, checked the header, identified it as an audio format, found ffmpeg on his computer, converted it to WAV, looked for whisper.cpp but didn’t find it, discovered an OpenAI API key, used curl to send it to OpenAI’s transcription API, got the text back, and replied.

“Those things are so resourceful, although in a scary way.”

That was the moment it clicked: this is much more interesting than using ChatGPT on the web. It’s “unshackled ChatGPT.” Most people don’t realize that AI coding tools like Claude Code aren’t just good for programming; they are resourceful for any kind of problem. You just have to give them tools.

The CLI Army

Steinberger responded to this realization by building a CLI army, because agents are best at calling command-line tools (that’s what they train for):

  • Google Places API for location queries and restaurant recommendations
  • Meme and GIF lookup so the bot can reply with relevant memes
  • Food delivery tracker (reverse-engineered the local delivery platform’s API)
  • Eight Sleep API (also reverse-engineered) to control bed temperature
  • Philips Hue for light control
  • Sonos for speakers (gradually increasing volume as a morning alarm)
  • KNX home automation for full apartment control (it could literally lock him out of his house)
  • Security cameras for monitoring (it watched all night and flagged his couch as a “stranger” because the blurry image looked like someone sitting there)
  • Email, calendar, file system access
  • 1Password vault (a separate vault for the AI, maintaining security boundaries)

The flight check-in story captures the full arc. He told OpenClaw to check in to his British Airways flight. The first attempt, still in Morocco with a rough integration, took 20 minutes. The AI navigated the airline website, found his passport on Dropbox, extracted the relevant information, filled in the forms, and happily clicked through “I’m a human” CAPTCHA checks (since it controls a real browser on his computer, anti-bot systems can’t distinguish it from a human). Now it does check-ins in minutes, because the Skills system gives it persistent memory: it writes down the quirks of each website and remembers them next time.
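The Skills mechanism described above is, at its simplest, notes on disk that survive between runs. A minimal sketch of that idea, assuming a per-site markdown file of learned quirks (the `skills/` directory name, function names, and example note are all hypothetical, not OpenClaw's actual implementation):

```python
import pathlib

SKILLS = pathlib.Path("skills")  # hypothetical directory of per-site notes

def recall(site: str) -> str:
    """Return previously learned quirks for a site, if any."""
    f = SKILLS / f"{site}.md"
    return f.read_text() if f.exists() else ""

def learn(site: str, note: str) -> None:
    """Append a newly discovered quirk so the next run starts smarter."""
    SKILLS.mkdir(exist_ok=True)
    with (SKILLS / f"{site}.md").open("a") as fh:
        fh.write(f"- {note}\n")
```

Before each task, the agent reads the relevant file into its prompt; after the task, it appends what it learned. That loop is why the second check-in takes minutes instead of twenty.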

The 80% App Extinction Thesis

“This will blend away probably 80% of the apps that you have on your phone.”

The logic is straightforward. Every app that’s essentially a thin interface over an API becomes redundant when you have an infinitely resourceful assistant with full context about your life.

Why use MyFitnessPal when the AI already knows your eating patterns, can accept food photos, store data in a database, calculate calories, and roast you for hitting Kentucky Fried Chicken? Why use a separate to-do app, a flight check-in app, a sleep tracker, a smart home controller, or a shopping app when the assistant has API access to all those services?

The key insight isn’t just automation; it’s the elimination of context switching. Instead of opening five different apps, each with its own interface and data silo, you talk to one entity that knows everything and can act everywhere. And because it has persistent memory and learning, it gets better over time. The first flight check-in is slow. The second takes minutes. Each interaction teaches the system your preferences, your patterns, your quirks.

The community has already found use cases he never imagined: setting up family bots, managing Cloudflare infrastructure, creating GitHub issues from conversations, syncing Twitter bookmarks to to-do lists, tracking sleep patterns, building iOS apps, grocery shopping, and even managing someone’s entire Tesco shopping routine.

The Agentic Trap

Despite building one of the most agent-heavy personal setups imaginable, Steinberger is deeply skeptical of the “agentic workflow” trend in AI coding. He calls it “the agentic trap”: people discover agents are amazing, then fall into a rabbit hole building increasingly sophisticated orchestration tools. The problem is you end up building tools instead of building things that matter. And the worst part? It’s so fun that you don’t notice.

He spent two months building a VPN tunnel to access terminals from his phone, and it got so good he found himself vibe coding on his phone at a restaurant instead of talking to his friends. He had to stop for his mental health.

Specific targets of his criticism:

Multi-agent orchestrators like Gastown, where you run 20 agents simultaneously with watchers, overseers, and a “mayor” coordinating everything. His verdict: “I call it Slop Town.”

Loop-based coding (the “Ralph” pattern) where AI runs in a loop, completes small tasks, discards context, and starts fresh: “the ultimate token burn machine.” You can run it all night and produce “the ultimate slop.”

The core issue is taste. AI agents are “spiky smart,” excellent at specific tasks but incapable of having a vision for what the product should become.

“Those agents don’t really have taste. They are spiky smart, but if you don’t navigate them well, it’s still going to be slop.”

When someone showed off a fully “Ralphed” app on Twitter, Steinberger replied: “Yeah, it looks Ralph.” No sane person would design it that way. The 24-hour unattended agent run has become a vanity metric, a “size comparison contest.”

The Human-Machine Loop

Steinberger’s alternative to autonomous agent orchestration is radically simple: stay in the loop. His creative process with AI follows a specific pattern. He starts with a rough idea. As he builds and plays with it, his vision clarifies. Each prompt is informed by what he sees, feels, and thinks about the current state.

“My next prompt depends on what I see and feel and think about the current state of the project.”

You can’t front-load this into a spec. The whole “put everything into a spec up front” approach misses the human-machine loop. Building something good requires “having feelings in the loop.”

His practical setup reflects this philosophy. No MCPs (“I don’t use MCPs or any of that crap”). No worktrees. No complex orchestration. Instead: multiple checkouts of the repository (clawdbot 1, 2, 3, 4, 5), split-screen terminals, and whichever checkout is free gets the next task. It feels like running a factory, or like playing a real-time strategy game: managing multiple squads attacking different objectives.

He runs multiple sessions not because he needs parallelism, but because working on just one means too much waiting. With multiple, he stays in flow, and he’s “insanely more productive” than when he coded everything by hand.

Plan Mode Is a Hack, Context Is Solved

Two specific hot takes on AI coding tools:

Plan mode was a workaround, not a feature. It was something Anthropic had to add because earlier models were too trigger-happy and would immediately start writing code. With newer models (he prefers Codex over Claude Code), you just have a conversation: “Hey, I want to build this feature. Give me options. Let’s discuss.” The model proposes, you refine, and only then does building begin. He mostly talks to it rather than typing.

Context management is a solved problem. With Codex, context lasts much longer. On paper it’s maybe 30% more than Claude Code, but it feels like 2-3x. He attributes this partly to the internal thinking process of GPT models. Most features now fit within one context window for the entire discussion-and-build cycle. The elaborate context management strategies from earlier models are “old patterns.”

Pull Requests as Prompt Requests

Steinberger’s approach to community contributions captures his philosophy in miniature. His former business partner, a lawyer by training, now sends pull requests. Many contributors have never made a PR in their lives.

He treats these not as code to merge but as “prompt requests,” expressions of intent. Most contributors lack the system understanding to guide the model to optimal results. So he extracts the intent and rebuilds it himself, sometimes basing it off the PR, always marking contributors as co-authors. Very rarely does external code merge directly.

The project’s onboarding mirrors this philosophy: alongside the simple one-liner install, OpenClaw offers a “hackable install” where you clone the git repository. It’s the most fun way to use it, because the agent can read its own source code, reconfigure itself, restart, and “either crash or have new powers.”

The Expert’s Domain Migration

One thread worth highlighting: what AI means for experienced engineers switching stacks. Steinberger spent 20 years mastering the Apple ecosystem. Moving to TypeScript wasn’t hard conceptually; it was just painful. You understand arrays, props, and state management, but you don’t know the syntax. Every little lookup slows you down, and you feel like an idiot.

AI dissolved that friction entirely. System-level thinking, architecture decisions, taste in dependency choices, all of that transfers across domains. The only thing that didn’t transfer was syntax, and that’s exactly what AI handles best.

“I feel like I could build anything. Languages don’t matter anymore. My engineering thinking matters.”

He decided to build OpenClaw as a web app specifically because he was tired of Apple gating everything, and a browser-based tool would reach more people. A year ago, that decision would have meant months of painful learning. With AI, the transition was seamless.

Afterthoughts

This conversation reveals a paradox worth sitting with: the person who has wired AI into the most aspects of his daily life is also the most vocal critic of AI maximalism in coding workflows.

  • The messaging interface removes the biggest barrier to AI adoption. Not capability, not cost, not safety. It’s the fact that talking to a bot on WhatsApp feels like texting a friend, while opening a terminal feels like work. The technology disappears into a familiar social pattern.
  • “Play to learn” beats every tutorial. Steinberger explicitly rejects the skeptic pattern: spend one day evaluating AI, write an underspecified prompt, get poor results, dismiss the technology for a year. Prompting is a skill that develops through persistent, playful experimentation. One-day evaluations are meaningless.
  • The agentic trap is worth naming because the incentives are misaligned. Building meta-tools (orchestrators, loop systems, agent managers) is genuinely fun, which is exactly why it’s dangerous. Fun is not the same as productive.
  • The app extinction thesis has a timing question. The prediction is compelling, but it depends on APIs remaining open, stable, and affordable. Platform history suggests companies will fight this transition hard once they realize AI assistants are disintermediating their user relationships.