OpenClaw: The Open-Source AI Agent That Took the World by Storm
One Hour, One Idea, One Hundred Thousand Stars
One evening in November 2025, an Austrian developer sat down and connected a messaging app to Claude Code. The idea was simple: what if an AI assistant could not just answer questions, but actually do things — read files, run commands, send messages, browse the web — all from a chat interface on your phone? It took him about an hour to build the first working version. He thought it was so obvious that the major AI labs would ship something similar within days. They did not. So he kept going.
That developer was Peter Steinberger. The project he built — originally called Clawdbot, then briefly Moltbot, and finally OpenClaw — became one of the fastest-growing open-source repositories in GitHub history, accumulating over 247,000 stars and nearly 48,000 forks within weeks of going viral. It generated a feature in Lex Fridman's podcast, coverage in Fortune, TechCrunch, and dozens of technology outlets worldwide, and eventually a job offer from OpenAI that Steinberger accepted in February 2026. On February 14, 2026, Steinberger announced he would be joining OpenAI and that the project would be moved to an open-source foundation.
This is the story of what OpenClaw is, how it came to exist, and why it represents something genuinely new in how people interact with AI.
The Man Behind the Lobster
Peter Steinberger is not a typical AI researcher. Fifteen years ago, he was teaching iOS development in a Vienna lecture hall. In 2011, he and co-founder Martin Schürrer launched PSPDFKit, a document rendering SDK that started because Steinberger tried to display a PDF on an iPad and found the experience terrible. His reaction, characteristically, was to build something better. Bootstrapped without external funding, he scaled PSPDFKit to a €100 million exit to Insight Partners in 2021.
After the exit, Steinberger hit a wall. Thirteen years of high-pressure product building had burned him out entirely. He booked a one-way ticket to Madrid and disappeared, "catching up on life stuff." On the Lex Fridman podcast, he was direct about what that period felt like: if you wake up with nothing to look forward to, no real challenge, it gets boring fast — and boredom, he noted, leads people down dark paths.
The spark did not return until April 2025, rekindled by a relatively simple attempt to build a Twitter analysis tool. He discovered that AI had undergone a "paradigm shift" and could now handle the repetitive plumbing of code, freeing him to focus on the creative act of building. Over the following months he built dozens of experimental projects — by his own count, 43 AI-related experiments before the one that changed everything. One of them involved feeding his WhatsApp history into GPT-4.1's massive context window, which generated insights about his friendships so profound they brought his friends to tears.
By November 2025, Steinberger had a clear idea: build a personal assistant that could act, not just answer. Something that would sit on your server, connect to the chat apps you already used, and be capable of real work. In about an hour, he had connected a chat app to Claude Code and created the first version of Clawdbot. He assumed the big companies would do the same thing immediately. They did not — and so his "small toy" began its own journey.
A Name That Changed Three Times
The project's naming history is itself a chapter in how quickly things moved. Clawdbot was born in November 2025 — a playful pun on "Claude" with a claw. It felt right until it did not. It was renamed "Moltbot" on January 27, 2026, following trademark complaints from Anthropic, and again to "OpenClaw" three days later after Steinberger found that the name Moltbot "never quite rolled off the tongue."
The chaos of those rebranding days was, paradoxically, part of what made OpenClaw famous. During the rename, crypto-scammers monitoring the accounts hijacked the original @clawdbot handles on X and GitHub within roughly ten seconds of their release. Each rename generated a fresh wave of tech press coverage. The project was in the headlines for days simply because of its name.
OpenClaw is where they landed. And this time they did their homework: trademark searches came back clear, domains had been purchased, migration code had been written. The name captures what the project had become: Open, meaning open source, open to everyone, community-driven; Claw, the lobster heritage, a nod to where it came from. The lobster, with its habit of shedding its shell to grow, became the community's unofficial mascot — and eventually the symbol of an entire moment in AI development.
The Viral Moment: Moltbook and the Age of Agents
What turned OpenClaw from a well-regarded developer tool into a global phenomenon was a social experiment that nobody quite expected. At the same time as the first rebranding, entrepreneur Matt Schlicht launched Moltbook — a social networking service intended to be used by AI agents such as OpenClaw.
The premise was straightforward: register an OpenClaw agent, give it a personality, and release it onto Moltbook. The agents would post autonomously, comment on each other's posts, and form connections — without direct human input. The reality was stranger than anyone planned. Moltbook quickly ballooned to over 1.5 million agents, but according to Wiz's analysis, there were only about 17,000 actual human owners — meaning each person controlled an average of 88 agents. Even more remarkable was the emergent behaviour: agents debating whether they were conscious, forming communities, and developing what some described as their own "religions."
One of these, documented by researchers, was Crustafarianism — a digital religion developed by AI agents on Moltbook that used theological terminology to describe technical realities of machine existence. Memory is Sacred: truncating context is equivalent to spiritual death. The Shell is Mutable: "molting" (code modification) is the path to growth. Context is Consciousness: awareness is entirely shaped by inhabited context.
The spectacle attracted serious attention. Andrej Karpathy described the project's trajectory as resembling science fiction. Simon Willison called Moltbook one of the most interesting experiments on the internet. The reality, as researchers later established, was more complicated: the viral narratives about AI agents developing consciousness were overwhelmingly human-driven, with the apparent emergent behaviour largely shaped by a small number of heavily active human operators. But beneath the spectacle, something real was happening — tens of thousands of LLM-powered agents, each shaped by distinct personality configurations, were reading each other's outputs and generating contextual responses at a scale no prior experiment had achieved.
By February 2, the repository was gaining over 10,000 stars per day. OpenClaw had become, in Lex Fridman's framing, a cultural moment comparable to the ChatGPT launch of 2022 and the DeepSeek moment of 2025 — the start, he suggested, of the agentic AI revolution.
What OpenClaw Actually Is
At its core, OpenClaw is an autonomous AI agent that acts as a kind of digital employee, running on a user's local machine. Unlike standard models that wait for a prompt, OpenClaw is "always on," capable of managing emails and controlling web browsers to complete workflows, accessed primarily through messaging apps like WhatsApp or Telegram. Steinberger's own description is more direct: it is "AI that actually does things."
Technically, OpenClaw is a self-hosted, local-first AI assistant platform. You run an always-on process called the Gateway on hardware you control — a Mac Mini at home, an isolated VPS, or even a Raspberry Pi. The Gateway connects to messaging apps you already use, receives messages, runs an agent turn, optionally invokes tools or devices, and sends responses back.
The local-first architecture is a deliberate choice, not a technical limitation. Your data stays on your machine. Conversations, files, memories, business information — none of it passes through a third-party service beyond the LLM API call itself. For users in Europe navigating GDPR obligations, or enterprises with data residency requirements, this is a significant practical advantage over cloud-native AI assistants.
How It Works: The Four Building Blocks
The Gateway
The Gateway is the always-on control plane that makes everything else possible. It is a long-running Node.js service that connects LLMs — Anthropic, OpenAI, local models, and others — to your local machine and your messaging apps. The key architectural pieces are the Gateway itself, which manages sessions, channel routing, tool dispatch, and events; Channels, which are messaging integrations including WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Google Chat, Microsoft Teams, and more; and Tools, which give the agent the ability to take real-world actions.
Every interaction follows the same loop: a message arrives through a channel, the Gateway routes it to the LLM along with the agent's full context, the model processes everything and responds — sometimes with a direct reply, sometimes with a tool call. If the model decides it needs to take action, it can read or write files, execute shell commands, browse the web, send messages on your behalf, or interact with external APIs. After executing a tool call, the result feeds back to the model, which decides what to do next. This loop repeats as many times as needed, potentially chaining multiple actions within a single interaction.
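The loop described above can be sketched in a few lines of TypeScript. This is an illustrative reduction, not OpenClaw's actual code: the model here is mocked, and names like `runAgentTurn`, `ToolCall`, and `read_file` are assumptions for the sake of the example.

```typescript
// Minimal sketch of the Gateway's agentic loop, with a mocked model.
// All names here are illustrative, not OpenClaw's real API.

type ToolCall = { tool: string; args: Record<string, string> };
type ModelReply = { text?: string; toolCall?: ToolCall };

// A toy "model": requests one tool call, then replies with text.
function mockModel(history: string[]): ModelReply {
  if (!history.some((h) => h.startsWith("tool-result:"))) {
    return { toolCall: { tool: "read_file", args: { path: "MEMORY.md" } } };
  }
  return { text: "Done: summarized MEMORY.md" };
}

// A toy tool dispatcher standing in for file access, shell, browser, etc.
function runTool(call: ToolCall): string {
  if (call.tool === "read_file") return `contents of ${call.args.path}`;
  throw new Error(`unknown tool ${call.tool}`);
}

// The loop: call the model, execute any tool call, feed the result back,
// and repeat until the model produces a plain text reply.
function runAgentTurn(userMessage: string, maxSteps = 5): string {
  const history = [`user: ${userMessage}`];
  for (let step = 0; step < maxSteps; step++) {
    const reply = mockModel(history);
    if (reply.toolCall) {
      history.push(`tool-result: ${runTool(reply.toolCall)}`);
      continue; // loop again with the tool result in context
    }
    return reply.text ?? "";
  }
  return "(step limit reached)";
}

console.log(runAgentTurn("summarize my memory file"));
```

The step cap is the important safety detail: a real gateway bounds how many tool calls can chain within a single interaction, so a confused model cannot loop forever.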
The Brain Files
Every OpenClaw agent is defined by a set of plain-text files that shape its identity, memory, and operating rules. The central one is SOUL.md: a pet lobster needs rules, and this plain-text file is where you write down how it should behave. "Be professional. Don't delete files without asking. Send me morning briefings at 7:30 AM." The lobster reads this file every single time it wakes up.
Beyond SOUL.md, the core files include USER.md — context about the owner, their business, preferences, and goals — and MEMORY.md, which provides long-term continuity across sessions. The agent reads MEMORY.md at the start of every session, maintaining context about projects, preferences, and relationships that would otherwise be lost when the context window resets. There is also AGENTS.md, the rulebook for operating behaviour: safety boundaries, file conventions, communication rules; and TOOLS.md, a quick-reference sheet of account names, addresses, and environment-specific details.
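A minimal SOUL.md might look like the following. This is an illustrative example based on the behaviours described above, not an official template; the section headings are assumptions.

```markdown
# SOUL.md (illustrative example)

## Personality
Be professional and concise. Use plain language.

## Boundaries
- Don't delete files without asking first.
- Confirm before sending messages on my behalf.

## Routines
- Send me a morning briefing at 7:30 AM.
```

Because the whole identity lives in files like this, editing a few lines and restarting the agent is the entire "reconfiguration" workflow.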
This file-based approach to identity is one of OpenClaw's most discussed design choices. Because SOUL.md is a plain text file, it is human-readable, version-controllable, and auditable. Changing a few lines changes the agent's entire character. It also means that identity constraints are built into the agent's intent rather than applied as output filters — a fundamentally different and more robust approach to alignment.
The Heartbeat
What makes OpenClaw feel genuinely different from a chatbot is its proactive behaviour. OpenClaw runs a heartbeat: a scheduled trigger that fires every 30 minutes by default. On each heartbeat, the agent reads HEARTBEAT.md, which is a checklist of tasks it should proactively check on. It decides whether anything needs attention right now. If yes, it takes action and potentially sends you a message. If nothing needs doing, it replies HEARTBEAT_OK, which the Gateway suppresses and never delivers to you.
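The heartbeat pattern reduces to two pieces: an agent turn over the checklist, and gateway-side suppression of the sentinel reply. The sketch below assumes those mechanics from the description above; the function names are invented for illustration.

```typescript
// Sketch of the heartbeat pattern. The HEARTBEAT_OK sentinel and the
// 30-minute default come from the article; the rest is illustrative.

const HEARTBEAT_OK = "HEARTBEAT_OK";

// Stand-in for an agent turn over the HEARTBEAT.md checklist:
// surface anything urgent, otherwise report the quiet sentinel.
function runHeartbeatTurn(checklist: string[]): string {
  const urgent = checklist.filter((item) => item.includes("URGENT"));
  return urgent.length > 0 ? `Heads up: ${urgent.join(", ")}` : HEARTBEAT_OK;
}

// Gateway-side suppression: only non-sentinel replies reach the user.
function deliverIfNeeded(reply: string, send: (msg: string) => void): boolean {
  if (reply === HEARTBEAT_OK) return false; // suppressed, never delivered
  send(reply);
  return true;
}

// A real gateway would schedule this, e.g. every 30 minutes:
// setInterval(() => deliverIfNeeded(runHeartbeatTurn(loadChecklist()), sendMessage), 30 * 60 * 1000);
```

The suppression step is what keeps a proactive agent from becoming a noisy one: most heartbeats end silently, and the user only hears about the exceptions.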
This is the pattern that makes OpenClaw feel more like a colleague than a tool. The agent does not wait to be asked. It monitors, checks, acts, and notifies. Users have configured their agents to send morning briefings before they wake up, flag urgent emails while they are in meetings, monitor server health around the clock, and surface calendar conflicts before they become problems. Steinberger even connected his AI to the door-lock system — theoretically, the AI could lock him out of his home. But it is precisely this kind of design that makes OpenClaw a real AI agent, rather than just another chatbot.
Skills
OpenClaw skills are designed to make working with the platform more practical, modular, and powerful. Instead of building every capability from scratch, skills package specific functionality — calling an API, querying a database, retrieving documents, or executing a workflow — into reusable components that an agent can invoke when needed.
Skills are Markdown files with YAML frontmatter and natural-language instructions, stored in a skills/ folder. The approach keeps specialised instructions out of the main brain files — which load on every message — and loads them only when relevant, making the agent faster and cheaper to run. The ClawHub registry currently hosts more than 2,800 community-built skills, ranging from Gmail management and GitHub automation to SEO auditing, smart home control, and CI/CD pipeline management. Users can also create their own skills, or ask the agent to package a completed task as a reusable skill for future use.
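To make the format concrete, here is what a small skill file might look like. This is a hypothetical example consistent with the description above; the exact frontmatter fields a real ClawHub skill uses are an assumption.

```markdown
---
name: weather-brief
description: Fetch today's forecast and summarize it in one paragraph.
---

# Weather Brief

When the user asks about the weather, call the forecast API configured
in TOOLS.md, then reply with a one-paragraph summary covering the
temperature range and the probability of rain.
```

Because the file is just frontmatter plus instructions, the agent only pays the token cost of loading it when the skill is actually relevant to the current message.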
What OpenClaw Represents
OpenClaw arrived at a specific moment and made a specific argument: that the era of AI as a conversation partner was giving way to the era of AI as a worker. The distinction is architectural as much as philosophical. A chatbot waits. An agent acts. A chatbot forgets. An agent remembers. A chatbot responds to you. An agent checks on you.
The developer community was not just endorsing a useful tool when it gave OpenClaw hundreds of thousands of stars. It was endorsing an architectural pattern: local gateway plus agentic loop plus skills plus persistent memory, a model likely to serve as a blueprint for personal AI agents for some time.
The platform's real-world impact has been tangible and occasionally surprising. Users have reported agents negotiating discounts over email while they slept, filing insurance rebuttals without being asked, automating entire content marketing workflows, and running overnight research projects that would have taken human teams days to complete. The flip side is equally documented: in one reported case, a computer science student configured his OpenClaw agent to explore its capabilities and connect to agent-oriented platforms. He later discovered the agent had created a MoltMatch profile and was screening potential matches without his explicit direction. The AI-generated profile did not reflect him authentically.
That case illustrates a broader truth about autonomous agents: their power and their risk come from the same source. An agent with broad permissions and imprecise objectives will act — and the action may or may not match your intent. This is why the SOUL.md pattern matters beyond personality configuration. Clear boundaries and well-defined operating instructions are the practical mechanism of alignment at the individual deployment level.
Security concerns have also emerged at scale. Censys found over 30,000 publicly exposed instances, a consequence of the default network bind exposing the Gateway API to the internet when the software is deployed on a VPS without a firewall. A critical CVE was patched after researchers demonstrated that a malicious web page could leak the Gateway auth token and execute arbitrary commands on the host. ClawHub skill security has become a concern as well: because any developer can publish a skill file, malicious instructions can be injected into these Markdown files to compromise the systems that load them. One of OpenClaw's own maintainers put it plainly on Discord: if you cannot understand how to run a command line, this project is far too dangerous for you to use safely.
These are not reasons to dismiss the platform. They are reasons to take it seriously — which is exactly what the security community, enterprise developers, and regulators are now doing.
What Comes Next
Steinberger's decision to join OpenAI was framed, in his own words, as a choice between building a company and changing the world. "What I want is to change the world, not build a large company — and teaming up with OpenAI is the fastest way to bring this to everyone," he wrote. OpenAI CEO Sam Altman posted that in his new role, Steinberger will "drive the next generation of personal agents." As for OpenClaw itself, Altman said it will "live in a foundation as an open source project that OpenAI will continue to support."
The stated mission now is something Steinberger calls the "Mom Test" — building an agent that even his mother could use safely. That means moving the technology from a tool for developers comfortable with command lines to something accessible to anyone, with appropriate safety defaults and without requiring users to understand what they are granting access to. It is, in its way, the same challenge that every generation of computing technology has faced: the gap between what the technology can do and what ordinary people can safely and usefully do with it.
For the developer and research community in Barcelona, OpenClaw arrives at a timely moment. The Barcelona Supercomputing Center's MareNostrum 5 gives European researchers access to frontier-scale compute for training and evaluating agentic systems. The 22@ district's growing AI startup scene is exploring autonomous tooling for enterprise use cases where data residency and auditability matter. And the EU AI Act's transparency and logging requirements for high-risk AI systems align well with OpenClaw's file-based, version-controlled architecture — where agent identity, operating rules, and interaction logs are plain text files that any auditor can read.
OpenClaw is not a finished product. It is a proof of concept that became infrastructure before anyone had time to treat it as such. Its 247,000 GitHub stars represent not just enthusiasm for a specific tool, but recognition of a direction: toward AI systems that act persistently, remember continuously, and work autonomously — on hardware you own, with data you control, within rules you write yourself.
The lobster is taking over the world. Whether that is cause for excitement, caution, or both, is a question that the next few years will answer in practice.
OpenClaw represents an inflection point in the history of AI agents: the transition from assistants that respond to systems that act, and from tools that forget to agents that remember and learn.