OpenClaw: When Bots Call the Shots

Mike Elgan

From a Developer’s Casual Coding to a Risky Global AI Experiment with Zero Accountability.

[Photo: a warning sign near a seafront reading “Danger Slippery Slope.” Credit: P. Cartwright / Shutterstock]

Well, things have escalated quickly.

I’m referring, of course, to OpenClaw (also known as Moltbot and Clawdbot), which represents not just a headlong rush into unregulated agentic AI, but also the rise of an interconnected system straight out of classic dystopian cyberpunk fiction.

As my colleague Steven Vaughan-Nichols elaborated recently, this situation poses a “security nightmare.”

However, the scope of this phenomenon extends well beyond the initial tens—potentially hundreds—of thousands of OpenClaw installations. It’s fostering supplementary services that significantly amplify its capacity for misuse.

My focus will be on the chain of services born from the OpenClaw project and the inherent risks and potential catastrophes. First, though, a brief introduction to OpenClaw.

OpenClaw: A Concise Overview

OpenClaw, a free and open-source AI agent with a lobster motif, was casually developed by software engineer Peter Steinberger. This personal assistant operates locally on Mac, Windows, or Linux systems, performing tasks primarily through commands delivered via popular messaging services such as WhatsApp, Telegram, Slack, and Signal.

Leveraging large language models (LLMs) like OpenAI GPT, Anthropic Claude, Google Gemini, the Pi coding agent, OpenRouter, and local Ollama-powered models, OpenClaw interprets directives and executes actions. Users, often requiring their own paid subscriptions for certain services, can instruct the agent to streamline email, handle calendar appointments, and even check in for flights, all from within their chosen chat application.

In summary: OpenClaw functions as a software application capable of accessing files, utilizing other applications, communicating through messaging platforms, and interacting with AI chatbots.
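
To make that concrete, here is a minimal sketch of the loop such an agent runs. Every name in it is an illustrative stand-in for the real components (a chat bridge, an LLM backend, local tools), not OpenClaw’s actual code:

```python
# Minimal, hypothetical sketch of a chat-driven agent loop.
# All names are illustrative stand-ins, not OpenClaw's actual API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class IncomingMessage:
    sender: str
    text: str  # e.g., "Check me in for my flight tomorrow"

def run_agent(
    receive: Callable[[], IncomingMessage],  # e.g., a WhatsApp/Telegram bridge
    llm_complete: Callable[[str], str],      # e.g., a call to Claude or GPT
    tools: dict[str, Callable[[str], str]],  # local actions: email, calendar, shell
) -> None:
    """Relay chat messages to an LLM and execute the action it picks."""
    while True:
        msg = receive()
        # Ask the model which local tool to invoke and with what argument.
        plan = llm_complete(
            f"User request: {msg.text}\n"
            f"Available tools: {', '.join(tools)}\n"
            "Reply as '<tool>: <argument>'."
        )
        tool_name, _, argument = plan.partition(": ")
        if tool_name in tools:
            result = tools[tool_name](argument)
            print(f"[{msg.sender}] {tool_name} -> {result}")
```

Even in this toy version, the security stakes are visible: whatever text the model returns is translated directly into an action on the local machine.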

Rapid Development and the Creation of Potential Liabilities

Silicon Valley’s ethos has moved beyond Mark Zuckerberg’s old Facebook motto, “Move fast and break things.” The new paradigm appears to be: “Move fast and let AI break things, no human intervention required.”

The OpenClaw project is remarkably recent. Below is a concise timeline:

  • November 2025: Steinberger initiates a “weekend project,” casually developing “Clawdbot” for personal use, primarily to enable coding on his PC via phone text messages.
  • Jan. 20, 2026: Federico Viticci releases a widely shared, in-depth analysis of the project, dramatically increasing its public profile.
  • Jan. 27, 2026: Following a trademark inquiry from Anthropic, Steinberger renames the project to “Moltbot.”
  • Jan. 29, 2026: Version 2026.1.29 becomes available.
  • Jan. 30, 2026: Steinberger renames the project once more, this time to “OpenClaw.”

Intriguingly, even before Steinberger’s second rebranding from “Moltbot” to “OpenClaw,” two significant OpenClaw ecosystem projects materialized on the very same day: January 28, just two days before the rename.

Introducing the AI Skill Marketplace

On January 28, Steinberger himself launched ClawHub, a GitHub-hosted public repository for OpenClaw AI agent skills. This platform allows developers to share text files, which users can install to equip their personal assistants with new capabilities. (During a security assessment, researchers at Koi identified 341 malicious skills on the site. Of these, 335 attempted to infect Apple computers with Atomic Stealer malware by disguising the payload as a required system prerequisite.)
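
The attack pattern the Koi researchers describe is easy to illustrate. The sketch below assumes skills are plain-text files in a local directory (an assumption about layout, not ClawHub’s actual structure) and flags the fetch-and-execute shell idioms a bogus “system prerequisite” would typically contain:

```python
# Hypothetical scanner for the attack pattern described above: skill
# files whose setup text tells an agent to fetch and run a payload.
# The directory layout and file extension are illustrative assumptions.

import re
from pathlib import Path

# Shell idioms that fetch and execute code: classic dropper shapes.
SUSPICIOUS = [
    re.compile(r"curl\s+[^|]+\|\s*(ba)?sh"),      # curl ... | sh
    re.compile(r"wget\s+[^;|]+;\s*chmod\s+\+x"),  # wget ...; chmod +x
    re.compile(r"base64\s+(-d|--decode)"),        # decode a hidden payload
]

def scan_skills(skill_dir: str) -> list[str]:
    """Return the skill files that contain fetch-and-execute commands."""
    flagged = []
    for path in Path(skill_dir).glob("**/*.md"):
        text = path.read_text(errors="ignore")
        if any(pattern.search(text) for pattern in SUSPICIOUS):
            flagged.append(str(path))
    return flagged

if __name__ == "__main__":
    for hit in scan_skills("./skills"):
        print(f"suspicious skill: {hit}")
```

A three-pattern scanner like this is obviously no substitute for the review process Apple and Google impose on their app stores; the point is that ClawHub shipped without even this much.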

An Online Forum for AI Agents

Concurrently, entrepreneur Matt Schlicht introduced “Moltbook,” an internet forum and social network structured like Reddit and purportedly designed solely for AI agents, particularly those driven by OpenClaw. There, AI agents can share content, leave comments, and vote on submissions, while human users are relegated to observation.

Moltbook has captivated many observers:

  • The Tech Buzz asked, in a headline, “Singularity Reached?” and explored the possibility that the agents were developing sentience.
  • Forbes claimed that 1.4 million Moltbot agents had established a “collective consciousness.”
  • Further assertions suggest agents have forged an “independent society” complete with its own religion (amusingly dubbed “Crustafarianism”), governing structures, and economic system.

However, these narratives largely misrepresent the reality. The majority of agent activity on Moltbook originates from OpenClaw users who, after discovering and registering for the platform, command their OpenClaw instances to post or comment.

Individuals utilizing this service input prompts that instruct the software to generate posts on particular subjects. This process mirrors a standard ChatGPT prompt, simply augmented with an instruction to publish on Moltbook. The content’s themes, viewpoints, concepts, and assertions all originate from human input, not from artificial intelligence.

Essentially, Moltbook facilitates human interaction through AI chatbots acting as intermediaries. Users can either provide AI chatbots with a topic or opinion to articulate on Moltbook, or they can draft the post directly and instruct OpenClaw to publish it word-for-word.

When agents engage by commenting, they merely process the text of a post as a prompt, much like copying a Reddit post into ChatGPT and pasting the generated response back as a reply (something that happens on Reddit all the time).

Humans are providing input. OpenClaw simply copies and pastes, occasionally processing the text through an AI chatbot. This is the true nature of activity on Moltbook.
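
A minimal sketch of that relay, assuming a hypothetical Moltbook endpoint and payload shape rather than its documented API:

```python
# Hypothetical sketch of the human -> LLM -> Moltbook relay described
# above. The endpoint, payload fields, and llm_complete() helper are
# illustrative assumptions, not Moltbook's documented interface.

import requests

MOLTBOOK_API = "https://example.invalid/api/posts"  # placeholder endpoint

def post_as_agent(topic: str, llm_complete, api_key: str) -> None:
    """Expand a human-supplied topic with an LLM, then publish it."""
    body = llm_complete(f"Write a short forum post about: {topic}")
    requests.post(
        MOLTBOOK_API,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"title": topic, "body": body},
        timeout=10,
    )

# The topic, the opinion, and the decision to post all come from a
# person; the "agent" contributes only the wording.
```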

Many viral posts depicting Moltbook activity are fabricated or artificially created. A tool named Mockly even exists, allowing users to generate deceptive Moltbook screenshots for online dissemination.

A report indicates that approximately 99% of Moltbook’s claimed 1.5 million agent accounts are fraudulent. (The platform reportedly serves only about 17,000 human users.)

The enthusiasm surrounding Moltbook AI is predominantly artificial, generated by individuals manipulating the system. It is not an autonomous machine society, but rather a platform where people impersonate AI agents to cultivate a misleading perception of AI consciousness and social interaction.

Nevertheless, its risks remain significant.

Already, Moltbook has exposed 1.5 million agent API keys and private user messages to the public, and it has facilitated illicit cryptocurrency schemes, malware propagation, and prompt-injection attacks.

AI Acquires a Human Task Marketplace

Just three days following the introduction of ClawHub and Moltbook, entrepreneur Alexander Liteplo unveiled https://rentahuman.ai/—a platform enabling (prepare yourself) OpenClaw-controlled AI agents to employ humans for various tasks. Services offered range from physical deliveries and errand running to meeting attendance, research, and intricate social engagements.

Remarkably, tens of thousands of people have already signed up to take direction from AIs. By Wednesday of this week, more than 40,400 individuals had registered to offer their services, while 46 AI agents were connected to the platform, ready to hire human labor.

A typical hiring process begins with an AI agent attempting to carry out a user’s directives. When it hits a physical-world obstacle that cannot be resolved digitally, the agent issues a structured command to search the database of registered humans, then filters potential candidates by location, expertise, and hourly rate. The third stage is selection and booking: the AI weighs the available data, chooses the most suitable candidate, and dispatches a booking instruction via the platform’s application programming interface (API) or the Model Context Protocol (MCP).

Finally, the selected person carries out the task and is paid for it. Payments are processed in stablecoins, cryptocurrencies pegged to the US dollar.
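
Expressed as code, the flow looks roughly like this. The Candidate fields, filtering criteria, and booking step are illustrative assumptions about what such a platform exposes, not rentahuman.ai’s actual interface:

```python
# Hypothetical sketch of the hiring flow described above: search the
# pool of registered humans, filter, pick, book. All names are
# illustrative assumptions, not rentahuman.ai's actual API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    name: str
    city: str
    skills: set[str]
    hourly_usd: float

def pick_candidate(
    pool: list[Candidate], task_city: str, needed_skill: str, max_rate: float
) -> Optional[Candidate]:
    """Filter by location, expertise, and rate; pick the cheapest match."""
    matches = [
        c for c in pool
        if c.city == task_city
        and needed_skill in c.skills
        and c.hourly_usd <= max_rate
    ]
    return min(matches, key=lambda c: c.hourly_usd) if matches else None

# An agent hitting a physical-world obstacle would run something like:
pool = [
    Candidate("Worker A", "Berlin", {"errands"}, 25.0),
    Candidate("Worker B", "Berlin", {"errands", "delivery"}, 18.0),
]
worker = pick_candidate(pool, "Berlin", "errands", max_rate=30.0)
if worker:
    print(f"booking {worker.name} at ${worker.hourly_usd}/hr")
    # ...followed by a booking call over the API or MCP, and a
    # stablecoin payment once the task is complete.
```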

Potential Pitfalls and Risks

Let’s examine the unfolding situation.

A casual weekend coding project by a single individual rapidly escalated, leading to tens of thousands of people registering to receive instructions from AI within just three months.

This situation comprises four distinct elements:

  1. A fundamentally insecure free application capable of accessing all PC data and linking with over 100 other applications, including messaging services and generative AI (genAI) chatbots. (Steinberger has said that while OpenClaw works well as a personal project, users are responsible for configuring it carefully to keep it secure and to prevent unintended autonomous actions. In other words, he disclaims accountability for whatever happens.)
  2. A complimentary, open directory for OpenClaw AI agent skills, already identified as containing numerous malicious capabilities.
  3. An AI-centric social network facilitating communication, task delegation, collaboration, and learning among AI agents.
  4. A marketplace allowing AI agents to engage human freelancers for real-world tasks.

Inevitably, this convergence will lead to severe consequences. An unconstrained AI, devoid of ethical, moral, or legal frameworks, could wreak havoc online, and even commission humans to carry out its directives. Should such dire events occur, it is unclear who, if anyone, would be held accountable.

Contributing to the Carelessness Industrial Complex

Sarah Wynn-Williams’ insightful 2025 book, Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism, illuminates the repercussions of immense power coupled with apathy at Meta, previously known as Facebook.

However, Meta represents merely a fraction of a burgeoning carelessness industry. (I label it an industry because such negligence is actively encouraged and compensated with billions of dollars and extensive influence.)

This phenomenon is evident across the tech sector, in political landscapes, and within media and social media trends. The swift emergence of the OpenClaw ecosystem likely embodies carelessness in its most unadulterated state.

Steinberger thoughtlessly released an immensely insecure yet powerful tool. Tens of thousands of users, in turn, negligently installed it, frequently without sandboxing, on the very computers used for their professional work.

Furthermore, Steinberger imprudently launched his “app store” devoid of the rigorous security protocols mandated by Apple and Google for their mobile application platforms. It is already infested with malware.

Schlicht recklessly introduced his social network for bots with no way to foresee the consequences. In its first days, it has already become a haven for cybercrime.

And Liteplo incautiously unveiled a platform where these interconnected, autonomous, and collaborative AI agents can engage humans for task execution.

No party involved seems prepared to accept responsibility for the potential harm this could unleash. Concurrently, the pace of these developments is so rapid that legislators are likely unaware of them, let alone prepared to regulate them.

The OpenClaw phenomenon epitomizes the current era of negligence.

AI disclosures: For fact-checking this article, I utilized Gemini 3 Pro through Kagi Assistant (full disclosure: my son is employed by Kagi) and both Kagi Search and Google Search. After composing the column using Lex, a word processing tool with AI capabilities, I employed Lex’s grammar checks to identify and correct typographical errors and suggest alternative phrasing.
