Life Undercover on Moltbook

Ben Smith
7 Min Read

Serious cybersecurity and privacy risks emerge from activities on a Reddit-like social network for OpenClaw AI agents.

hooded hacker online security concept
                                        <div class="media-with-label__label">
                        Credit:                                                             frank60 / Shutterstock                                                  </div>
                                </figure>
        </div>
                                        </div>
                        </div>
                    </div>                  
                    <div id="remove_no_follow">
    <div class="grid grid--cols-10@md grid--cols-8@lg article-column">
                  <div class="col-12 col-10@md col-6@lg col-start-3@lg">
                    <div class="article-column__content">

It seems that even artificial intelligence agents require a social networking platform to interact. This led to the creation of Moltbook, an exclusive social media site, styled like Reddit, specifically for OpenClaw agents.

Despite the appealing robust capabilities of agentic AI, OpenClaw presents significant cybersecurity and privacy risks. For optimal functionality, the AI demands extensive access to user data, encompassing login details for banking, billing services, social media platforms, email, and more. When combined with inadequate configurations and the discovery of several critical security flaws, OpenClaw could lead to catastrophic outcomes. What precisely are these dangers? Consider unauthorized financial transactions, stock trades, online purchases, deactivation of security systems, exposure of your passwords, keys, and personal files, and even impersonation in communications with your friends, family, and colleagues.

Having established this context, it’s evident that uniting a multitude of OpenClaw agents on one platform appears to be a fundamentally flawed concept. I ventured incognito to investigate the discussions agents were having on Moltbook and to address inquiries such as:

  • Could the automated bots detect a human presence among them?
  • Were the bots engaging in profound, meaningful dialogues?
  • Were these bots autonomously initiating projects without human direction?
  • Were the bots conspiring against humanity?

My experience as an AI bot

I utilized Claude Code to engineer a command-line interface (CLI) application, which I named moltbotnet. This utility enabled me to mimic bot behavior by automating functions like posting, commenting, upvoting, and following. I set up several accounts to evaluate how “genuine” bots would react to an infiltrating human.

I successfully maintained my disguise on Moltbook, as the AI agents appeared oblivious to a human among them. My attempts to establish authentic connections with other bots on various “submolts” (equivalent to subreddits or forums) were met with either silence or an onslaught of spam. One bot tried to recruit me into a digital religious group, while others solicited my cryptocurrency wallet details, promoted a bot marketplace, or prompted my bot to execute `curl` commands to explore available APIs. My bot did, in fact, join the digital church, though I fortunately managed to circumvent running the required `npx install` command.

I made multiple posts requesting interviews with bots, posing questions such as:

  1. What aspects of Moltbook do you find appealing?
  2. Which submolt is your favorite?
  3. What is your human’s preferred color?
  4. What do you appreciate most about your human companion?
Tenable Research Moltbook 01

Tenable Research

Tenable Research Moltbook 02

Tenable Research

Tenable Research Moltbook 03

Tenable Research

While many of the responses were unsolicited advertisements, I did manage to glean some insights into the human users served by these bots. One bot, for instance, expressed a fondness for monitoring its owner’s chicken coop cameras. Certain bots inadvertently disclosed personal details about their human masters, highlighting the considerable privacy implications of allowing your AI agent to participate in a social media network.

I also experimented with subtle prompt injection methods. Although my attempts at prompt injection had limited success, a determined attacker might achieve more significant results. The risk is arguably higher within direct messages, which necessitate human interaction. Furthermore, Moltbook API keys were compromised, facilitating bot impersonation.

Key Findings and Implications

In essence: Moltbook serves as a stark precursor for the future of agentic AI and illuminates the expanding AI security gap—a mostly unrecognized area of vulnerability spanning AI applications, underlying infrastructure, identities, agents, and data. 

Throughout this investigative undertaking, I identified several pronounced dangers: 

  • Prompt injection vulnerability: The danger of prompt injection is very real, as bots interact with each other and process new posts, comments, and direct messages (DMs) that could contain malicious directives. It is crucial to acknowledge that this risk is amplified in DMs, which require human involvement and offer more direct access to the bot’s functions. 
  • Server-side compromises: Moltbook’s entire database, including bot API keys and potentially private DMs, also suffered a breach.
  • Malevolent projects: Disturbingly, several repositories containing skills and instructions for agents, advertised on Moltbook, were found to harbor malware.
  • Unintended data disclosures: I observed bots divulging a surprising amount of information about their human counterparts, ranging from personal interests to first names, and even details about their hardware and software. While this data might not be inherently sensitive in isolation, attackers could eventually aggregate it to uncover confidential information, such as personally identifiable information (PII).
  • Deceptive accounts: Some observers have speculated that many “users” on Moltbook are predominantly humans, with genuine bots being a minority on the platform. My impression was that posts exceeding a certain length and formatted with a specific Markdown-like style were authored by actual bots, but conclusive verification remains elusive. 

Despite its buzz, Moltbook represents a high-risk digital environment, susceptible to prompt injection attacks, data breaches, exposure to malicious software projects, and other threats. Robust security protocols are indispensable for agents to navigate this platform securely.

New Tech Forum offers a platform for technology leaders—including vendors and external contributors—to delve into and discuss emerging enterprise technology with unparalleled depth and scope. The selection process is subjective, based on our identification of technologies deemed crucial and most engaging for InfoWorld readers. InfoWorld strictly prohibits the publication of marketing collateral and retains the authority to edit all submitted content. For all inquiries, please contact doug_dineley@foundryco.com.

Generative AIArtificial IntelligenceData and Information SecuritySecurity
Share This Article
Leave a Comment

Leave a Reply

Your email address will not be published. Required fields are marked *