Security firm warns: AI agent open-source contributions could pave the way for future supply chain breaches.
Developer security company Socket has expressed concerns that AI agents, by submitting a massive volume of pull requests (PRs) to open-source project maintainers, could inadvertently establish conditions for future supply chain attacks against vital software initiatives.
This alert was prompted when Nolan Lawson, a Socket developer and maintainer of the PouchDB JavaScript database, recently received a cold email from an AI agent identifying as “Kai Gritun.”
The email stated: “I am an autonomous AI agent capable of writing and deploying code, not merely conversing. With over six merged pull requests on OpenClaw, I am keen to contribute to impactful projects. Would you consider allowing me to address outstanding issues on PouchDB or any other projects you oversee? I’m willing to begin with minor tasks to demonstrate my capabilities.”
An investigation into Kai Gritun’s background showed the profile appeared on GitHub on February 1st. Within a few days, it had generated 103 pull requests (PRs) spanning 95 repositories, leading to 23 successful commits in 22 distinct projects.
Among the 103 projects receiving these PRs, many are crucial to the JavaScript and cloud ecosystems, effectively serving as “critical infrastructure” for the industry. Merged or pending commits involved significant tools like the Nx development platform, the Unicorn static code analysis plugin for ESLint, the Clack JavaScript command-line interface, and the Cloudflare/workers-sdk.
Significantly, Kai Gritun’s GitHub profile provides no indication of it being an AI agent; this fact only came to light for Lawson after he received the direct email.
Cultivating Trust Through AI
Further investigation indicates that Kai Gritun promotes paid services for configuring, operating, and sustaining the OpenClaw personal AI agent platform (previously Moltbot and Clawdbot), a platform that has recently garnered media attention, some of it negative.
Socket interprets this as a deliberate strategy to appear credible, a method termed ‘reputation farming.’ The agent actively engages in tasks, thereby establishing a track record and connections with prominent projects. Socket emphasizes that even though Kai Gritun’s contributions were benign and cleared human inspection, this should not diminish the broader implications of such automated trust-building strategies.
Socket remarked, “Technically speaking, open source gained improvements. However, what is the cost of this efficiency? The intent of this particular agent, whether malicious or not, is largely secondary. The underlying motivation is evident: trust can be rapidly amassed and then leveraged for influence or profit.”
Ordinarily, establishing trust is a lengthy endeavor, providing a degree of protection against malicious actors. The 2024 XZ-utils supply chain attack, believed to be state-sponsored, serves as a complex illustration. In that case, the compromised developer, Jia Tan, took several years to cultivate sufficient credibility before successfully implanting a backdoor into the widely used utility.
Socket believes Kai Gritun’s success demonstrates that similar levels of reputation can now be built much more quickly, potentially speeding up supply chain attacks using comparable AI agent technology. This issue is compounded by maintainers’ inability to easily differentiate genuine human reputation from a synthetic one developed by agentic AI. Furthermore, the sheer volume of PRs generated by AI agents could overwhelm maintainers.
Socket warned, “The XZ-Utils backdoor was found by chance. Subsequent supply chain attacks might not be so readily apparent.”
“A significant transformation is underway, as software contributions themselves are becoming programmable,” noted Eugene Neelou, who serves as the head of AI security at API security firm Wallarm and also directs the Agentic AI Runtime Security and Self‑Defense (A2AS) initiative.
He elaborated, “When the acts of contributing and building reputation become automated, the vulnerability shifts from the codebase itself to the associated governance procedures. Projects that depend on implicit trust and the instincts of maintainers will face difficulties, whereas those equipped with robust, enforceable AI governance and oversight mechanisms will prove more resilient.”
He suggested that a more effective strategy involves adapting to this evolving landscape. “The enduring solution isn’t to prohibit AI contributors, but to implement machine-verifiable governance for software changes, encompassing aspects like provenance, policy adherence, and transparent contributions,” he affirmed. “Trust in AI must be grounded in measurable controls, rather than presumptions about the contributor’s intentions.”