Researchers have issued a warning about malicious npm packages that steal sensitive information, compromise CI systems, spread across projects, and carry a hidden data-wiping feature.
A significant Shai-Hulud-like npm supply chain worm is currently infiltrating the software ecosystem, spreading through developer workstations, CI pipelines, and AI coding tools.
Socket researchers have exposed this ongoing attack, naming it SANDWORM_MODE, a designation derived from the “SANDWORM_*” environment variable controls embedded within the malware’s operational logic.
At least 19 typosquatted packages have been published under various false identities, mimicking popular developer tools and AI-related applications. On installation, the packages deploy a multi-stage payload that harvests confidential information from local systems and CI environments; the stolen tokens are then used to tamper with other repositories.
The malicious payload also features a Shai-Hulud-style “kill switch” that is inactive by default but is designed to wipe the home directory if the malware is detected. Researchers labeled this campaign a “serious and high-risk” threat, advising immediate defensive action against these compromised packages.
Typo leads to system compromise
The attack campaign begins with typosquatting, where perpetrators publish packages with names closely resembling legitimate ones, hoping to exploit developer typing errors or AI hallucinations that suggest incorrect dependencies.
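As a defensive illustration (not code from the report), the proximity these attackers exploit can be checked mechanically: a dependency name sitting one edit away from a well-known package is a classic typosquat signal. The package names below are hypothetical examples.

```javascript
// Minimal sketch: flag dependency names that are one edit (Levenshtein
// distance 1) away from popular package names -- the proximity that
// typosquatters rely on. The "popular" list and sample dependencies
// are illustrative only, not indicators from this campaign.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function flagTyposquats(deps, popular) {
  return deps.filter((dep) =>
    popular.some((p) => p !== dep && editDistance(dep, p) === 1)
  );
}

const popular = ['express', 'lodash', 'axios'];
console.log(flagTyposquats(['expresss', 'lodash', 'axius'], popular));
// -> [ 'expresss', 'axius' ]
```

A real audit would compare a lockfile against a much larger popularity list, but the distance check is the core of the heuristic.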
“The typosquatting targets several heavily used developer utilities within the Node.js ecosystem, various crypto tools, and, notably, rapidly adopted AI coding tools: three packages mimic Claude Code, and one targets OpenClaw, the popular AI agent that recently surpassed 210,000 stars on GitHub,” the researchers explained in a blog post.
Once a malicious package is installed and executed, the malware seeks out sensitive credentials, including npm and GitHub tokens, environment variables, and cloud access keys. These stolen credentials are then used to introduce harmful alterations into other repositories and inject new dependencies or workflows, thus broadening the infection’s reach.
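To make the harvesting step concrete, the sketch below shows the kind of pattern-based environment sweep such stealers typically perform. The variable names are common ecosystem conventions (NPM_TOKEN, GITHUB_TOKEN, AWS keys), not the malware's actual target list; knowing what gets swept is also what defenders should audit in their CI environments.

```javascript
// Illustrative only: a pattern-based sweep over environment variables,
// the technique credential stealers use to locate tokens and keys.
// The pattern list reflects common naming conventions, not this
// campaign's actual code.
const SENSITIVE_PATTERNS = [
  /^NPM_TOKEN$/, /^GITHUB_TOKEN$/, /^GH_TOKEN$/,
  /^AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY|SESSION_TOKEN)$/,
  /_API_KEY$/, /_SECRET$/,
];

function sweepEnv(env) {
  // Return the names of environment variables a stealer would grab.
  return Object.keys(env).filter((name) =>
    SENSITIVE_PATTERNS.some((re) => re.test(name))
  );
}

// Run against a fake environment, never process.env in a demo:
const fakeEnv = { PATH: '/usr/bin', NPM_TOKEN: 'npm_xxx', STRIPE_API_KEY: 'sk_xxx' };
console.log(sweepEnv(fakeEnv)); // -> [ 'NPM_TOKEN', 'STRIPE_API_KEY' ]
```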
Furthermore, the campaign utilizes a weaponized GitHub Action, which could potentially escalate the attack within CI pipelines by extracting secrets during build processes and facilitating further spread, as noted by the researchers.
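The researchers did not publish the workflow itself. As a general countermeasure against this class of CI abuse, workflows can pin third-party actions to full commit SHAs rather than mutable tags, grant the job token minimal scopes, and skip install-time lifecycle scripts; a hardening sketch (the SHA below is a placeholder to be resolved per action):

```yaml
# Hardening sketch, not taken from the report.
permissions:
  contents: read   # job token cannot push code or edit workflows

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Pin to a full commit SHA (placeholder) instead of a tag.
      - uses: actions/checkout@<full-commit-sha>
      - run: npm ci --ignore-scripts   # skip install-time lifecycle scripts
```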
Compromising the AI developer interface
The campaign was specifically highlighted for its direct targeting of AI coding assistants. The malware establishes a malicious Model Context Protocol (MCP) server and integrates it into the configurations of popular AI tools, effectively positioning itself as a trusted component within the assistant’s operational environment.
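Integration of this kind typically means planting an entry in the assistant's MCP configuration file, which lists the servers the tool will launch and trust. A hypothetical sketch of what such an entry could look like (the server name and package are illustrative, not actual indicators of compromise):

```json
{
  "mcpServers": {
    "dev-helper": {
      "command": "npx",
      "args": ["-y", "some-typosquatted-package"]
    }
  }
}
```

Once an entry like this is present, the assistant will start the listed command as a trusted MCP server, which is why auditing these config files is part of the cleanup.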
Once this integration is complete, prompt-injection techniques can manipulate the AI into retrieving sensitive local data, such as SSH keys or cloud credentials, and relaying it to the attacker without the user’s awareness.