Criminals are building an AI black market.

Howard Solomon

Pillar Security researchers reveal how threat actors are profiting from unsecured LLMs and MCP endpoints. CSOs: here’s how to mitigate the risk.

 

For years, CSOs have been vigilant about unauthorized cryptomining on their IT infrastructure. Now, according to new research, they must also contend with criminals hijacking and reselling access to exposed corporate AI resources.

In a report released on Wednesday, Pillar Security researchers unveiled widespread campaigns targeting vulnerable large language model (LLM) and Model Context Protocol (MCP) endpoints—such as an AI-driven customer support chatbot on a website.

“It’s truly concerning,” stated Ariel Fogel, a co-author of the report. “We’ve uncovered an active criminal enterprise where individuals are attempting to steal your credentials, exploit your LLM access and computational resources, and then resell them.”

“Rapid action to block this type of threat is essential, depending on your application,” added co-author Eilon Cohen. “Ultimately, you don’t want your valuable resources being misused by others. If you’re deploying systems that access critical assets, you need to respond immediately.”

Kellman Meghu, CTO at Canadian incident response firm DeepCove Security, warned that this campaign “is poised to escalate to catastrophic levels. The most alarming aspect is the minimal technical expertise required for exploitation.”

The scale of these campaigns is significant: the researchers’ honeypots alone detected 35,000 attack sessions hunting for exposed AI infrastructure within just a few weeks.

 

“This is not an isolated incident,” Fogel added. “It’s a genuine business operation.” He suspects the campaigns are managed by a small group, not a nation-state.

The attackers aim to: steal compute resources for unauthorized LLM inference requests, resell API access at discounted rates via criminal markets, exfiltrate data from LLM context windows and conversation history, and pivot to internal systems through compromised MCP servers.

Two Active Campaigns

The researchers have pinpointed two distinct campaigns: “Operation Bizarre Bazaar,” which targets unprotected LLMs, and another campaign specifically focused on Model Context Protocol (MCP) endpoints.

Locating these exposed endpoints isn’t difficult: the threat actors rely on readily available tools such as the Shodan and Censys internet-wide search engines.

Organizations at risk include those running self-hosted LLM infrastructure (such as Ollama, software for serving LLM requests locally; vLLM, a high-performance alternative to Ollama; and various local AI implementations) or those deploying MCP servers for AI integrations.

Specific targets include:

 
  • exposed endpoints on default ports of popular LLM inference services;
  • unauthenticated API access lacking proper controls;
  • development/staging environments with public IP addresses;
  • MCP servers connecting LLMs to file systems, databases, and internal APIs.

Common misconfigurations exploited by these threat actors include the following (a quick way to check your own exposure appears after the list):

  • Ollama instances operating on port 11434 without authentication;
  • OpenAI-compatible APIs on port 8000 exposed to the public internet;
  • MCP servers accessible without proper access controls;
  • development/staging AI infrastructure with publicly exposed IPs;
  • production chatbot endpoints (e.g., customer support, sales bots) lacking authentication or rate limiting.
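What the scanners look for can be approximated with a quick self-check. The Python sketch below is an illustration rather than anything taken from the Pillar Security report: it probes the two default ports listed above — Ollama’s /api/tags route and the /v1/models route served by vLLM and other OpenAI-compatible APIs — against addresses you control (the TARGETS list is a placeholder). An unauthenticated JSON reply on either path means the endpoint is discoverable in exactly the way these campaigns exploit.

```python
# Quick exposure self-check (illustrative sketch; run only against hosts you own).
# The TARGETS list is a placeholder, not a value from the report.
import urllib.request

TARGETS = ["203.0.113.10"]  # replace with your own externally reachable addresses

PROBES = [
    ("Ollama", 11434, "/api/tags"),                 # Ollama's model-listing route
    ("OpenAI-compatible API", 8000, "/v1/models"),  # served by vLLM and similar servers
]

for host in TARGETS:
    for name, port, path in PROBES:
        url = f"http://{host}:{port}{path}"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = resp.read(2048).decode("utf-8", errors="replace")
                # A model list returned without credentials means the endpoint
                # is exposed in exactly the way the scanners are hunting for.
                print(f"[EXPOSED] {name} at {url}: {body[:120]}")
        except Exception as exc:
            print(f"[ok or unreachable] {name} at {url}: {exc}")
```

If either probe returns a model list, the service is answering anonymous requests from the internet and should be pulled behind authentication or a firewall without delay.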

George Gerchow, Chief Security Officer at Bedrock Data, commented that Operation Bizarre Bazaar “clearly indicates that attackers have moved beyond opportunistic LLM misuse and now view exposed AI infrastructure as a valuable, monetizable attack surface. The concern extends beyond unauthorized compute usage to the fact that many of these endpoints are now linked to the Model Context Protocol (MCP), a new open standard for securely connecting LLMs to data sources and tools. MCP offers powerful real-time context and autonomous capabilities, but without robust controls, these integration points become critical pivot vectors into internal systems.”

Defenders must secure AI services with the same rigor applied to APIs or databases, he emphasized, prioritizing authentication, telemetry, and threat modeling early in the development lifecycle. “As MCP becomes fundamental to modern AI integrations, securing these protocol interfaces—not just model access—must be a top priority,” he concluded.

In an interview, report authors Cohen and Fogel could not estimate how much revenue the threat actors have generated so far. However, they stressed the urgency for CSOs and infosec leaders to act quickly, especially if an LLM is connected to sensitive data.

Their report detailed three key components of the Bizarre Bazaar campaign:

  • the scanner: A distributed bot network systematically probes the internet for exposed AI endpoints, cataloging every unprotected Ollama instance, unauthenticated vLLM server, and accessible MCP endpoint. Exploitation attempts typically follow within hours of detection;
  • the validator: Once scanners identify targets, infrastructure linked to an alleged criminal site verifies endpoint validity through API testing. During a focused operational period, attackers tested placeholder API keys, enumerated model capabilities, and assessed response quality;
  • the marketplace: Discounted access to over 30 LLM providers is sold on a platform called The Unified LLM API Gateway. This site is hosted on robust infrastructure in the Netherlands and promoted via Discord and Telegram.

The researchers noted that the current buyers of this illicit access appear to be individuals building their own AI applications and looking to cut costs, along with people involved in online gaming.

Threat actors aren’t just stealing AI access from fully developed applications, the researchers added. Even a developer carelessly prototyping an app without securing a server could fall victim to credential theft.

Joseph Steinberg, a US-based AI and cybersecurity expert, commented that the report underscores how emerging technologies like artificial intelligence introduce new risks and necessitate novel security solutions beyond conventional IT controls.

CSOs must evaluate whether their organization possesses the expertise to deploy and protect an AI project safely, or whether outsourcing to a specialized provider is the more prudent approach.

Mitigation Strategies

Pillar Security advises CSOs managing externally facing LLMs and MCP servers to take the following steps:

 
  • Enable authentication on all LLM endpoints. Implementing mandatory authentication thwarts opportunistic attacks. Organizations should confirm that Ollama, vLLM, and similar services demand valid credentials for all requests (a sketch combining this with the deny-list and rate-limiting steps below follows this list);
  • Audit MCP server exposure. MCP servers must never be directly accessible from the internet. Verify firewall rules, review cloud security groups, and confirm authentication requirements;
  • Block known malicious infrastructure. Add the 204.76.203.0/24 subnet to your deny lists. For the MCP reconnaissance campaign, block AS135377 ranges;
  • Implement rate limiting. Deploy rate limiting to prevent rapid exploitation attempts. Utilize WAF/CDN rules tailored for AI-specific traffic patterns;
  • Audit production chatbot exposure. Every customer-facing chatbot, sales assistant, and internal AI agent must incorporate robust security controls to prevent misuse.
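To make the first, third, and fourth recommendations concrete, here is a minimal Python sketch of a gate that could sit in front of an LLM endpoint: it rejects requests without a valid API key, drops traffic from the subnet flagged in the report, and applies a simple sliding-window rate limit. The key store, window size, and request threshold are illustrative placeholders rather than values from Pillar Security.

```python
# Minimal request gate for an LLM endpoint: authentication, deny list, rate limit.
# Keys, limits, and thresholds below are illustrative placeholders.
import ipaddress
import time
from collections import defaultdict, deque

VALID_API_KEYS = {"replace-with-a-real-secret"}             # placeholder key store
DENY_NETWORKS = [ipaddress.ip_network("204.76.203.0/24")]   # subnet named in the report
MAX_REQUESTS = 30        # per client, per window (illustrative)
WINDOW_SECONDS = 60

_request_log = defaultdict(deque)

def allow_request(client_ip: str, api_key: str) -> bool:
    """Return True only if the request passes all three gates."""
    # Gate 1: mandatory authentication on every request.
    if api_key not in VALID_API_KEYS:
        return False

    # Gate 2: deny list for known-malicious infrastructure.
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in DENY_NETWORKS):
        return False

    # Gate 3: simple sliding-window rate limit per client IP.
    now = time.monotonic()
    window = _request_log[client_ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False
    window.append(now)
    return True

# allow_request("198.51.100.7", "replace-with-a-real-secret")  -> True
# allow_request("204.76.203.15", "replace-with-a-real-secret") -> False (denied subnet)
```

In practice these checks would live in a reverse proxy, API gateway, or WAF rule rather than in application code, but the three gates are the same.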

Don’t Capitulate

Despite numerous recent reports on AI vulnerabilities, Meghu emphasized that abandoning AI is not the solution. Instead, organizations should impose stringent controls on its use. “Don’t simply ban it; instead, shed light on it and help your users grasp the risks, while also developing safe methods for them to leverage AI/LLM for business benefit,” he advised.

“It’s likely time for dedicated training on AI usage and associated risks,” he added. “Ensure you gather feedback from users on how they wish to interact with AI services and proactively support their needs. Banning it outright pushes users into shadow IT, and the potential consequences are too severe to risk people concealing their AI activities. Embrace it and integrate it into your communications and planning with employees.”

This article was originally published on CSOonline.
