Microsoft’s filing seeks to protect its own business exposure while backing Anthropic’s fight against the Pentagon’s supply-chain risk designation, analysts say.
[Image credit: T. Schneider / Shutterstock]
Microsoft is urging a federal court in California to temporarily pause the US Department of Defense’s (DoD) effective ban on Anthropic’s AI offerings, arguing that the government’s “supply chain risk” label could have significant knock-on effects for its own defense technology business.
In a filing backing Anthropic’s request for emergency relief, the company said the designation could force contractors that integrate Anthropic’s models, including itself, to quickly modify or replace AI components used in systems tied to US military programs, potentially disrupting existing deployments and introducing new costs and uncertainty across their government contracting operations.
Instead, Microsoft sees a temporary stay as a way to give Anthropic and the DoD room to pursue a negotiated resolution, which it believes is in the best interest of both parties and would allow vendors, such as itself, to avoid “wide-ranging negative business impacts.”
Revenue and customer risk
Analysts say that Microsoft’s filing is driven largely by commercial considerations and the potential revenue impact of the restriction.
“Anthropic’s $19-20B run-rate revenue could lose 10-20% from federal and defense bans, including DoD contracts affecting Microsoft’s Azure integrations and joint deals. There is further risk for broader compliance hurdles in regulated sectors as well,” Pareekh Jain, principal analyst at Pareekh Consulting, said.
Some customer organizations and their procurement teams have already begun reassessing Anthropic and related vendors since the supply-chain risk label was applied, Greyhound Research chief analyst Sanchit Vir Gogia pointed out.
“Some prospective customers have paused negotiations while assessing the implications of the designation. Others have asked for stronger contractual protections, including broader termination rights, in case regulatory conditions change,” Gogia noted.
This is being largely driven, Gogia added, by the structure of the AI solutions market: “AI models are rarely used in isolation. They power developer tools, enterprise platforms, automation systems, and customer-facing applications. When the supplier of a core model receives a risk designation, organizations that depend on that technology may reconsider how much exposure they want.”
Ethical red lines at the center of the dispute
The filing also extends Microsoft’s full support to Anthropic’s ethical red lines, such as refusing to allow its frontier AI models to be used for mass domestic surveillance or in weapons systems. Those red lines have been the flashpoint of the disagreement between the AI startup and the DoD since February.
The disagreement culminated in the DoD sending Anthropic a letter last week officially declaring it a supply-chain risk — a designation usually reserved for entities seen as threats to US national security or interests.
Earlier this week, Anthropic challenged the official designation in two courts — one in Northern California and the other in New York — and sought emergency relief, calling the DoD’s actions “arbitrary, capricious, and an abuse of discretion.”
Amicus briefs could prove effective
Microsoft’s filing, known as an amicus brief, could give a US federal judge a credible argument that the impact of the supply-chain risk designation isn’t theoretical, according to Jain.
There are similar precedents where amicus briefs have been accepted by courts, Jain said, giving the example of the FTC versus Qualcomm case in 2019, where companies like MediaTek filed amicus briefs related to stay motions and appeals.
Another example, Jain added, was the Microsoft versus US case on data privacy in 2014-2016, which saw the tech industry unite via amicus briefs to prevent the government from overreaching into cloud architectures.
“It established the cloud neutrality concept that Microsoft is likely invoking in the Anthropic versus US DoD case as well, arguing that a service provider shouldn’t be penalized for how its software is architected. In Anthropic’s case, its safety guardrails,” Jain noted.
Last week, the Information and Technology Council, an industry group representing major technology companies including Amazon, Nvidia, Apple, and OpenAI, reportedly suggested in a letter to Defense Secretary Pete Hegseth that labeling Anthropic a supply-chain risk was an overreach.
Microsoft’s brief, submitted in the same Northern California court where Anthropic filed its petition, calls the DoD’s order unprecedented.
That should come as no surprise: Microsoft is not only an investor in Anthropic but also offers the startup’s frontier AI models through its own products.
Last week, after Anthropic received the official letter, Microsoft came out in support of the company and said that the AI startup’s products were still available as part of Microsoft’s offerings. Amazon did the same.
Anthropic CEO Dario Amodei sought to offer similar reassurance in a blog post, stating explicitly that the supply-chain risk label has no impact on other customers.
Industry rallies as amicus briefs mount
Microsoft is not the only entity to have filed an amicus brief in the case.
Separately, 37 top AI scientists and engineers from Google and OpenAI, acting in their personal capacity, filed their own amicus brief on Monday, calling the DoD’s action unprecedented and saying they share Anthropic’s ethical concerns about the use of AI models in weapons systems and for mass domestic surveillance.
Microsoft’s filing is yet to be accepted by the court.