Microsoft has turned up the legal heat on the Pentagon, filing a brief in San Francisco federal court as Anthropic’s battle over AI safety restrictions reaches a critical stage. The brief calls for a temporary restraining order against the Defense Department’s supply-chain risk designation, arguing that allowing it to stand would cause immediate harm to the technology networks supporting national defense. Amazon, Google, Apple, and OpenAI have also filed in support of Anthropic, intensifying the legal pressure on the government.
The dispute erupted when the Pentagon formally notified Anthropic of the supply-chain risk designation after a $200 million contract negotiation collapsed. Anthropic had refused to allow its Claude AI to be used for mass surveillance or autonomous lethal weapons, prompting Defense Secretary Pete Hegseth to apply the designation. The company responded with simultaneous lawsuits in California and Washington, DC.
Microsoft’s intervention is grounded in its direct integration of Anthropic’s AI into military systems and its partnership in the Pentagon’s $9 billion cloud computing contract. Additional federal agreements spanning defense, intelligence, and civilian agencies deepen the company’s stake in the dispute. Microsoft has publicly called for a collaborative approach in which government and industry jointly define responsible standards for AI in national security.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of retaliation for publicly advocating responsible AI development, violating its First Amendment rights. The company disclosed that it does not believe Claude is currently safe or reliable enough for lethal autonomous operations. The Pentagon’s technology chief publicly ruled out renegotiation, escalating the confrontation.
Congressional Democrats have separately written to the Pentagon demanding to know whether AI was used in a strike in Iran that reportedly killed more than 175 civilians at a school; their formal inquiries focus on AI targeting tools and human-oversight processes. Together, Microsoft’s legal pressure, the industry coalition, and congressional scrutiny are bringing Anthropic’s AI safety battle to a head in the nation’s courts and legislature.