6 results found

In a significant move that reverberated through the tech industry, artificial intelligence company Anthropic filed a lawsuit against the Trump administration on Monday, March 9, 2026. The lawsuit, lodged in a federal

AI firm Anthropic plans to challenge the DOD's recent "supply chain risk" designation in court, calling it "legally unsound." This follows a dispute over AI control: Anthropic has refused to allow its AI to be used for mass surveillance or autonomous weapons, while the Pentagon seeks unrestricted access for lawful purposes. The designation could bar Anthropic from military contracts.

Pentagon Labels Anthropic a Supply-Chain Risk, Sparking Legal Battle The Defense Department has formally designated American AI firm Anthropic as a "supply-chain risk," escalating a weeks-long dispute over the company's

Anthropic's Claude is being used by the Pentagon for critical intelligence and battle simulations, sparking controversy over its "red lines" for military use, even as consumer popularity soars.

The Pentagon has designated AI developer Anthropic as a "Supply-Chain Risk to National Security" after the company refused to allow its AI for mass domestic surveillance or autonomous weapons. This follows President Trump's directive to cease federal use of Anthropic products, which the company vows to challenge legally. OpenAI, initially supporting Anthropic's stance, swiftly secured a deal with the Pentagon to fill the void, claiming to uphold similar ethical principles.

The Pentagon is demanding access to Anthropic's AI technology and threatening to invoke the Defense Production Act if the company does not comply, according to the Washington Post's technology desk. This move highlights escalating government interest in private-sector AI for national security and poses a significant challenge for the tech firm, bringing issues of forced technology sharing and government authority to the forefront.