
President Trump banned federal agencies from using Anthropic's AI tools, citing the company's refusal to lift restrictions on military use. This clash between the government's "all lawful use" requirement and Anthropic's ethical red lines (no lethal autonomous weapons or mass surveillance) disrupts agency operations and sets a precedent for AI ethics in government contracts.

The Pentagon has designated AI developer Anthropic a "Supply-Chain Risk to National Security" after the company refused to allow its AI to be used for mass domestic surveillance or in autonomous weapons. This follows President Trump's directive to cease federal use of Anthropic products, which the company vows to challenge in court. OpenAI, after initially supporting Anthropic's stance, swiftly secured a deal with the Pentagon to fill the void, claiming to uphold similar ethical principles.
The Pentagon is demanding access to Anthropic's AI technology and threatening to invoke the Defense Production Act if the company does not comply, according to Washington Post Technology. The move reflects escalating government interest in private-sector AI for national security and poses a significant challenge for the firm, raising questions about compelled technology sharing and the limits of government authority.

IBM experienced a $40 billion stock drop after Anthropic unveiled AI tools for COBOL translation. However, industry experts and IBM argue that this reaction stems from a misunderstanding: translating COBOL code is distinct from comprehensive mainframe modernization, which involves complex architectural redesign and ensuring critical system reliability. Enterprises are advised to approach new AI tools with caution, conducting pilots to assess actual ROI for modernization efforts.

Anthropic has accused Chinese AI labs DeepSeek, Moonshot AI, and MiniMax of …

Creator Economy Shifts Beyond Ads, India's AI in Focus: The rapidly evolving creator economy is seeing a significant pivot away from traditional ad revenue, with leading creators building diverse business empires.
This article addresses the AI beliefs of Anthropic and its CEO, Dario Amodei, strictly based on the provided source information. It clarifies that the source, limited to nytimes.com and "NYT Technology," offers no specific details on the topic; each section acknowledges the absence of direct information while noting that the source is a general news platform.