13 results found

The Model Context Protocol (MCP), an open standard launched by Anthropic in late 2024, is rapidly gaining traction as the core communication method for AI agents. It gives agents a flexible framework for interacting with external data sources and tools, in contrast to traditional APIs designed for deterministic, developer-driven tasks. With major adoption by OpenAI and Google, MCP is shaping the future of autonomous AI workflows.
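For readers unfamiliar with the protocol: MCP is built on JSON-RPC 2.0, so an agent's request to a server is an ordinary JSON-RPC message. The sketch below shows the general shape of a tool-invocation request; the tool name `search_documents` and its arguments are hypothetical, not part of any real MCP server.

```python
import json

# Minimal sketch of MCP's wire format (JSON-RPC 2.0).
# "tools/call" is the MCP method for invoking a server-exposed tool;
# the tool name and arguments here are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_documents",  # hypothetical tool
        "arguments": {"query": "quarterly revenue"},
    },
}

# Serialize for transport (MCP commonly runs over stdio or HTTP).
wire_message = json.dumps(request)
print(wire_message)
```

The point of the standard is that any MCP-compliant client can send this same message shape to any MCP server, regardless of what the server connects to behind the scenes.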
Meta has postponed the release of its new foundational artificial intelligence model, code-named Avocado, from March to at least May 2026. The delay stems from internal tests indicating the model underperformed compared to leading AI models developed by rivals such as Google, OpenAI, and Anthropic. This setback comes despite Meta's substantial investment in the competitive AI landscape.

In a significant move that reverberated through the tech industry, artificial intelligence company Anthropic filed a lawsuit against the Trump administration on Monday, March 9, 2026. The lawsuit, lodged in a federal ...

AI firm Anthropic plans to challenge the DOD's recent "supply chain risk" designation in court, calling it "legally unsound." This follows a dispute over AI control, with Anthropic refusing use for mass surveillance or autonomous weapons, while the Pentagon seeks unrestricted access for lawful purposes. The designation could bar Anthropic from military contracts.

Pentagon Labels Anthropic a Supply-Chain Risk, Sparking Legal Battle
The Defense Department has formally designated American AI firm Anthropic as a "supply-chain risk," escalating a weeks-long dispute over the company's ...

Anthropic's Claude is being used by the Pentagon for critical intelligence and battle simulations, sparking controversy over its "red lines" for military use, even as consumer popularity soars.

President Trump banned federal agencies from using Anthropic's AI tools, citing the company's refusal to lift restrictions on military use. This clash over "all lawful use" versus Anthropic's ethical red lines (lethal autonomous weapons, mass surveillance) creates disruption for agencies and sets a precedent for AI ethics in government contracts.

The Pentagon has designated AI developer Anthropic as a "Supply-Chain Risk to National Security" after the company refused to allow its AI for mass domestic surveillance or autonomous weapons. This follows President Trump's directive to cease federal use of Anthropic products, which the company vows to challenge legally. OpenAI, initially supporting Anthropic's stance, swiftly secured a deal with the Pentagon to fill the void, claiming to uphold similar ethical principles.
The Pentagon is demanding access to Anthropic's AI technology and threatening to invoke the Defense Production Act if the company does not comply, according to Washington Post Technology. The move underscores escalating government interest in private-sector AI for national security and poses a significant challenge for the firm, raising questions of compelled technology sharing and the limits of government authority.

IBM experienced a $40 billion stock drop after Anthropic unveiled AI tools for COBOL translation. However, industry experts and IBM argue that this reaction stems from a misunderstanding: translating COBOL code is distinct from comprehensive mainframe modernization, which involves complex architectural redesign and ensuring critical system reliability. Enterprises are advised to approach new AI tools with caution, conducting pilots to assess actual ROI for modernization efforts.

Anthropic has accused Chinese AI labs DeepSeek, Moonshot AI, and MiniMax of ...

Creator Economy Shifts Beyond Ads, India's AI in Focus
The rapidly evolving creator economy is seeing a significant pivot away from traditional ad revenue, with leading creators establishing diverse business empires.
This article addresses the AI beliefs of Anthropic and its CEO, Dario Amodei, based strictly on the provided source information. It clarifies that the source, limited to nytimes.com and NYT Technology, offers no specific details on the topic; each section acknowledges the absence of direct information while contextualizing the source as a general news platform.