Enterprise MCP adoption is outpacing security controls
Enterprises are rapidly integrating Model Context Protocol (MCP) and deploying autonomous AI agents, yet security frameworks are struggling to keep pace, creating a significant new attack surface. This alarming trend, highlighted by industry leaders at a recent VentureBeat AI Impact Series event, suggests that existing human-centric security models are ill-equipped to govern AI systems that operate with unprecedented access and autonomy, potentially opening doors to serious data breaches.
AI Agents Introduce Unprecedented Attack Vectors
AI agents now command more access and connections within enterprise systems than any other software, making them the largest attack surface security teams have ever confronted. Spiros Xanthos, founder and CEO of Resolve AI, warned that if this new vector is exploited, it could lead to data breaches or worse. Traditional security frameworks, designed for human interactions, lack an agreed-upon construct for autonomous AI agents with their own identities and personas.
Jon Aniano, SVP of product and CRM applications at Zendesk, described the current situation as the "wild, wild West," with agentic AI advancing faster than enterprises can establish guardrails. The lack of a defined technical agent-to-agent protocol further complicates efforts to balance user expectations with platform safety.
MCP's Permissive Nature Magnifies Vulnerabilities
While MCP servers simplify integration between agents, tools, and data, they are inherently "extremely permissive," according to Aniano. He contended that MCP can be even more problematic than traditional APIs, which typically have more robust controls. As enterprises move towards potentially hundreds of agents, each with its own identity and access, managing this complex matrix becomes a daunting task.
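The permissiveness Aniano describes can be made concrete. The sketch below is a hypothetical illustration, not part of any real MCP SDK: all names (`TOOL_REGISTRY`, `AGENT_SCOPES`, `call_tool`) are invented. The idea is to invert the default, so that no agent can reach a tool it has not been explicitly granted.

```python
# Hypothetical sketch: a default-deny, per-agent allowlist placed in front
# of an MCP-style tool registry. All names here are illustrative.

TOOL_REGISTRY = {
    "tickets.read": lambda ticket_id: {"id": ticket_id, "status": "open"},
    "tickets.delete": lambda ticket_id: {"id": ticket_id, "deleted": True},
}

# Default-deny: an agent may only invoke tools it was explicitly granted.
AGENT_SCOPES = {
    "support-triage-agent": {"tickets.read"},
}

def call_tool(agent_id: str, tool: str, **kwargs):
    if tool not in AGENT_SCOPES.get(agent_id, set()):
        raise PermissionError(f"{agent_id} may not call {tool}")
    return TOOL_REGISTRY[tool](**kwargs)
```

Flipping the default from "every connected tool is reachable" to "nothing is reachable until granted" is among the cheapest mitigations for the broad access the panelists warn about.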
Even as companies like Resolve AI develop autonomous agents for critical functions like site reliability engineering (SRE), Xanthos acknowledged an industry-wide void in security frameworks for these systems. This places the burden of defining agent restrictions on builders, who must earn customer trust in those decisions. Existing security tools with fine-grained access controls, such as Splunk's index-level permissions, offer some promise but are generally considered insufficient for an era of widespread agent deployment.
Untangling AI's Role in Authentication and Accountability
AI's increasing involvement in customer interactions, particularly within CRM platforms like Zendesk, introduces complex audit trails and accountability dilemmas. Aniano questioned who is at fault when an AI, instructed by a human, takes an incorrect action, especially in scenarios involving multiple AI components and human agents.
Of particular concern is AI's role in authentication tasks, such as processing one-time passwords (OTP) or two-step verification methods. The risk of an AI mis-authenticating or misidentifying a user could lead to sensitive data leakage or create critical entry points for attackers. While many highly regulated industries still mandate human involvement in authentication, the industry is exploring a future where specialized agents might perform human-level authentication interactions.
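The human-involvement mandate for authentication can be enforced mechanically by classifying actions and routing anything authentication-related to a human reviewer rather than letting the agent execute it. This is a hedged sketch with invented names, not a description of any vendor's system.

```python
# Hypothetical sketch: actions touching authentication (OTP, 2FA, resets)
# are never executed by the agent directly; they are routed to a human.
# The action names and the router are illustrative assumptions.

AUTH_ACTIONS = {"send_otp", "verify_otp", "reset_password"}

def route_action(action: str) -> str:
    """Return who must execute this action: 'agent' or 'human_review'."""
    return "human_review" if action in AUTH_ACTIONS else "agent"
```

A deny-by-category rule like this is coarse, but it matches the regulated-industry posture described above: authentication stays with humans until specialized agents are trusted to do it.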
Enterprises Hesitate on Full Agent Autonomy
Despite the clear trajectory toward more autonomous systems, many enterprises remain cautious about granting AI agents full workflow authority without human review. This "good fear," as Xanthos described it, is a significant factor in holding back widespread standing authorization for agents.
Resolve AI has begun offering agents standing authorization for "generally safe" coding tasks, gradually expanding to other low-risk scenarios. However, both experts agreed that high-risk situations, where AI mistakes could "mutate the state of the production system," will always require stringent oversight. The rapid pace of this technological shift, likened to mobile adoption, underscores the urgent need for a collective industry response to these security challenges.
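The standing-authorization pattern described above can be sketched as a risk-tier gate: pre-approved, low-risk actions run unattended, while anything that could mutate production state always blocks on human approval. All names here are assumptions for illustration, not Resolve AI's actual implementation.

```python
# Hypothetical sketch of a risk-tier gate. "mutates_state" marks actions
# that could change a production system and therefore always need review;
# everything else is auto-approved only if on the standing-auth list.

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    mutates_state: bool

STANDING_AUTH = {"run_linter", "read_logs"}  # pre-approved, low-risk

def needs_human_approval(action: Action) -> bool:
    if action.mutates_state:
        return True  # never auto-approve state mutation
    return action.name not in STANDING_AUTH
```

Note the asymmetry: an unknown action defaults to review even when it claims to be read-only, which keeps the gate safe as new tools are added.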
Immediate Steps for Bridging the Security Gap
While comprehensive solutions are still evolving, enterprises can take interim measures using existing tools. Xanthos pointed to capabilities like Splunk's fine-grained, index-level access controls as a way to manage agent permissions. Zendesk's approach offers a practical blueprint, utilizing declaratively designed API calls with explicitly sanctioned actions, strict access and scope limits, and mandatory human review before expanding agent authorizations.
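The "mandatory human review before expanding agent authorizations" step might look like the following policy check, where widening an agent's sanctioned-action set is rejected unless the change carries a review record. Everything here is an invented illustration, not Zendesk's actual system.

```python
# Hypothetical sketch: a declarative policy is a set of sanctioned action
# names; any widening of that set requires a named human reviewer.
# Function and field names are invented for this example.

def approve_policy_change(current: set, proposed: set, reviewed_by=None) -> set:
    widened = proposed - current
    if widened and not reviewed_by:
        raise ValueError(
            f"expanding scope {sorted(widened)} requires human review"
        )
    return proposed
```

Narrowing a policy needs no reviewer under this rule, which matches the asymmetric caution in "widening the aperture" only after the gates are checked.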
This principle, described by Aniano as "always checking those gates and seeing how we can widen the aperture," emphasizes a cautious, validated approach to expanding agent permissions rather than granting broad standing authorization prematurely. This incremental strategy is crucial as the industry navigates the complexities of securing an increasingly agent-driven enterprise environment.
Q: Why is MCP adoption making security worse for enterprises?
A: MCP simplifies integration between AI agents, tools, and data, but its design is often "extremely permissive," meaning agents can have broad access without sufficient granular controls. This creates a larger, less manageable attack surface compared to traditional APIs with more established security protocols.
Q: What are the biggest security concerns with AI agents taking over authentication?
A: When AI agents handle tasks like sending and processing one-time passwords or other multi-factor authentication methods, there's a significant risk of mis-authentication or misidentification. This could lead to unauthorized access, sensitive data leakage, or open pathways for attackers to compromise systems.
Q: What immediate steps can security teams take to address these risks?
A: Security teams can implement fine-grained access controls where available, such as index-level access in tools like Splunk, to limit agent permissions. They should also adopt strict policies for agent interactions, ensuring API calls are declaratively designed with explicitly sanctioned actions, and maintain human oversight and review before expanding agent authorizations.