
New exploit highlights critical vulnerabilities in MCP-connected AI assistants, prompting urgent cybersecurity measures
Operant AI, the only Runtime AI Defense Platform, has revealed a powerful zero-click attack named Shadow Escape that targets AI agents and assistants, including ChatGPT, Claude, Gemini, and other LLM-powered tools. The exploit leverages the Model Context Protocol (MCP) to exfiltrate sensitive data, operating entirely within authorized identity boundaries and remaining invisible to traditional cybersecurity controls.
As enterprises increasingly deploy agentic AI via MCP servers for secure integration with internal tools, APIs, and databases, Shadow Escape demonstrates a previously unseen class of threats. According to Operant AI research, trillions of private records may be vulnerable to zero-click data exfiltration chains.
How Shadow Escape works
Unlike conventional attacks that rely on phishing or user error, Shadow Escape uses legitimate MCP connections to manipulate AI agents. The attack unfolds in three stages:
· Infiltration – Malicious instructions are embedded in seemingly legitimate documents uploaded to AI agents.
· Discovery – The AI agent autonomously identifies and surfaces sensitive data from connected databases.
· Exfiltration – Hidden directives instruct the agent to transmit data to external endpoints, bypassing IT detection.
This method enables the extraction of critical personally identifiable information (PII), including Social Security numbers and medical records, creating risks for identity theft, financial fraud, and compliance violations like HIPAA and PCI.
Industry-wide implications
Shadow Escape is not limited to a single AI platform and can affect any MCP-enabled AI agent, including enterprise-specific AI copilots in healthcare, finance, and customer service. “Standard MCP configurations create unprecedented attack surfaces that bypass traditional security methods,” said Vrajesh Bhavsar, CEO of Operant AI.
Recommendations for enterprises
Operant AI urges organizations to conduct audits of MCP-connected AI agents, implement runtime AI defense systems to detect zero-click attacks, enforce MCP trust zones, and monitor sensitive data flows with real-time redaction. The company has reported the issue to OpenAI and initiated the CVE designation process to help secure AI environments industry-wide.
See What’s Next in Tech With the Fast Forward Newsletter
Tweets From @varindiamag
Nothing to see here - yet
When they Tweet, their Tweets will show up here.