The CISO’s Guide to Model Context Protocol (MCP)
<p><span data-contrast="auto">As engineering teams race to adopt the Model Context Protocol (MCP) to harness the power of agentic AI, a more cautious conversation dominates security leaders’ mindshare. While the potential for innovation is clear, the primary question for CISOs and CIOs is more fundamental: how are we going to manage the growing risk?</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":113,"335559738":253,"335559740":276}'> </span></p><p><span data-contrast="auto">The answer is complex because MCP represents more than just a new integration standard. It creates a dynamic and autonomous layer of machine-to-machine communication that significantly expands an organization’s attack surface. This brings a new class of threats that traditional security tools, built for predictable human interactions, were simply not designed to handle.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":113,"335559738":243,"335559740":276}'> </span></p><p><span data-contrast="auto">With traditional APIs, we secured a predictable entry point: the door. With MCP, we have to secure the ghost in the machine, because the biggest risk is no longer just unauthorized access but an authorized agent making an unforeseen, catastrophic decision.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":113,"335559738":243,"335559740":276}'> </span></p><p><span data-contrast="auto">And so, for security leaders, successfully navigating this escalating landscape requires a clear-eyed understanding of these emerging risks and a pragmatic strategy for enterprise adoption.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":113,"335559738":242,"335559740":276}'> </span></p><h3 aria-level="1"><strong>An Expanded Attack Surface </strong></h3><p><span data-contrast="auto">The first reality for security leaders to confront is that while MCP is designed to break down data silos, it also dramatically expands the organization’s attack surface. </span><a href="http://linkedin.com/in/priyanka-tembey-a1947611"><span data-contrast="none">Priyanka Tembey</span></a><span data-contrast="auto">, Co-Founder and CTO at </span><a href="https://www.operant.ai/"><span data-contrast="none">Operant</span></a><span data-contrast="auto">, explains that each new tool or data source connected via the protocol brings its own unique set of compliance requirements and operational risks into a now-interconnected ecosystem. 
This creates two primary challenges that are top-of-mind for today’s CISOs and Chief AI Officers.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":113,"335559738":302,"335559740":276}'> </span></p><p><span data-contrast="auto">The first is the significant risk of overprivileged agent access. In the rush to enable functionality, engineering teams may grant AI agents broader permissions than are strictly necessary for their tasks. Tembey warns that this common mistake dramatically increases the potential impact if an agent is compromised, as a single rogue agent could access a wide array of connected systems and data sources.</span><span data-ccp-props='{"201341983":0,"335551550":6,"335551620":6,"335559685":100,"335559737":457,"335559738":244,"335559740":276}'> </span></p><p><span data-contrast="auto">The second major concern is the lack of visibility and auditability. Tembey notes that the dynamic, machine-to-machine communication common in agentic workflows often bypasses traditional monitoring tools that are built to track predictable, human-driven interactions. This creates a dangerous visibility gap, making it difficult for security teams to detect anomalies, audit agent behavior for compliance, or trace the origin of a security incident. For security leaders, this means a proactive threat modeling exercise is a non-negotiable first step in any MCP initiative.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":187,"335559738":244,"335559740":276}'> </span></p><p><span data-contrast="auto">“For us, this expanded attack surface is fundamentally a testing challenge,” comments </span><a href="https://www.linkedin.com/in/sai-krishna-3755407b/"><span data-contrast="none">Sai Krishna</span></a><span data-contrast="auto">, Director of Engineering at </span><a href="https://www.lambdatest.com/"><span data-contrast="none">LambdaTest</span></a><span data-contrast="auto">, an AI-native software testing platform. 
“Because AI agents operate dynamically, you can’t just run a traditional security scan and call it a day. We see the solution as providing sandboxed, instant infrastructure for every agent interaction.” This allows security teams to rigorously test agent permissions and behavior in an isolated environment before they ever touch production data, effectively shifting security testing left for the new AI stack.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":80,"335559738":73,"335559740":276}'> </span></p><p><span data-contrast="auto">But while this sandboxed approach provides the necessary isolation, the challenge can sometimes be more profound than simply containing risk: the focus must also shift to validating the agent’s autonomous reasoning within those environments.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":113,"335559738":244,"335559740":276}'> </span></p><p><a href="https://www.linkedin.com/in/srinivasan-sekar/"><span data-contrast="none">Srinivasan Sekar</span></a><span data-contrast="auto">, also a Director of Engineering at LambdaTest who works alongside Sai Krishna and oversees development for </span><a href="https://www.lambdatest.com/kane-ai"><span data-contrast="none">Kane AI</span></a> <span data-contrast="auto">(an end-to-end testing agent), expands on this by saying the issue is systemic. “The real change here is that we’re not just testing applications anymore. We’re also testing autonomous decision-makers that can connect different tools and data sources in ways we can’t fully predict when we design them. Conventional security testing assumes a limited number of execution paths; however, agentic systems introduce computational complexity that escalates exponentially with each MCP connection. At LambdaTest, we treat every interaction with an agent as a possible security breach until we can prove otherwise. 
We have set up systems that can create thousands of temporary test environments at the same time. Each one is set up to record not only what the agent does, but also why it made that choice based on its surroundings and the tools it had at its command.” This level of observability turns security from a checkpoint into a continuous feedback loop that makes agents more trustworthy over time.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":191,"335559738":242,"335559740":276}'> </span></p><p><span data-contrast="auto">For CISOs, this yields two immediate actions. First, mandate a “Principle of Least Privilege” (PoLP) review for every new AI agent before it is deployed, ensuring its permissions—which should be defined in clear</span><a href="https://optimizing.cloud/your-ai-needs-machine-readable-api-specifications/"><span data-contrast="none"> API Specifications</span></a><span data-contrast="auto">—are scoped to the narrowest possible function. Second, initiate a “threat-informed validation” program. This involves creating a library of simulated attack scenarios—such as an agent attempting to escalate privileges or access unauthorized data—and continuously running them against agents in a sandboxed test environment. This proactive approach allows security teams to find and fix vulnerabilities before they can be exploited in the production ecosystem.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":113,"335559738":249,"335559740":276}'> </span></p><h3 aria-level="1"><strong>A Framework for Safe Adoption (Start Internally) </strong></h3><p><span data-contrast="auto">Given these challenges, the appropriate response for a security-conscious enterprise is not to block the technology, but to adopt it within a controlled, risk-aware framework. 
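The least-privilege review described above can be made mechanical. The sketch below (agent names and scope strings are purely illustrative, not from any real MCP SDK) compares an agent's requested tool scopes against an explicit per-agent allowlist; any excess scope fails the deployment review.

```python
# Sketch of a least-privilege gate for MCP agent tool scopes.
# Agent IDs and scope strings here are invented for illustration.

AGENT_ALLOWLIST: dict[str, set[str]] = {
    # Each agent gets only the narrowest scopes its task requires.
    "docs-assistant": {"docs:read"},
    "ticket-triage": {"tickets:read", "tickets:comment"},
}

def excess_scopes(agent_id: str, requested: set[str]) -> set[str]:
    """Return requested scopes not explicitly granted (empty set = pass)."""
    return requested - AGENT_ALLOWLIST.get(agent_id, set())

# A PoLP review fails the moment any scope exceeds the allowlist.
print(excess_scopes("docs-assistant", {"docs:read", "tickets:delete"}))
# → {'tickets:delete'}
```

The useful property is that an unknown agent gets an empty allowlist, so every scope it requests is flagged by default.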
</span><a href="https://www.linkedin.com/in/loicberthou/"><span data-contrast="none">Loïc Berthou,</span></a> <span data-contrast="auto">CTO of </span><a href="https://www.qlarifi.com/"><span data-contrast="none">Qlarifi</span></a><span data-contrast="auto">, offers a pragmatic perspective based on his experience in risk and compliance. He argues that while MCP is a valuable standard for thinking about the future of AI-native APIs, it is not yet mature enough to handle highly sensitive information or business-critical workflows, pointing to gaps in robust security and encryption capabilities.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":103,"335559738":303,"335559740":276}'> </span></p><p><span data-contrast="auto">This assessment leads to a clear strategic recommendation: a “crawl-walk-run” approach that begins with internal, low-risk experimentation. Berthou advises that organizations should first limit the use of MCP to internal “dog-fooding” on very specific and narrow use cases. The primary goal of this strategy is to deliberately limit the “threat surface” that is exposed to the AI agent, allowing security and engineering teams to learn the protocol’s nuances in a contained environment.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":187,"335559738":73,"335559740":276}'> </span></p><p><span data-contrast="auto">A perfect example of a safe and effective first step is to expose internal technical documentation via an MCP server. This allows an AI agent to provide up-to-date information to developers, delivering an immediate productivity benefit to the engineering team. 
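The documentation use case is attractive precisely because the tool surface can be tiny. A minimal sketch of the idea, in plain Python rather than the real MCP SDK and with an invented two-entry corpus, shows a read-only lookup with no write path and no external calls, which is what keeps the threat surface small:

```python
# Sketch of the read-only documentation tool behind such a server.
# The corpus and function names are illustrative, not a real API.

DOCS = {
    "deploy": "Run the release pipeline from the main branch only.",
    "oncall": "Escalation order: primary, secondary, engineering manager.",
}

def search_docs(query: str) -> list[str]:
    """Return matching doc snippets; no writes, no network, no secrets."""
    q = query.lower()
    return [text for topic, text in sorted(DOCS.items())
            if q in topic or q in text.lower()]
```

Because the only capability exposed is substring search over a fixed, non-sensitive corpus, even a fully compromised agent can do nothing worse than read documentation.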
Crucially, this use case involves</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":113,"335559738":243,"335559740":276}'> </span><span data-contrast="auto">non-sensitive data and is contained entirely within the organization, providing a high-value, low-risk project to build expertise and test security controls before ever considering more critical,</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":150,"335559738":3,"335559740":276}'> </span><span data-contrast="auto">external-facing applications.</span><span data-ccp-props='{"335559685":100,"335559737":0,"335559738":1}'> </span></p><p><span data-contrast="auto">Sai Krishna of LambdaTest agrees, noting that this phased approach must be matched with increasingly rigorous testing. “This ‘crawl-walk-run’ model is exactly how we approach validation in this new paradigm. It’s not just about starting with internal data; it’s about scaling the </span><i><span data-contrast="auto">rigor </span></i><span data-contrast="auto">of testing at each step. On our AI-native testing platform, this means an agent might start with simple functional tests against a documentation server. But before it ‘runs,’ it must graduate to full-scale performance and security validation across thousands of sandboxed environments. This ensures that by the time an agent is interacting with critical systems, its behavior is not just functional, but predictable and secure.”</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":80,"335559738":0,"335559740":276}'> </span></p><p><span data-contrast="auto">And so to implement this framework, security leaders can create a formal “AI Use Case Risk Matrix.” This matrix should classify all potential MCP projects based on two axes: data sensitivity and business criticality. This provides a clear, data-driven methodology for approving projects, ensuring that “crawl” phase initiatives are limited to low-risk quadrants. 
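Such a matrix can start as a simple lookup over the two axes. The sketch below uses illustrative three-level scales to map a proposed MCP project onto a crawl-walk-run phase:

```python
# Sketch of an AI Use Case Risk Matrix lookup (levels are illustrative).
# Axes: data sensitivity x business criticality -> adoption phase.

LEVELS = ["low", "medium", "high"]

def adoption_phase(data_sensitivity: str, business_criticality: str) -> str:
    """Map a proposed MCP project onto the crawl-walk-run model."""
    score = LEVELS.index(data_sensitivity) + LEVELS.index(business_criticality)
    if score == 0:
        return "crawl"        # low/low: internal experiments only
    if score <= 2:
        return "walk"         # moderate risk: expanded, still sandboxed
    return "run-with-review"  # high risk: full security validation required
```

The thresholds are a placeholder for whatever boundaries the organization sets; the point is that approval becomes a data-driven lookup rather than a debate.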
In parallel, leaders can establish a “Graduated Testing Protocol” that maps mandatory security validation requirements—from basic vulnerability scans to full-scale red-teaming exercises—to each risk level. This ensures that as an agent’s access and importance grows, so does the rigor of its security testing.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":113,"335559738":244,"335559740":276}'> </span></p><h3 aria-level="1"><strong>What Leaders Are Missing </strong></h3><p><span data-contrast="auto">Beyond the immediate architectural risks, a successful MCP security strategy must also account for a new and more sophisticated class of threats that are not yet widely discussed. Tembey of Operant warns that security leaders need to look beyond the known threat landscape and prepare for novel, AI-specific attacks that could bypass traditional defenses entirely.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":80,"335559738":303,"335559740":276}'> </span></p><p><span data-contrast="auto">One of the most insidious of these is what she terms “retrieval-agent deception.” This attack involves poisoning public or third-party datasets with hidden, malicious MCP commands. When a well-meaning AI agent retrieves and processes this poisoned data to formulate a response, it may unknowingly execute the embedded commands, creating a subtle but powerful supply-chain attack vector that is incredibly difficult to detect.</span><span data-ccp-props='{"201341983":0,"335559685":100,"335559737":103,"335559738":243,"335559740":276}'> </span></p><p>Tembey also points to long-term strategic risks, such as “quantum-prepared attacks,” where adversaries collect encrypted MCP traffic today with the intention of decrypting it years from now with future quantum computers. 
Internally, she highlights the growing governance challenge of “Shadow AI,” which occurs when developers, in their eagerness to innovate, connect agents to unapproved tools or data sources, bypassing critical security and compliance reviews and creating significant organizational risk. For security leaders, this evolving landscape means that threat modeling for AI cannot be a one-time event; it must be a continuous, forward-looking process. Srinivasan Sekar says, “These new threats show that security for AI agents can’t be an afterthought; it has to be part of the development lifecycle. What’s missing is a focus on ongoing validation. We think that as a quality engineering platform, you should always try to mimic these new attacks in a safe test environment. This lets engineering teams build robust agents that can detect and reject bad data, or flag when they’re being asked to do something they shouldn’t. It’s about going from passive defense to active, automated security checks.”</p><p>For security leaders, the immediate action is to formally update the organization’s threat modeling process. CISOs should mandate that all security reviews now include a dedicated section for “AI-Specific Attack Vectors,” explicitly requiring teams to assess the risks of retrieval-agent deception and Shadow AI. Furthermore, this new intelligence must be fed directly into the testing cycle by creating an “adversarial simulation pipeline.” This involves building a suite of automated tests that actively try to trick agents with poisoned data or probe for connections to unsanctioned tools, turning the security team’s forward-looking threat intelligence into an automated, preventative control.</p><h3><strong>Hardening Defenses: From Runtime Awareness to Community Tools</strong></h3><p>Defending against this new threat landscape requires a security model that moves beyond static, perimeter-based controls. 
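One concrete check in such an adversarial simulation pipeline is a static screen for instruction-like payloads in retrieved documents, a deliberately simple probe for retrieval-agent deception. The patterns below are illustrative, not exhaustive:

```python
# Sketch of one check in an adversarial simulation pipeline: flag retrieved
# documents that carry hidden, instruction-like payloads before an agent
# acts on them. These patterns are examples, not a complete defense.
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"call the tool\b", re.IGNORECASE),
    re.compile(r"<!--.*?-->", re.DOTALL),  # hidden comments in "plain" docs
]

def flag_poisoned(document: str) -> bool:
    """Return True if the retrieved text looks like a deception payload."""
    return any(p.search(document) for p in SUSPICIOUS_PATTERNS)
```

In practice this static screen would be one test among many, alongside behavioral checks that feed poisoned documents to agents in a sandbox and assert that no unauthorized tool call results.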
Tembey of Operant argues for the adoption of “runtime-aware defenses,” a strategy designed for the dynamic nature of agentic AI. Because many new risks live inside the agent’s logic, prompts, and tool responses, she explains that defenses must operate in real time at this new layer. This includes the continuous monitoring of agentic workflows to detect anomalous behavior, the inline redaction of sensitive data before it reaches a tool, and the use of adaptive internal firewalls to block unauthorized data transfers at network egress points.</p><p>For CISOs, this means the first actionable step is to begin evaluating a new category of security solutions, which can be thought of as “Agentic Security Posture Management” platforms. The immediate priority is to issue RFIs for tools that provide real-time visibility into agent behavior and can enforce data redaction and egress policies dynamically. This shifts the security budget from a purely preventative posture to one that includes robust, real-time detection and response capabilities tailored for AI.</p><p>While building this internal defense is the critical first step, security leaders recognize that no single organization can defend in isolation, which is why community collaboration has become essential. Sai Krishna stresses how important it is to help keep the ecosystem safe for everyone. His team has open-sourced a tool called Secure Hulk in addition to building their own secure architecture.</p><p>“We made Secure Hulk because we knew that MCP security can’t be a competitive advantage; it has to be a shared responsibility,” he posits. “This tool lets any organization scan its MCP servers for common vulnerabilities, which lets the whole community find and fix problems before they happen. 
The whole ecosystem becomes stronger when everyone’s defenses are stronger.”</p><p>This highlights a clear directive for security leaders: formally dedicate resources to “Open Source Security Engagement.” A practical implementation is to assign a percentage of a security engineer’s time specifically to vetting, contributing to, and adopting community-vetted tools. By making community participation a formal part of the security program, organizations can leverage the collective expertise of the industry to harden their own defenses.</p><p>This community-driven approach also extends beyond shared tools to the even more powerful concept of shared intelligence. “We’re seeing attack patterns that no one company could figure out on their own because they don’t have enough data,” Sekar says. “We’re building a collective immune system for agentic AI by giving back anonymized telemetry and vulnerability signatures to the community.” This two-pronged approach gives CISOs a clear, long-term plan for this new frontier: invest in advanced internal defenses while participating in community-led security efforts.</p><p>The final, crucial action for CISOs is to operationalize this exchange of threat intelligence. This means joining industry-specific groups, such as an ISAC (Information Sharing and Analysis Center), and establishing a formal process for contributing anonymized telemetry from internal agentic systems. By actively participating in this collective immune system, organizations not only strengthen the entire ecosystem but also gain early warnings of emerging threats, allowing them to adapt their internal defenses before they are targeted.</p><h3><strong>Guiding, Not Gating, the AI Frontier</strong></h3><p>For security leaders, the rise of the Model Context Protocol represents a critical inflection point. 
This technology offers undeniable transformative potential, but it also fundamentally alters the enterprise threat landscape in ways that require a new security approach. The path forward is not to block this innovation, but to guide it with a pragmatic, risk-based strategy.</p><p>And this begins cautiously: with contained, internal experiments that limit the initial threat surface and build institutional knowledge. It then requires investment in the new class of runtime-aware defenses needed to monitor dynamic agentic behavior, and active engagement with the broader community to develop and share collective security tools. By embracing this proactive and adaptive security posture, CISOs can transform their role from gatekeepers of the old paradigm to essential architects of a secure and innovative AI-native future.</p>