News

F5 Strengthens, Scales & Sustains AI Security With Integrated Runtime Protection

  • None--securityboulevard.com
  • published date: 2026-01-26 00:00:00 UTC

None

<p><span style="font-weight: 400;">Enterprise technology is entering a state of deeper amalgamation and flux. Not because data integration suddenly became important (spoiler alert, it is… and we don’t have enough of it), not because we’re fixing <a href="https://securityboulevard.com/2024/05/navigating-data-sovereignty-and-compliance-with-data-federation/" target="_blank" rel="noopener">broken data sovereignty issues</a> to address the new world order for information governance (same spoiler, mostly)… and not because the humble application programming interface has reached a new level of de facto acceptance in the containerised world of Kubernetes and cloud (again, same reveal there).</span></p><p><span style="font-weight: 400;">No, well okay yes… software amalgamation is happening for all of those reasons, but there is also a new coming together as a result of the intersection point between AI models and data, APIs, data repositories, other applications and components, plus of course users themselves.</span></p><p><span style="font-weight: 400;">F5 says it is fully aware of these trends and is now extending its core technology proposition to accommodate the way computing entities are interacting. The F5 Application Delivery and Security Platform (ADSP) platform is designed to deliver and secure every app, every API, anywhere: on-premises, in the cloud, at the edge and across hybrid, multicloud environments. As such, the company is now announcing </span><a href="https://www.f5.com/products/ai-guardrails" target="_blank" rel="noopener"><span style="font-weight: 400;">F5 AI Guardrails</span></a><span style="font-weight: 400;"> and </span><a href="https://www.f5.com/products/ai-red-team" target="_blank" rel="noopener"><span style="font-weight: 400;">F5 AI Red Team</span></a><span style="font-weight: 400;">.</span></p><h3><strong>Guardrails: Ready-Baked or Custom-Built</strong></h3><p><span style="font-weight: 400;">These services have been engineered to secure mission-critical enterprise AI systems with what F5 insists is a comprehensive end-to-end “lifecycle approach to AI runtime security” today. This means an enhanced set of abilities to connect and protect AI agents with both out-of-the-box and custom-built guardrails. </span></p><p><span style="font-weight: 400;"> </span><span style="font-weight: 400;">But why both?</span></p><p><span style="font-weight: 400;">In some cases, out-of-the-box guardrails work because a deployment is running standardised cloud (and other)  platform technologies in a narrowly enough defined use case to apply controls that would fit any other similar codebase, dataset and workflow.</span></p><p><span style="font-weight: 400;">Perhaps more commonly, these more pioneering security offerings are needed to align with customer needs for flexible deployment i.e. to span model-agnostic protection where the deployment itself is more composable and complex… and where teams need the ability to tailor and adapt AI security policies in real time. Crucially, this level is also where software engineering units need to be able to apply controls at the application layer, where AI interactions occur, hence our initial new flux and amalgam emphasis. </span></p><p><span style="font-weight: 400;">F5 AI Guardrails and F5 AI Red Team are already deployed at leading Fortune 500 enterprises across multiple industries globally, including in highly regulated financial services and healthcare organizations. </span></p><h3><strong>Data Leaks &amp; Unpredictable Models</strong></h3><p><span style="font-weight: 400;">“Traditional enterprise governance cannot keep up with the velocity of AI,” said Kunal Anand, chief product officer at F5. “When policy lags adoption, you get data leaks and unpredictable model behaviour. Organisations need defences that are as dynamic as the models themselves. F5 AI Guardrails secures the traffic in real time, turning a black box into a transparent system, while F5 AI Red Team proactively finds vulnerabilities before they reach production. This allows organisations to stop fearing risk and start shipping apps and features with confidence.” </span></p><p><span style="font-weight: 400;">As enterprises accelerate AI adoption across internal workflows and mission-critical decision-making, the risk landscape is rapidly shifting. Anand suggests that organisations now grapple not only with external attackers, but also adversarial manipulation of models, data leakage, unpredictable user interactions and growing compliance obligations. </span></p><p><span style="font-weight: 400;">By pairing F5 AI Guardrails and F5 AI Red Team with traditional infrastructure protection – including API security, web application firewalls and DDoS defences, enterprises can secure AI systems alongside existing applications, improving visibility and policy consistency without relying on fragmented point solutions.</span></p><h3>Transforming Risk Into Confident AI Deployment<span style="font-weight: 400;"> </span></h3><p><span style="font-weight: 400;">“As organisations race to operationalise AI, many security tools address only fragments of the rapidly expanding attack surface. F5 is one of the first vendors delivering a complete AI security solution – combining real-time runtime defences with offensive security testing and pre-built attack patterns to help organisations deploy AI with confidence. Doing so requires addressing the risks inherent in how AI systems operate in practice, where models vary widely in capability and behavior, and also interact with sensitive data, users, APIs, and other systems in ways legacy tools weren’t built to manage,” noted Anand and team.</span></p><p><span style="font-weight: 400;">As described, F5 AI Guardrails provides a “model-agnostic runtime security layer” designed to protect every AI model, app and agent across every cloud and deployment environment with consistent policy enforcement. </span></p><p><span style="font-weight: 400;">As the number of models grows into the millions, the company says that AI Guardrails delivers consistent protection against adversarial threats such as prompt injection and jailbreak attacks, prevents sensitive data leakage… and it enforces corporate and regulatory obligations, including GDPR and the EU AI Act. Uniquely, AI Guardrails also delivers in-depth observability and auditability of AI inputs and outputs so teams can see not just what the model did, but why it did it, which is a core need for governance and compliance in regulated industries. </span></p><h3>Continuous Sustained Assurance</h3><p><span style="font-weight: 400;">Complementing runtime protection, F5 AI Red Team delivers scalable, automated adversarial testing that simulates both common and obscure threat vectors, powered by what is said to be a preeminent AI vulnerability database – adding over 10,000 new attack techniques every month as real-world threats evolve. AI Red Team reveals where models can provide dangerous or unpredictable outputs, and its insights feed directly back into AI Guardrails policies so defenses evolve as threats and models themselves change. </span></p><p><span style="font-weight: 400;">Together, F5 is telling us that AI Guardrails and AI Red Team establish a continuous AI security feedback loop: proactive assurance, adaptive runtime enforcement, centralised governance and ongoing improvement. </span></p><div class="spu-placeholder" style="display:none"></div><div class="addtoany_share_save_container addtoany_content addtoany_content_bottom"><div class="a2a_kit a2a_kit_size_20 addtoany_list" data-a2a-url="https://securityboulevard.com/2026/01/f5-strengthens-scales-sustains-ai-security-with-integrated-runtime-protection/" data-a2a-title="F5 Strengthens, Scales &amp; Sustains AI Security With Integrated Runtime Protection "><a class="a2a_button_twitter" href="https://www.addtoany.com/add_to/twitter?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F01%2Ff5-strengthens-scales-sustains-ai-security-with-integrated-runtime-protection%2F&amp;linkname=F5%20Strengthens%2C%20Scales%20%26%20Sustains%20AI%20Security%20With%20Integrated%20Runtime%20Protection%C2%A0" title="Twitter" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_linkedin" href="https://www.addtoany.com/add_to/linkedin?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F01%2Ff5-strengthens-scales-sustains-ai-security-with-integrated-runtime-protection%2F&amp;linkname=F5%20Strengthens%2C%20Scales%20%26%20Sustains%20AI%20Security%20With%20Integrated%20Runtime%20Protection%C2%A0" title="LinkedIn" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_facebook" href="https://www.addtoany.com/add_to/facebook?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F01%2Ff5-strengthens-scales-sustains-ai-security-with-integrated-runtime-protection%2F&amp;linkname=F5%20Strengthens%2C%20Scales%20%26%20Sustains%20AI%20Security%20With%20Integrated%20Runtime%20Protection%C2%A0" title="Facebook" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_reddit" href="https://www.addtoany.com/add_to/reddit?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F01%2Ff5-strengthens-scales-sustains-ai-security-with-integrated-runtime-protection%2F&amp;linkname=F5%20Strengthens%2C%20Scales%20%26%20Sustains%20AI%20Security%20With%20Integrated%20Runtime%20Protection%C2%A0" title="Reddit" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_email" href="https://www.addtoany.com/add_to/email?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2026%2F01%2Ff5-strengthens-scales-sustains-ai-security-with-integrated-runtime-protection%2F&amp;linkname=F5%20Strengthens%2C%20Scales%20%26%20Sustains%20AI%20Security%20With%20Integrated%20Runtime%20Protection%C2%A0" title="Email" rel="nofollow noopener" target="_blank"></a><a class="a2a_dd addtoany_share_save addtoany_share" href="https://www.addtoany.com/share"></a></div></div>