News

The Attack Surface of Cloud-Based Generative AI Applications is Evolving

  • Alastair Cooke--securityboulevard.com
  • published date: 2025-11-26 00:00:00 UTC

None

<p><span style="font-weight: 400;">It is the right time to talk about this. Cloud-based Artificial Intelligence, or specifically those big, powerful Large Language Models we see everywhere, they’ve completely changed the game. They’re more than just a new application tier. They’re an entirely new attack surface.</span></p><p><span style="font-weight: 400;">You’ve moved your critical applications to the public cloud. You did it for scalability, for cost efficiency, and for the simplicity of managed services. That makes sense. Now, think about AI. Where else are you going to get the on-demand GPU and TPU access needed to develop, train, and deploy these complex, agentic services? The cloud is the natural, inevitable platform for AI. That’s why security teams need to stop thinking about “cloud security” and start thinking about “AI security in the cloud.” It’s a subtle but critical shift in mindset.</span></p><p><span style="font-weight: 400;">The </span><a href="https://techfieldday.com/appearance/fortinet-presents-at-cloud-field-day-24/"><span style="font-weight: 400;">Fortinet presentation at Cloud Field Day</span></a><span style="font-weight: 400;"> showed us just how simple these attacks can be and how existing tools can help protect your AI applications from exploitation.</span></p><h3><strong>The New Attack Surface is the Model Itself</strong></h3><p><span style="font-weight: 400;">When we talk about traditional application security, we’re focused on protecting the database, the server, or the API gateway. With AI applications, you still have to worry about all of that, but now the model and its inputs/outputs become the primary targets.</span></p><p><span style="font-weight: 400;">Customers are seeing this every day. It’s a rapid escalation from theoretical risk to real-world threat. You’ve got the familiar problems, sure. Misconfigurations, for example, they’re still everywhere in the cloud. Stolen credentials, those will never go away. But they are being joined by a sinister new crop of attacks that target the very intelligence layer of the application.</span></p><p><span style="font-weight: 400;">We’re seeing an increase in reports of model theft and, more commonly, prompt-injection attacks. Most of the time, these aren’t complex zero-day exploits. Prompt injection is simple. It’s essentially tricking the LLM into bypassing its safety guardrails or performing unintended actions by feeding it a clever prompt. The chatbot in your e-commerce application is suddenly leaking sensitive context or responding in “ducky language,” as one demo showed, because an attacker injected malicious content into the data it was trained on or used to query. It’s a way to manipulate the application’s brain.</span></p><p><span style="font-weight: 400;"><iframe title="YouTube video player" src="https://www.youtube.com/embed/l2dGy3JlwRk?si=RomZ_SyNLFGxDOBv" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></span></p><p><span style="font-weight: 400;">And then you have model corruption. If an attacker can manipulate the training data or the fine-tuning process, they can intentionally introduce bias or error, compromising the integrity of the model’s decisions over time. Think of the implications for finance, healthcare, or logistics. That’s a fundamental vulnerability, and it changes the architecture of security itself.</span></p><h3><strong>You Can’t Outrun the Classics</strong></h3><p><span style="font-weight: 400;">You might feel that new AI threats are the only thing that matters now. But you’d be wrong. The classic, established application attacks are simply finding new, more effective delivery mechanisms through the AI interface.</span></p><p><span style="font-weight: 400;">Take the example of SQL injection (SQLi). It’s an old-school attack. We’ve had Web Application Firewalls (WAFs) for decades to detect and block SQL injection attempts at the perimeter. But when that SQL code is passed through an AI chatbot interface via something like the Model Context Protocol (MCP), the WAF has a harder time classifying it as purely malicious. The attacker can use the chatbot as a proxy to inject their code, bypassing traditional defenses that only look for common patterns in standard HTTP requests. It’s a clever way to leverage the trusted path between the chatbot and the backend.</span></p><p><span style="font-weight: 400;">Another one that keeps coming up is Server-Side Request Forgery, or SSRF. This is the method attackers use to get the application to make a request on their behalf to internal resources it shouldn’t have access to. In a cloud environment, that’s disastrous. An attacker uses SSRF to access the AWS metadata service, steal temporary credentials, and boom—they have the keys to the kingdom. They can enumerate resources, steal data, or escalate their access. The demos showed this clearly: simple prompt injection escalating into an SSRF attack, leading to credential theft and model manipulation.</span></p><p><span style="font-weight: 400;">It shows we need an integrated approach. You can’t just buy an “AI security box.”</span></p><h3><strong>The Defense Has To Be Layered and Automated</strong></h3><p><span style="font-weight: 400;">Protecting these dynamic, interconnected AI workloads demands a layered security posture that enforces zero-trust access, scans for vulnerabilities throughout the workload lifecycle, and provides intelligent web and API protection. It begins with securing access. Zero-trust must be enforced at every connection point, whether it’s north-south traffic entering the environment or crucial east-west traffic between microservices and the LLM.</span></p><p><span style="font-weight: 400;">Next, you need specialized tools working together. A solution like FortiWeb, which acts as a WAF and API protector, is essential here. Its machine learning capabilities learn normal traffic patterns and expected API behavior, enabling it to spot anomalies that a traditional, signature-based WAF would miss. It can sanitize user input specifically for the LLM to counter prompt injection and address those infamous OWASP Top 10 LLM threats.</span></p><p><span style="font-weight: 400;">But prevention isn’t enough. You need continuous vigilance. FortiCNAP fits into the environment to provide vulnerability scanning throughout the AI workload lifecycle. It monitors API calls, detects malicious activity based on things like IP geolocation, and flags misconfigured roles with excessive entitlements, the kind of problem that makes an SSRF attack fatal.</span></p><p><span style="font-weight: 400;"><iframe title="YouTube video player" src="https://www.youtube.com/embed/Ro_oeL1CrKU?si=b0VqP_dZ2thFgjSw" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></span></p><p><span style="font-weight: 400;">The final layer is the response. Having excellent detection is useless if your response is manual and slow. This is where automation platforms like FortiSOAR come in. When malicious activity is detected, say, a strange API call or the identification of a malicious file in an S3 bucket, FortiSOAR can automatically initiate a workflow. It can clean the malicious file, block the attacker’s IP address, and, most importantly, revoke those stolen temporary credentials. </span></p><p><span style="font-weight: 400;">The cloud is the common place to run modern AI applications efficiently, but that convenience comes with an obligation. You must adopt a security strategy that treats the AI model itself as a primary, exposed target. It’s about combining foundational security hygiene principles, such as proper input validation and role entitlement management, with machine-learning-powered protection layers and automated, rapid response. You can’t afford to wait. The threat landscape has already changed.</span></p><p><span style="font-weight: 400;">Watch the full </span><a href="https://techfieldday.com/appearance/fortinet-presents-at-cloud-field-day-24/"><span style="font-weight: 400;">Fortinet presentation at Cloud Field Day</span></a><span style="font-weight: 400;">, or catch up with </span><a href="https://techfieldday.com/"><span style="font-weight: 400;">all the events on the Tech Field Day website</span></a><span style="font-weight: 400;">.</span></p><div class="spu-placeholder" style="display:none"></div><div class="addtoany_share_save_container addtoany_content addtoany_content_bottom"><div class="a2a_kit a2a_kit_size_20 addtoany_list" data-a2a-url="https://securityboulevard.com/2025/11/the-attack-surface-of-cloud-based-generative-ai-applications-is-evolving/" data-a2a-title="The Attack Surface of Cloud-Based Generative AI Applications is Evolving"><a class="a2a_button_twitter" href="https://www.addtoany.com/add_to/twitter?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2025%2F11%2Fthe-attack-surface-of-cloud-based-generative-ai-applications-is-evolving%2F&amp;linkname=The%20Attack%20Surface%20of%20Cloud-Based%20Generative%20AI%20Applications%20is%20Evolving" title="Twitter" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_linkedin" href="https://www.addtoany.com/add_to/linkedin?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2025%2F11%2Fthe-attack-surface-of-cloud-based-generative-ai-applications-is-evolving%2F&amp;linkname=The%20Attack%20Surface%20of%20Cloud-Based%20Generative%20AI%20Applications%20is%20Evolving" title="LinkedIn" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_facebook" href="https://www.addtoany.com/add_to/facebook?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2025%2F11%2Fthe-attack-surface-of-cloud-based-generative-ai-applications-is-evolving%2F&amp;linkname=The%20Attack%20Surface%20of%20Cloud-Based%20Generative%20AI%20Applications%20is%20Evolving" title="Facebook" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_reddit" href="https://www.addtoany.com/add_to/reddit?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2025%2F11%2Fthe-attack-surface-of-cloud-based-generative-ai-applications-is-evolving%2F&amp;linkname=The%20Attack%20Surface%20of%20Cloud-Based%20Generative%20AI%20Applications%20is%20Evolving" title="Reddit" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_email" href="https://www.addtoany.com/add_to/email?linkurl=https%3A%2F%2Fsecurityboulevard.com%2F2025%2F11%2Fthe-attack-surface-of-cloud-based-generative-ai-applications-is-evolving%2F&amp;linkname=The%20Attack%20Surface%20of%20Cloud-Based%20Generative%20AI%20Applications%20is%20Evolving" title="Email" rel="nofollow noopener" target="_blank"></a><a class="a2a_dd addtoany_share_save addtoany_share" href="https://www.addtoany.com/share"></a></div></div>