Using FinOps to Detect AI-Created Security Risks
<p><span data-contrast="auto">Industry spending on artificial intelligence (AI) implementations continues to surge. Bain estimates that the </span><a href="https://www.bloomberg.com/news/articles/2024-09-25/ai-market-will-surge-to-near-1-trillion-by-2027-bain-says" target="_blank" rel="noopener"><span data-contrast="none">AI hardware market alone will grow to $1 trillion by 2027</span></a><span data-contrast="auto"> with 40–55% annual growth. Despite these massive investments, return on investment (ROI) remains elusive for many organizations. In fact, a recent study from MIT found that </span><a href="https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf" target="_blank" rel="noopener"><span data-contrast="none">95% of organizations have seen zero ROI</span></a><span data-contrast="auto"> from their generative AI (GenAI) projects.</span><span data-ccp-props="{}"> </span></p><p><span data-contrast="auto">AI clearly demonstrates great potential, providing unmatched capabilities in data analysis, automation and decision-making at scale. Nonetheless, the momentum toward AI adoption brings considerable security challenges that organizations are just starting to grasp. 
Often, these risks first emerge through sudden increases in cloud infrastructure expenses.</span><span data-ccp-props="{}"> </span></p><p><span data-contrast="auto"><a href="https://securityboulevard.com/2025/11/survey-cybersecurity-leaders-much-more-concerned-about-ai-generated-code/" target="_blank" rel="noopener">Artificial intelligence implementations are creating new security loopholes</a> and vulnerabilities that traditional security frameworks weren’t designed to address. 
These include adversarial attacks that manipulate AI decision-making, data poisoning that corrupts training datasets and attacks on machine learning models that exploit algorithmic weaknesses.</span><span data-ccp-props="{}"> </span></p><p><span data-contrast="auto">AI systems, particularly those using machine learning (ML), analyze large amounts of data to generate predictions and automate decisions. As ML systems integrate more deeply into IT infrastructure, their vulnerabilities present new attack opportunities for malicious actors. The complexity of these systems can conceal the origins of security signals, making threats more difficult to identify with standard monitoring methods.</span><span data-ccp-props="{}"> </span></p><p><span data-contrast="auto">The competitive landscape has created a ‘must-have AI’ perception that’s driving organizations to deploy AI projects in increasingly haphazard ways. As they rush to keep up with competitors, companies are implementing AI solutions without adequate security controls or cost oversight. These rapid, poorly planned deployments create security loopholes that organizations later scramble to address.</span><span data-ccp-props="{}"> </span></p><h3><span data-contrast="auto">Security and FinOps: An Unlikely Partnership</span><span data-ccp-props="{}"> </span></h3><p><span data-contrast="auto">Thankfully, IT has an unexpected ally in identifying AI-related security issues — cost optimization tools. While security flaws may remain elusive and difficult to find, the financial impact of security threats — whether through resource hijacking, unauthorized usage or system inefficiencies — always shows up in cloud billing data.</span><span data-ccp-props="{}"> </span></p><p><span data-contrast="auto">As a result, FinOps and security teams can work together to address AI risks. 
Identity management systems help teams identify workloads from both perspectives: Security teams can clearly see who is doing what, while FinOps teams can track where money is being spent. This dual visibility creates a comprehensive view of potential issues.</span><span data-ccp-props="{}"> </span></p><p><span data-contrast="auto">A recent example illustrates this principle in action. A company’s IT team noticed significant BigQuery cost overruns without any obvious cause. A subsequent investigation discovered that a security breach was to blame. From a security perspective, this situation could have been prevented if security controls had been layered in during implementation rather than added as an afterthought. Similarly, if FinOps practices had been implemented with the same intentionality as security measures, the cost anomalies would have been caught earlier.</span><span data-ccp-props="{}"> </span></p><h3><span data-contrast="auto">The Need for Intentional Implementation</span><span data-ccp-props="{}"> </span></h3><p><span data-contrast="auto">The competitive pressure to innovate quickly and reach market leadership positions creates a situation where organizations trying to reach the ‘upper right quadrant’ also find themselves dangerously close to the edge where they could fall off altogether. The rush for innovation often leads to organizations bypassing critical security and cost controls.</span><span data-ccp-props="{}"> </span></p><p><span data-contrast="auto">At the speed of current innovation cycles, IT teams are forced to make changes without receiving adequate visibility or testing. Later, when something breaks, IT loses the trust of customers and internal stakeholders alike, which puts future AI projects at risk.</span><span data-ccp-props="{}"> </span></p><p><span data-contrast="auto">To avoid this situation, organizations should take intentional pauses during AI implementation to align security measures with cost optimization practices. 
This approach isn’t adopted nearly enough, despite its critical importance for long-term success.</span><span data-ccp-props="{}"> </span></p><h3><span data-contrast="auto">The Path Forward: Contextual Awareness</span><span data-ccp-props="{}"> </span></h3><p><span data-contrast="auto">Modern FinOps evolution focuses on increasing not just visibility into cloud costs, but the contextual awareness of those costs. This contextual understanding becomes crucial when identifying AI-related security risks, as unusual spending patterns often indicate underlying security issues.</span><span data-ccp-props="{}"> </span></p><p><span data-contrast="auto">The goal should be to develop a comprehensive view of infrastructure and spending, which AI tools can turn into actionable insights for decision-makers. For organizations implementing AI systems, this means establishing FinOps practices that can trace costs back to specific AI workloads and processes. When a customer interaction triggers an AI system, organizations should be able to trace that back to a reasonable estimate of the cloud costs involved in completing that transaction.</span><span data-ccp-props="{}"> </span></p><h3><span data-contrast="auto">Building Sustainable AI Security</span><span data-ccp-props="{}"> </span></h3><p><span data-contrast="auto">Rather than rushing the implementation of AI solutions, organizations should adopt a crawl, walk and run strategy. 
This means:</span><span data-ccp-props="{}"> </span></p><ul><li><span data-contrast="auto">Start with proper instrumentation and labeling of AI workloads, for example via third-party libraries</span></li><li><span data-contrast="auto">Establish cost baselines for AI operations</span></li><li><span data-contrast="auto">Implement monitoring systems that can detect anomalous spending patterns</span></li><li><span data-contrast="auto">Create continuous feedback loops between SecOps and FinOps teams</span></li></ul><p><span data-contrast="auto">The most successful organizations won’t be the ones that are the quickest to implement AI, but those that do so most sustainably. By viewing cost optimization tools as security allies and deploying AI systems with appropriate financial oversight, organizations can identify and manage security risks early, preventing them from escalating into major incidents.</span></p><p><span data-contrast="auto">As AI advances past the current ‘illusion of efficiency’, organizations with solid foundational practices will be better equipped to expand their AI initiatives securely and cost-effectively. It’s crucial to understand that in the cloud era, security and financial stability are becoming more interconnected, so monitoring one can offer valuable insights into the other.</span><span data-ccp-props="{}"> </span></p><p><span data-contrast="auto">The worst mistake organizations can make is waiting for perfect tools or a complete understanding before starting these practices. 
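</span><span data-ccp-props="{}"> </span></p><p><span data-contrast="auto">The labeling, baseline and anomaly-monitoring steps listed above can be sketched with a simple trailing-window z-score check over per-workload spend. This is a minimal illustration rather than a production detector: the workload labels, the 14-day window and the three-sigma threshold are all assumptions, and a real deployment would read per-tag spend from the cloud provider’s billing export rather than an in-memory dict.</span></p>

```python
from statistics import mean, stdev

def flag_cost_anomalies(daily_spend_by_workload, window=14, threshold=3.0):
    """Flag days whose spend deviates from a trailing per-workload baseline.

    daily_spend_by_workload maps a workload label (e.g., the value of a
    cost-allocation tag) to its per-day spend, ordered oldest to newest.
    Returns {label: [indices of anomalous days]}.
    """
    anomalies = {}
    for label, costs in daily_spend_by_workload.items():
        flagged = []
        for i in range(window, len(costs)):
            baseline = costs[i - window:i]
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma == 0:
                # Flat baseline: any change at all is worth a look.
                if costs[i] != mu:
                    flagged.append(i)
            elif abs(costs[i] - mu) / sigma > threshold:
                flagged.append(i)
        anomalies[label] = flagged
    return anomalies

# Two hypothetical labeled AI workloads: one steady, one with the kind of
# sudden spike that resource hijacking or unauthorized usage produces.
spend = {
    "genai-inference": [100.0, 102.0, 99.0, 101.0, 100.5, 98.0, 103.0,
                        100.0, 99.5, 101.5, 100.0, 102.5, 99.0, 100.0,
                        101.0, 100.0, 450.0],
    "batch-training": [50.0] * 17,
}
print(flag_cost_anomalies(spend))
# -> {'genai-inference': [16], 'batch-training': []}
```

<p><span data-contrast="auto">Because each labeled workload gets its own baseline, a spike from a single hijacked or runaway workload stands out even when total spend still looks normal, which is why the instrumentation and labeling step has to come first.</span></p><p><span data-contrast="auto">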
The time to begin integrating FinOps with AI security practices is now, while building the contextual awareness needed to manage both costs and risks effectively.</span><span data-ccp-props="{}"> </span></p>