CSA Study: Mature AI Governance Translates Into Responsible AI Adoption
Before you dismiss AI governance as too difficult or out of reach, consider new research from the Cloud Security Alliance (CSA) that found AI governance to be the "maturity multiplier" that drives responsible AI adoption.

"Responsible" and "AI adoption." Those are words we'd all like to see coupled more frequently.

"AI governance is the strongest predictor of AI readiness. Mature programs correlate to higher confidence, increased staff training, and more responsible innovation," CSA said in releasing the report. "It also highlights a meaningful shift: Security teams have become early adopters of AI," which they use for tasks like threat detection, red teaming, automation and incident response.

The survey, commissioned by Google Cloud, shows "a clear divide: organizations with established AI governance are accelerating adoption with confidence, while the rest are moving quickly but without the structures needed to manage emerging risk."

This year's survey found that security leaders are "working to secure AI systems even as they begin using AI to strengthen security itself." With the market "evolving at remarkable speed," governance is increasingly becoming "the foundation that determines whether adoption advances responsibly or outpaces an organization's ability to manage it."

The research shows that organizations across all sectors are embedding AI into their core operations and security workflows (54% use public frontier LLMs and 60% plan to use agentic AI within 12 months) while governance lags. Only 26% have comprehensive AI governance policies in place. And concern over security issues runs high, with 53% pointing to sensitive data exposure as the chief security risk.

"As organizations increasingly embed AI tools and agentic systems into their workflows, they must develop governance structures that can keep pace with the complexity and continued innovation of these technologies," says Nicole Carignan, senior vice president, security and AI strategy, and field CISO at Darktrace.

"Day-to-day AI safety comes from disciplined oversight that reduces unnecessary risk and prevents harm," says Noma Security CISO Diana Kelley.

Noting that there is no one-size-fits-all approach, Carignan says, "Each organization must tailor its AI policies based on its unique risk profile, use cases, and regulatory requirements." For that to happen, "executive leadership for AI governance is essential, whether the organization is building AI internally or adopting external solutions."
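Tailoring policy that way is easier to enforce when the policy itself is machine-readable. As a minimal sketch (every name below is hypothetical, not drawn from the CSA report or any particular product), an AI usage policy can be expressed in code so each request is checked automatically:

```python
from dataclasses import dataclass

# Hypothetical data-classification levels, lowest to highest sensitivity.
CLASSIFICATIONS = ["public", "internal", "confidential", "restricted"]

@dataclass
class AIUsagePolicy:
    """Illustrative, machine-readable AI usage policy."""
    allowed_models: set[str]      # models approved for this organization
    max_data_classification: str  # most sensitive data a prompt may contain

    def permits(self, model: str, data_classification: str) -> bool:
        """Return True if sending data at `data_classification` to
        `model` is within policy."""
        if model not in self.allowed_models:
            return False
        return (CLASSIFICATIONS.index(data_classification)
                <= CLASSIFICATIONS.index(self.max_data_classification))

# Example: a policy tailored to one team's risk profile.
policy = AIUsagePolicy(
    allowed_models={"gemini", "claude"},
    max_data_classification="internal",
)

assert policy.permits("claude", "internal")
assert not policy.permits("claude", "confidential")    # blocks sensitive-data exposure
assert not policy.permits("unvetted-model", "public")  # blocks unapproved tools
```

In practice a check like this would sit in a gateway in front of the model APIs; the point is simply that a tailored policy can be evaluated on every request rather than written down once and forgotten.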
The CSA report found that executive enthusiasm for AI was high, but "most respondents (72%) were either not confident or neutral in their organization's ability to secure it." Seven in 10 respondents "report moderate to full leadership awareness of AI security implications," exposing a gap that "underscores the need for deeper governance, education, and cross-functional collaboration."

Organizations that haven't prioritized governance shouldn't hesitate to do so now. Those with formal governance in place are twice as likely to adopt agentic AI, three times more likely to train their staff on AI security tools, and twice as confident that they can protect their AI systems. "This reinforces governance as the foundation for responsible innovation—and a practical countermeasure to 'shadow AI,'" the report said.

Breaking with precedent, security has become an early adopter of AI, with more than 90% of organizations testing or planning to test it, which the report says highlights "the urgency and opportunity to embed AI into security from the outset." Perhaps not surprisingly, since AI ownership is diffuse and deployments are distributed across functions, security is taking the lead in protecting AI in just over half of organizations.

Even as organizations use multiple LLMs (2.6 on average), the report found that they are consolidating around Gemini, Claude, GPT and LLaMA. "While this signals growing operational maturity, it also introduces new resilience, interoperability, and vendor lock-in concerns," the study noted.
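A common way teams blunt that lock-in concern is a thin abstraction over the providers they use. The sketch below is illustrative only; the adapter classes and the `complete` signature are assumptions for this example, not any vendor's actual SDK:

```python
from typing import Protocol

class ChatModel(Protocol):
    """Minimal provider-agnostic interface for a chat-style LLM."""
    def complete(self, prompt: str) -> str: ...

class GeminiAdapter:
    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor SDK here; stubbed for illustration.
        return f"[gemini] {prompt[:40]}..."

class ClaudeAdapter:
    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor SDK here; stubbed for illustration.
        return f"[claude] {prompt[:40]}..."

# Application code depends only on the ChatModel interface, so swapping
# providers is a configuration change rather than a rewrite.
def summarize(model: ChatModel, text: str) -> str:
    return model.complete(f"Summarize for a security analyst: {text}")

primary: ChatModel = GeminiAdapter()
fallback: ChatModel = ClaudeAdapter()

try:
    print(summarize(primary, "Unusual OAuth grants detected overnight."))
except Exception:
    # The same seam is where resilience lives: fall back when one provider fails.
    print(summarize(fallback, "Unusual OAuth grants detected overnight."))
```

That one seam addresses both halves of the study's warning: interoperability, because every model is called the same way, and resilience, because a fallback route is a one-line change.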
While data exposure is the top security concern among organizations, regulatory compliance is a close second at 50%, demonstrating that the focus remains on traditional issues rather than AI-specific threats like prompt injection and model drift.

"The largest concern I see today is the insatiable demand by cybercriminals to create persistence inside systems so they cannot easily be detected or evicted; knowing if this has occurred and getting proof that it is not the case are two very different things, both of which will keep you up at night waiting for it to happen again," says Dave Tyson, CIO at iCounter.

But Curtis Wilson, data scientist at Black Duck, says, "The greatest challenge facing AI adoption isn't regulation—it's trust."

When people have confidence that AI systems are being developed responsibly, he says, "they're more likely to use them." AI developers "need interoperability," he adds.

Since companies operating globally "are already navigating the EU AI Act," he says, "The practical solution is to align U.S. federal and state regulations with established frameworks like NIST's AI Risk Management Framework" to provide "genuine clarity while maintaining the protections people need."

In 2026, security operations will move closer to "what FortiGuard Labs describes as machine-speed defense—a continuous process of intelligence, validation, and containment that compresses detection and response from hours to minutes," says Derek Manky, chief security strategist and global vice president of threat intelligence with Fortinet's FortiGuard Labs.
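To make "machine-speed defense" concrete, here is a deliberately simplified sketch of an intelligence, validation and containment loop of the kind Manky describes. Everything in it (the alert structure, thresholds, function names) is hypothetical, not Fortinet's implementation:

```python
import time
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    indicator: str     # e.g., a domain or hash surfaced by a detector
    confidence: float  # detector's confidence, 0.0-1.0

def enrich(alert: Alert) -> Alert:
    """Intelligence: corroborate the alert against threat feeds (stubbed)."""
    if alert.indicator.endswith(".bad.example"):
        alert.confidence = min(1.0, alert.confidence + 0.3)
    return alert

def validate(alert: Alert) -> bool:
    """Validation: only act on high-confidence, corroborated alerts."""
    return alert.confidence >= 0.8

def contain(alert: Alert) -> None:
    """Containment: isolate the host (stubbed as a log line)."""
    print(f"{time.strftime('%H:%M:%S')} quarantining {alert.source_ip}")

def handle(alert: Alert) -> None:
    # The whole loop runs in seconds; a human reviews after containment
    # rather than before, which is what compresses response from hours
    # to minutes.
    alert = enrich(alert)
    if validate(alert):
        contain(alert)

handle(Alert("10.0.0.7", "c2.bad.example", confidence=0.6))
```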