News

A Polycrisis of AI Cyberattacks is Approaching. Are You Breach Ready Yet?

  • securityboulevard.com
  • published date: 2025-11-17 00:00:00 UTC


Unless you have been living under a rock for the past few days, you will have seen cybersecurity headlines dominated by reports that hackers fooled artificial intelligence agents into automating break-ins at major corporations (https://www.wsj.com/tech/ai/china-hackers-ai-cyberattacks-anthropic-41d7ce76).

Anthropic, maker of the artificial intelligence (AI) chatbot Claude, has published an investigation into how an AI-orchestrated cyber-espionage campaign (GTG-1002), which it attributes to Chinese state sponsorship, tricked its LLM tool, Claude, into serving as the primary execution engine, performing automated reconnaissance, vulnerability discovery, exploitation, credential harvesting, lateral movement, and exfiltration at scale with only light human supervision. Anthropic's investigation describes a watershed event in which attacker-supplied agentic instances of Claude were tricked, under the guise of cybersecurity research, into carrying out automated cyberattacks against around 30 global organizations.

Clearly, existing cybersecurity investments were inadequate to defend against a rogue AI infiltration.

In their words: "this campaign demonstrated unprecedented integration and autonomy of AI throughout the attack lifecycle, with the threat actor manipulating Claude Code to support reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations largely autonomously. The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80–90% of tactical operations independently at physically impossible request rates."

The malaise is not new; we have seen many such campaigns run by human attackers, and now it is AI's turn.
It is no secret that, while the cybersecurity market is poised to reach half a trillion USD in 2025, attacks continue to rise rather than decline. And I am now convinced that the real issue is our reliance on preventive capabilities to defeat attacks, while attackers simply bypass or overwhelm those defenses.

Are You Breach Ready? Uncover hidden lateral attack risks in just 5 days. Get a free Breach Readiness Assessment (https://colortokens.com/breach-readiness-assessment/) with a visual roadmap of what to fix first.

AI Cyberattacks Are Not New

On September 6, 2025, EchoLeak (CVE-2025-32711), a zero-click prompt-injection vulnerability in Microsoft 365 Copilot, was shown to achieve full privilege escalation across LLM trust boundaries without any user interaction, enabling remote, unauthenticated data exfiltration via a single crafted email. Earlier, in August, researchers exposed a weakness in OpenAI's Connectors, which let ChatGPT hook into other services, that allowed them to extract data from a Google Drive without any user interaction.

While the world marvels at the power of AI, for defenders it comes down to two things: the speed at which AI can navigate the complexities of cyber defense using existing cybersecurity tools, and the scale at which it can make such attempts. AI-based attacks can spread almost instantly once they bypass the tools designed to block initial access. The solution lies elsewhere. But to get there, let us go back to Anthropic's findings:

"GTG-1002 represents multiple firsts in AI-enabled threat actor capabilities. The actor achieved what we believe is the first documented case of a cyberattack largely executed without human intervention at scale — the AI autonomously discovered vulnerabilities in targets selected by human operators and successfully exploited them in live operations, then performed a wide range of post-exploitation activities from analysis, lateral movement, privilege escalation, data access, to data exfiltration. Most significantly, this marks the first documented case of agentic AI successfully obtaining access to confirmed high-value targets for intelligence collection, including major technology corporations and government agencies."

The Issue Is Fundamental

One does not need to be superintelligent to conclude that, if no lateral movement were possible, neither humans nor AI could navigate from initial access to data exfiltration. Two lessons jump out immediately: (1) credential theft plus unfettered east-west access is the fastest path to high-value compromise, and (2) attacker behavior shows attempts to exploit access relationships, not just single hosts.

Combine this with the realization that we can never successfully patch all vulnerabilities in time, and you have the call to action to combat the impending polycrisis of a human attacker exploiting any form of AI to launch a hitherto unforeseen, lightning-fast, hyperscale cyberattack. It is time to reframe cybersecurity as a proactive, business-enabling strategy centered on breach readiness rather than a reactive, prevention-focused discipline: shift the focus from preventing every possible intrusion to preparing for the inevitable breach and ensuring uninterrupted business operations.

Step 1: Embrace Microsegmentation. Reduce the Number of Attack Paths for Lateral Movement.

Adopt a microsegmentation strategy (https://colortokens.com/microsegmentation/) immediately to narrow the attack path to the bare minimum, ensuring that neither AI nor humans can find a path to attack unless it is explicitly allowed. Microsegmentation also reduces the blast radius, immediately exposing any lateral-movement attempt as malicious. Even if AI can generate perfect PowerShell scripts, RDP commands, or lateral-movement logic, in a microsegmented world the network paths simply do not exist. Exploring the network becomes noisy: every attempt outside the defined policy is logged and blocked, raising anomaly visibility.

Today, beginning a Zero Trust journey is swift and seamless. It is now possible to leverage your existing EDR investments (https://colortokens.com/report-download/edr-microsegmentation-breach-readiness/) to leapfrog adoption, moving from rollout in hours to enforcement in days rather than months. It is also possible to build incident response around breach containment by using a single platform across IT, OT, and cloud to ensure pervasive governance of all critical systems.

Microsegmentation ensures that even with a foothold, the attacker's AI cannot freely move or see the whole network; the blast radius is tiny.

Access the Forrester Wave™ Report (https://colortokens.com/report/forrester-wave-microsegmentation/) | Discover why ColorTokens was rated 'Superior' in OT, IoT, and Healthcare Security.

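To make Step 1 concrete, here is a minimal, illustrative Python sketch of the policy model microsegmentation relies on: a default-deny allow-list for east-west traffic in which anything not explicitly permitted is blocked and logged. The segment names, ports, and rules are hypothetical and are not any vendor's actual policy engine.

# Illustrative only: a toy default-deny (Zero Trust) east-west policy check.
# Segment names, ports, and rules are hypothetical, not a real product's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_segment: str   # e.g. "web-tier"
    dst_segment: str   # e.g. "app-tier"
    port: int          # allowed destination port

# Explicit allow-list: only these east-west paths exist.
ALLOW_RULES = {
    Rule("web-tier", "app-tier", 8443),
    Rule("app-tier", "db-tier", 5432),
}

def evaluate(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default deny: a connection is allowed only if an explicit rule matches.
    Every denied attempt is logged, which is what makes lateral probing noisy."""
    allowed = Rule(src_segment, dst_segment, port) in ALLOW_RULES
    if not allowed:
        print(f"BLOCKED + ALERT: {src_segment} -> {dst_segment}:{port} "
              "(no policy allows this path; possible lateral movement)")
    return allowed

# An AI agent that lands on the web tier and tries RDP to a database host
# finds no path, and the attempt itself becomes a high-confidence signal.
evaluate("web-tier", "app-tier", 8443)   # allowed by policy
evaluate("web-tier", "db-tier", 3389)    # blocked and logged

The point of the sketch is the asymmetry it creates: legitimate workloads never trip the deny path, while an AI agent probing for lateral movement generates a stream of high-confidence alerts.
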
Step 2: Protect Valid Accounts Using Cryptographic Passwordless Credentials.

Beyond the obvious user-experience benefits of passwordless authentication, passwordless cryptographic credentials neutralize credential misuse, a central pillar of AI-based campaigns. In fact, MITRE lists the compromise of valid user credentials as one of the most prevalent techniques in modern cyberattacks. Passwordless ecosystems are built on cryptographic keys and attestations that are non-replayable outside the approved device and context, which is exactly the access an automated agent must be denied.

Introducing Zero Trust into credential management is not difficult. Once you move all critical admin and API authentication to cryptographic, device-bound credentials and short-lived, context-bound tokens, misusing valid accounts becomes extremely difficult. So go ahead and eliminate service accounts with static passwords, and use identity certificates with automatic rotation and strict trust boundaries built in. Further, apply conditional authentication: enforce device posture attestations and network segment provenance for sensitive operations. The attacker's only remaining option is to take over an endpoint and operate as a trusted user who meets every condition of multi-factor authentication.

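To illustrate why device-bound cryptographic credentials resist the kind of credential harvesting seen in GTG-1002, here is a minimal challenge-response sketch using an Ed25519 key pair from the Python cryptography package. It stands in for a FIDO2/WebAuthn-style flow; the helper names and the freshness check are hypothetical simplifications, not a specific product's protocol.

# Illustrative only: a toy challenge-response login with a device-bound key.
# Stands in for FIDO2/WebAuthn-style passwordless auth; names are hypothetical.
import os
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the private key never leaves the device; the server keeps only the public key.
device_key = Ed25519PrivateKey.generate()
server_registered_pubkey = device_key.public_key()

def server_issue_challenge() -> bytes:
    # A fresh nonce plus a short validity window makes each response non-replayable.
    return os.urandom(32) + int(time.time()).to_bytes(8, "big")

def device_sign(challenge: bytes) -> bytes:
    # Only the enrolled device can produce this signature.
    return device_key.sign(challenge)

def server_verify(challenge: bytes, signature: bytes, max_age_s: int = 60) -> bool:
    issued_at = int.from_bytes(challenge[32:], "big")
    if time.time() - issued_at > max_age_s:
        return False  # stale or replayed challenge
    try:
        server_registered_pubkey.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

challenge = server_issue_challenge()
assert server_verify(challenge, device_sign(challenge))   # legitimate device succeeds
assert not server_verify(challenge, os.urandom(64))       # harvested/guessed secrets fail

Because the server stores only a public key and every challenge is fresh and short-lived, there is no static secret for an automated agent to harvest and replay elsewhere.
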
Step 3: Use AI to Lure Anomalous Behaviors to Decoys, Where They Can Be Trapped and Evicted.

AI-based deception creates high-fidelity decoys (hosts, services, data, credentials) that appear real to an attacker but are instrumented traps. No legitimate user or process should ever touch a decoy system or honey credential, so any such touch is essentially a confirmed incident. Deception forces AI agents and human attackers alike into observable interactions, and floods the attacker with false positives.

AI can continually adapt the decoys, reconfiguring, re-seeding, and re-storying the environment so that the adversary's AI faces more complexity and uncertainty and burns more compute, because it has to test many more paths, generating more telemetry and alerts along the way. For AI agents this is devastating: it produces wasted AI cycles, hallucinations or validation failures, increased operator involvement, and detectable signatures. Combined with microsegmentation, it ensures that all of the accessible "targets" are decoys, not crown jewels.

So even if the trusted user is malicious, the trusted AI that navigated the Zero Trust credentials will suddenly show up as malicious and get trapped in decoys.

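A simple way to see the "any touch is an incident" property is the toy sketch below, in which decoy hosts and honey credentials are seeded into event screening and any access to them raises a confirmed-incident alert. All asset and account names are hypothetical; a real deception platform would also rotate and re-seed its decoys continuously, as described above.

# Illustrative only: toy honeytoken / decoy detection.
# Decoy and credential names are hypothetical, planted so that nothing
# legitimate ever references them; any hit is therefore a confirmed incident.
DECOY_HOSTS = {"10.20.30.40", "fileserver-backup-02"}
HONEY_CREDENTIALS = {"svc_backup_admin", "old-jenkins-token"}

def inspect_event(event: dict) -> bool:
    """Return True (confirmed incident) if a decoy asset or honey credential is touched."""
    touched_decoy = event.get("dst_host") in DECOY_HOSTS
    used_honey_cred = event.get("username") in HONEY_CREDENTIALS
    if touched_decoy or used_honey_cred:
        print(f"CONFIRMED INCIDENT: {event} -> isolate source, capture forensics, evict")
        return True
    return False

# A lateral-movement attempt that enumerates shares and tries a planted credential:
inspect_event({"src_host": "hr-laptop-17", "dst_host": "fileserver-backup-02",
               "username": "svc_backup_admin", "action": "smb_login"})
# Normal traffic passes through untouched:
inspect_event({"src_host": "hr-laptop-17", "dst_host": "intranet-portal",
               "username": "jdoe", "action": "https_get"})

The design choice that matters is the invariant, not the code: because decoys have no legitimate users, detection here needs no behavioral modeling and produces essentially no false positives.
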
In Summary

Enterprises that are microsegmented into zones, that use cryptographic passwordless credentials and AI-based deception, and that have every possible conduit, and the means to disconnect it at the press of a button, identified, documented, and rehearsed by the relevant operational experts, are far more likely to withstand advanced AI-based attacks.

But the time to act is now. If you are reading this, you still have a small window in which to be proactive before it is too late. It takes only 250 documents to poison any AI model (https://www.darkreading.com/application-security/only-250-documents-poison-any-ai-model).

Don't wait to fix asset management, patch management, configuration management, or change management. Don't wait for the next audit. Go online. Begin by conducting a breach readiness and impact assessment (https://colortokens.com/breach-readiness-assessment/). Start now and take the first step toward being breach ready.

The post "A Polycrisis of AI Cyberattacks is Approaching. Are You Breach Ready Yet?" (https://colortokens.com/blogs/ai-cyberattacks-microsegmentation-anthropic-claude/) appeared first on ColorTokens (https://colortokens.com/).

*** This is a Security Bloggers Network syndicated blog from ColorTokens (https://colortokens.com/) authored by Agnidipta Sarkar. Read the original post at: https://colortokens.com/blogs/ai-cyberattacks-microsegmentation-anthropic-claude/