News

Unauthorized Users Reportedly Gain Access to Anthropic’s Mythos AI Model

  • Jeffrey Burt, securityboulevard.com
  • published date: 2026-04-22 00:00:00 UTC


<p>A group of unauthorized users has reportedly gained access to Anthropic’s controversial Claude Mythos Preview AI frontier model despite the AI vendor’s efforts to keep it out of public hands by limiting the organizations that can use it.</p><p><a href="https://www.bloomberg.com/news/articles/2026-04-21/anthropic-s-mythos-model-is-being-accessed-by-unauthorized-users" target="_blank" rel="noopener">Bloomberg reported</a> that the unnamed group had tried multiple ways to gain access to the AI model since it was first announced earlier this month, and finally got through via a third-party vendor. The users, who accessed Mythos on the day it was announced, are part of a Discord forum group known for hunting down information about unreleased AI models.</p><p>According to the report, the group, drawing on its knowledge of a format Anthropic had used for other models, “made an educated guess about [Mythos’] online location.” A person inside the group who communicated with Bloomberg told the news outlet that they were “interested in playing around with new models, not wreaking havoc with them.”</p><p>In a <a href="https://techcrunch.com/2026/04/21/unauthorized-group-has-gained-access-to-anthropics-exclusive-cyber-tool-mythos-report-claims/" target="_blank" rel="noopener">statement</a> to TechCrunch, an Anthropic spokesperson said the company was investigating the claim of unauthorized access to Mythos through a third-party vendor, and that it has found no indications that the group’s activities have affected its systems.</p><h3>Mythos’ Ongoing Ripple Effect</h3><p>Anthropic’s <a href="https://securityboulevard.com/2026/04/anthropic-unveils-restricted-ai-cyber-model-in-unprecedented-industry-alliance/" target="_blank" rel="noopener">announcement</a> of Mythos on April 7 sent shockwaves through the cybersecurity industry. 
The vendor described a frontier model significantly better than any other at detecting and identifying software vulnerabilities, noting that in tests, Mythos was able to find a security flaw that had gone undetected for 27 years.</p><p>However, the model is also <a href="https://www.anthropic.com/glasswing" target="_blank" rel="noopener">very good at creating exploits</a> for those vulnerabilities, which convinced Anthropic executives to limit the release of Mythos to a select group of organizations that will use it to build stronger defenses as part of the AI vendor’s new <a href="https://red.anthropic.com/2026/mythos-preview/" target="_blank" rel="noopener">Project Glasswing</a>.</p><p>A week later, OpenAI followed a similar path with the <a href="https://securityboulevard.com/2026/04/openai-follows-anthropic-in-limiting-access-to-its-cyber-focused-model/" target="_blank" rel="noopener">unveiling of GPT-5.4-Cyber</a>, a frontier model focused on cybersecurity that the vendor likewise restricted to designated users, though it granted access to more organizations and individuals than Anthropic did.</p><p>The introduction of Mythos ignited debates about everything from the state of cybersecurity as such autonomous AI models come into play, to what organizations need to do to secure their IT environments, to whether Mythos’ capabilities are even unique.</p><h3>Speed is the Difference</h3><p>Whatever the answers, enterprises and their security teams need to pay attention, according to Brian Fox, co-founder and CTO of Sonatype, which provides a software supply chain management platform.</p><p>“If the early reporting is right, Mythos could be a watershed moment,” Fox said. “What is not new is the reality it is forcing people to confront. Beneath the AI framing sits the same software supply chain reality we have been discussing for years: dependencies, build pipelines, third-party software, and infrastructure remain the attack surface.”</p><p>Fox added that “what changed is speed. 
AI can now find and operationalize weaknesses across that stack faster than most organizations can inventory, prioritize, and patch them. What we are seeing in response to the Mythos news is many organizations coming to terms with a reality that has existed for a long time: they are not actually in control of their software supply chains.”</p><h3>Addressing the Threats</h3><p>Tech vendors are beginning to roll out offerings aimed at helping organizations deal with the cyber risks posed by such frontier models. IBM Consulting last week <a href="https://securityboulevard.com/2026/04/new-ibm-security-services-aim-to-counter-risks-of-frontier-ai-models/" target="_blank" rel="noopener">introduced IBM Autonomous Security</a>, a collection of specialized agents created to make enterprises’ often sprawling security stacks work in a more unified and coordinated fashion, creating what the vendor called “a systemic defense” against the autonomous, fast-moving threats such models pose.</p><p>At the same time, IBM is offering a new service for assessing a company’s security weaknesses and responding to them.</p><p>Likewise, Palo Alto Networks launched <a href="https://www.paloaltonetworks.com/blog/2026/04/introducing-unit-42-frontier-ai-defense/" target="_blank" rel="noopener">Unit 42 Frontier AI Defense</a>, an offering that uses AI models to help organizations “identify and validate the exposures most likely to be chained into real attacks before attackers weaponize them.” Sam Rubin, senior vice president of consulting and threat intelligence at Unit 42, wrote that “frontier AI is changing what is possible for attackers. In the hands of defenders, it can become a decisive advantage.”</p><h3>What Publicly Available Models Can Do</h3><p>Mythos and GPT-5.4-Cyber have garnered much of the attention about the cybersecurity risks such frontier models represent. 
However, some security vendors wrote that they tested publicly available AI models and found that many came close to or matched Mythos’ ability to find and identify zero-day vulnerabilities.</p><p>Executives with startup Aisle, which offers an AI-native application security platform, <a href="https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier" target="_blank" rel="noopener">wrote</a> that over the past year they had built an AI system for discovering, validating, and patching zero-days in open source software. In tests, they “took the specific vulnerabilities Anthropic showcases in their announcement, isolated the relevant code, and ran them through small, cheap, open-weights models. Those models recovered much of the same analysis.”</p><p>The models included GPT-OSS-120b, DeepSeek R1, Qwen3, and Gemma 4; the results varied depending on the model and the task, they wrote.</p><h3>The Real Story</h3><p>Researchers with Vidoc Security Lab, another AI-based cybersecurity startup, <a href="https://blog.vidocsecurity.com/blog/we-reproduced-anthropics-mythos-findings-with-public-models" target="_blank" rel="noopener">wrote</a> that they produced similar results using OpenAI’s GPT-5.4 and Anthropic’s Claude Opus 4.6 models running OpenCode, an open source AI coding agent, to scan for security flaws in open source software like OpenBSD and FFmpeg.</p><p>“If public models can already do useful work inside that kind of workflow, then the story is not ‘Anthropic has a magical cyber artifact,’” they wrote. “The story is that serious AI-assisted vulnerability research is no longer confined to a single frontier lab. That does not make the workflow easy. 
It means the moat is moving up the stack, from model access to validation, prioritization, and remediation.”</p>