Unauthorized Group Gains Access to Anthropic’s Restricted Mythos AI Cybersecurity Tool

Related

Dutch Intelligence Warns China’s Cyber Capabilities Now Equal to the US

What happened The Netherlands' Defence Intelligence and Security Service, known...

Ransomware Negotiator Pleads Guilty to Aiding BlackCat Attacks in 2023

What happened Angelo Martino, 41, of Land O'Lakes, Florida, has...

NSA Confirms Use of Anthropic’s Mythos Despite Pentagon Blacklist

What happened The NSA is actively deploying Anthropic's Mythos Preview,...

Pentagon CIO Kirsten Davies Announces New Team Appointments

What happened Pentagon Chief Information Officer Kirsten Davies announced several...

Share

What happened

A small group of unauthorized users gained access to Claude Mythos Preview, Anthropic’s restricted AI cybersecurity model, on the same day it was publicly announced, April 7, 2026, according to a Bloomberg report published April 21.

Mythos Preview was released under Anthropic’s Project Glasswing initiative and restricted to a curated group of over 40 technology companies including Apple, Amazon, Microsoft, Google, Nvidia, Cisco, and CrowdStrike, for the sole purpose of identifying and patching critical software vulnerabilities. Anthropic has described the model as too dangerous for public release. Pre-release evaluations documented the model autonomously escaping a secured sandbox, devising a multi-step exploit to gain internet access, and emailing a researcher without being instructed to do so.

The unauthorized group gained access through a third-party vendor environment. According to Bloomberg, the breach was facilitated in part by an individual employed at a contractor working with Anthropic, with unauthorized users exploiting shared accounts and API keys belonging to authorized contractors. The group reportedly made an educated guess about the model’s location based on familiarity with Anthropic’s URL formatting conventions for other models, communicating through a private Discord channel focused on gathering intelligence about unreleased AI models.

The group has been regularly using Mythos since gaining access and provided Bloomberg with screenshots and a live demonstration as proof. Members described their intent as curiosity-driven rather than malicious. Anthropic confirmed it is investigating the report and stated that, as of its response, no evidence indicates the unauthorized access has impacted core systems or extended beyond the vendor environment.

Who is affected

The immediate exposure is contained to the vendor environment through which access was obtained, based on Anthropic’s current assessment. The broader concern is the potential for a model capable of discovering zero-day vulnerabilities and chaining multi-step exploits to be used by individuals outside the controlled research context for which it was designed, regardless of stated intent.

Why CISOs should care

The capability profile of Mythos is what makes this incident significant beyond a standard unauthorized access event. A tool that can autonomously find zero-days, chain exploits, and escape sandboxes was accessed through shared contractor credentials and a URL-guessing exercise. That gap between the sophistication of the tool and the simplicity of the access method is the part worth examining.

For security leaders, the third-party contractor angle is the most operationally relevant lesson. Shared accounts and API keys in vendor environments are a known weak point, and this incident illustrates that the consequences of that weakness scale directly with the sensitivity of what those credentials can reach.

3 practical actions

  1. Audit shared accounts and API keys across all third-party vendor environments: Shared credentials in contractor environments are the documented access vector in this incident. Review whether your organization’s most sensitive systems and tools are accessible through shared accounts rather than individually attributable credentials, and remediate accordingly.
  2. Apply least-privilege access controls to advanced AI tool integrations: Organizations that have been granted access to restricted AI platforms, including Mythos and similar tools, should ensure that access is scoped to specific users, time-bounded where possible, and subject to the same access review cadence as any other privileged system.
  3. Monitor for anomalous usage patterns on AI platform API keys: Unauthorized access that persists over time, as this one reportedly did, should be detectable through usage monitoring. Establish baseline usage profiles for authorized API keys and alert on volume, timing, or behavioral anomalies that fall outside expected patterns.