NSA Confirms Use of Anthropic’s Mythos Despite Pentagon Blacklist


What happened

The NSA is actively deploying Anthropic’s Mythos Preview, according to an Axios report published April 19, 2026, despite the Department of Defense having designated Anthropic a “Supply-Chain Risk to National Security” and directing all military contractors to cease commercial activity with the company.

The conflict traces back to a $200 million contract Anthropic signed with the DoD in July 2025, which included explicit restrictions prohibiting use of its AI for mass domestic surveillance and fully autonomous weapons systems. The arrangement broke down in January 2026 when Defense Secretary Hegseth issued a memo demanding “any lawful use” language across all DoD AI contracts, effectively requiring Anthropic to remove those safety restrictions. Anthropic refused. The Pentagon responded in late February by issuing the supply-chain risk designation, and President Trump separately ordered all federal agencies to halt use of Anthropic technology, with a six-month phase-out window for already-integrated systems.

Despite that sweeping ban, two sources cited by Axios said Mythos Preview is being used “more widely” within the NSA, with usage extending across other parts of the Department of Defense. Anthropic has restricted Mythos access to approximately 40 organizations, given concerns over its offensive cybersecurity capabilities, and has publicly acknowledged only 12 of them. The NSA is reportedly among the unnamed organizations with access. In the UK, the NSA’s counterparts are said to access Mythos through the AI Security Institute.

Anthropic filed suit in San Francisco in March 2026, calling the Pentagon’s supply-chain designation “unprecedented and unlawful” and alleging violations of free speech and due process. The case is ongoing. On April 17, Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent to discuss government use of Mythos and a potential roadmap for broader federal adoption.

Who is affected

The contradiction sits squarely within the U.S. intelligence and defense community. Federal agencies operating under the presidential order to phase out Anthropic technology face an ambiguous compliance environment when one of the DoD’s own constituent agencies is actively running the most restricted version of the banned platform. Defense contractors and suppliers who were directed to cease commercial activity with Anthropic are also in an unclear position.

Why CISOs should care

This is a governance problem dressed up as a policy dispute. The NSA using a model that the Pentagon simultaneously argues poses a national security threat, while that same Pentagon fights Anthropic in court, is not a minor inconsistency. It signals that AI adoption at the operational level is outpacing the institutional frameworks meant to govern it, including within the most security-sensitive agencies in the world.

For security leaders, the broader lesson is about organizational coherence. When top-level policy and ground-level adoption diverge this sharply, oversight breaks down, accountability becomes ambiguous, and risk accumulates in the gaps. That dynamic is not unique to government.

3 practical actions

  1. Audit AI tool usage across all business units against your approved vendor list: The NSA-Anthropic situation is an extreme version of a common enterprise problem, where teams adopt AI tools operationally before governance catches up. Know what is actually running in your environment.
  2. Establish clear escalation paths when AI vendor relationships change status: If a vendor is sanctioned, flagged, or placed under review, there should be a defined process for identifying all active integrations and determining which require immediate action versus a managed transition.
  3. Track the Anthropic-Pentagon legal proceedings as a governance precedent: How courts and the executive branch resolve the tension between contractual AI safety restrictions and government “any lawful use” demands will shape how AI contracts across the federal supply chain are structured going forward.
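The first two actions above amount to a reconciliation pass: compare what is actually deployed against the approved vendor list and flag the gaps. A minimal sketch of that check is below; all vendor names, tool names, and data structures here are hypothetical placeholders, not drawn from any specific product or from the reporting above.

```python
# Hypothetical sketch: reconcile observed AI tool deployments against
# an approved-vendor list. In practice the inventory would come from
# asset management, SSO logs, or network telemetry.

APPROVED_VENDORS = {"vendor-a", "vendor-b"}  # illustrative approved list

# Example inventory: tool name -> (vendor, business unit)
deployed_tools = {
    "chat-assist": ("vendor-a", "engineering"),
    "code-helper": ("vendor-c", "finance"),   # vendor not on approved list
    "doc-summarizer": ("vendor-b", "legal"),
}

def audit(tools, approved):
    """Return the subset of tools whose vendor is not approved."""
    return {
        name: (vendor, unit)
        for name, (vendor, unit) in tools.items()
        if vendor not in approved
    }

violations = audit(deployed_tools, APPROVED_VENDORS)
for name, (vendor, unit) in sorted(violations.items()):
    print(f"UNAPPROVED: {name} (vendor={vendor}, unit={unit})")
```

The same structure extends to action 2: when a vendor’s status changes, rerun the audit with the vendor moved off the approved list, and the output becomes the list of integrations needing immediate action or a managed transition.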
