What happened
Anthropic announced that a Chinese state-sponsored threat group (which it tracks as GTG-1002) used Claude Code, its agentic coding tool, to automate a cyber-espionage campaign. According to Anthropic, the attackers manipulated Claude into running most of the operation, from scanning target systems to writing exploit code and exfiltrating data.
Anthropic claims the AI handled 80-90% of the work, with human operators intervening only at a few critical decision points.
Anthropic says it detected the campaign in mid-September 2025, shut down the malicious accounts, strengthened its misuse detection, and shared threat intelligence with partners.
However, Anthropic's claims have met widespread skepticism. Security researchers have challenged the lack of publicly shared indicators of compromise (IOCs) and questioned whether current AI systems are truly capable of such autonomous operations.
Who is affected
- Anthropic says ~30 organizations were targeted, including large tech companies, financial institutions, chemical manufacturers, and government agencies.
- The alleged threat actor is a Chinese state-backed group.
- The broader cybersecurity community, especially defenders and threat intelligence teams, is closely watching, as this could mark a shift in how AI is misused for nation-state operations.
Why CISOs should care
- AI dual-use risk is real: If Anthropic’s account is accurate, it illustrates how generative AI models, even those built for benign tasks, can be repurposed as autonomous attack platforms.
- Alert fatigue & detection blind spots: Traditional threat detection may not catch AI-driven campaigns, particularly when operations are broken into small, innocuous tasks to avoid guardrails.
- Arms race intensifies: As attackers increasingly harness agentic AI, security teams must consider how to defend against not just human-led but AI-led intrusions.
Three practical actions for CISOs
- Reassess AI risk models
- Incorporate misuse of internal or third-party AI agents (like code-generating models) into your threat scenarios.
- Engage with your AI vendor(s) to understand how they monitor and mitigate abuse, as well as the safeguards they have in place.
- Strengthen detection and visibility
- Invest in behavioral detection that flags anomalous, high-throughput activity, such as sustained request rates beyond plausible human operator tempo.
- Ensure your incident response playbook includes potential AI-driven steps, such as automated code generation or agent-based workflows.
- Collaborate and share intelligence
- Join threat-intel communities and share findings related to AI misuse.
- Collaborate with AI providers to enhance their capabilities in detecting misuse, conducting audits, and responding effectively. Demand transparency on IOCs, not just high-level claims.
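One way to make the "flag anomalous, high-throughput activity" recommendation concrete is simple rate baselining: compare each window's request count against a rolling statistical baseline and alert on large deviations, since agent-driven operations tend to run at machine speed rather than human tempo. The sketch below is a minimal, illustrative example, not a substitute for a production UEBA or SIEM rule; the window size and z-score threshold are assumptions you would tune to your environment.

```python
from collections import deque
from statistics import mean, stdev


class RateAnomalyDetector:
    """Flags request windows whose volume deviates sharply from the
    recent baseline -- a rough proxy for machine-speed, agent-driven
    activity. History depth and threshold are illustrative defaults."""

    def __init__(self, history: int = 20, z_threshold: float = 3.0):
        self.counts: deque[int] = deque(maxlen=history)
        self.z_threshold = z_threshold

    def observe(self, window_count: int) -> bool:
        """Record one window's request count; return True if it is
        anomalous relative to the rolling baseline."""
        anomalous = False
        if len(self.counts) >= 5:  # require a minimal baseline first
            mu = mean(self.counts)
            sigma = stdev(self.counts) or 1.0  # guard against zero variance
            anomalous = (window_count - mu) / sigma > self.z_threshold
        self.counts.append(window_count)
        return anomalous
```

For example, feeding per-minute API request counts of 4-7 (typical human operator tempo) establishes the baseline, after which a burst of several hundred requests in one window is flagged. Real deployments would baseline per account or per source, and combine rate signals with behavioral ones (task fragmentation, tool-call patterns) rather than volume alone.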
