Threat Actors Use AI to Automate Zero-Day Discovery and Exploitation at Machine Speed

What happened

Cyberthint analysts have documented a structural shift in how cyberattacks are conducted, with threat actors now using artificial intelligence to discover and exploit zero-day vulnerabilities in minutes rather than months. The firm identified this transition in late 2024, noting that AI is operating not just as a research assistant but as an active attacker capable of scanning networks, identifying weaknesses, attempting exploits, and switching tactics autonomously when one approach fails.

The most concrete case study is GAMECHANGE, identified in mid-September 2024 and assessed with high confidence as a Chinese state-backed operation. GAMECHANGE targeted roughly 70 global entities including technology companies, financial institutions, and government agencies, successfully compromising four. The malware was written in Python, compiled into a Windows executable, and delivered from compromised email accounts impersonating Ukrainian ministry representatives. Its distinguishing characteristic was that its instructions were not hardcoded. Instead, it sent real-time queries to Alibaba’s Qwen-Coder model via the Hugging Face API, generating commands dynamically. It embedded unique API tokens to resist blacklisting and collected hardware, process, network, and Active Directory data while recursively copying Office documents and PDFs. MITRE’s analysis described GAMECHANGE as a pilot program testing LLM capabilities before broader deployment.

Two additional AI-powered malware families were also documented. MalTerminal, presented by SentinelLABS at LABScon 2024, generates malicious payloads at runtime by querying a GPT-4 endpoint, producing encryption and exfiltration code entirely in memory without writing to disk. JSOUTFMUT, a VBScript dropper discovered in June 2024, queries the Gemini Flash API hourly for new obfuscation techniques, generating a fresh variant every hour while spreading through removable drives and network shares. In February 2025, MITRE expanded its ATT&CK framework to cover AI-orchestrated operations, confirming the threat category has matured into a recognized industry concern.

Who is affected

Technology companies, financial institutions, and government agencies are the confirmed target categories in the GAMECHANGE campaign. The broader implication extends to any organization that assumes attack timelines leave room for reactive patching, since AI-driven exploitation can compress the window between vulnerability discovery and active use to minutes.

Why CISOs should care

The GAMECHANGE campaign demonstrates that AI-orchestrated attacks are not theoretical. They have been deployed at scale by a state-backed actor, tested as a pilot for broader use, and documented well enough that MITRE has updated its threat framework to account for them. The combination of real-time LLM command generation, unique API token embedding to resist blacklisting, and runtime payload generation without writing to disk creates a threat profile that defeats several categories of traditional detection.

The most operationally significant shift is the one Cyberthint identifies directly: Mean Time to Contain now matters more than Mean Time to Detect. When attack speed outpaces patching cycles, the outcome is determined by how quickly a breach is contained, not how quickly it is found.

3 practical actions

  1. Add AI API traffic to your network monitoring scope and scan binaries for embedded LLM prompt structures: GAMECHANGE embedded API tokens and sent queries to external LLM endpoints as its command-and-control mechanism. Monitoring for unexpected outbound connections to AI API endpoints and using YARA rules to detect embedded JSON prompt structures in binaries are among the most effective detection methods for this malware class.
  2. Shift detection engineering toward anomaly-based signals rather than static IOCs: AI-generated malware that mutates hourly and generates payloads in memory renders signature-based detection unreliable. Prioritize behavioral anomaly detection including unexpected SMB admin share usage, high-entropy DNS queries, and unusual Active Directory enumeration patterns that persist regardless of how the malware presents itself.
  3. Prioritize containment speed over detection speed in your incident response framework: The structural advantage of AI-driven attacks is velocity. Response programs that are optimized around detection and investigation timelines need to be re-evaluated for containment capability, including automated isolation triggers, network segmentation enforcement, and pre-authorized response actions that do not require manual approval chains to execute.
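The binary-scanning idea in action 1 can be approximated without YARA. The sketch below is a minimal Python illustration, not a production detector: the indicator strings (JSON chat-prompt markers and AI API hostnames) are hypothetical examples of what an LLM-querying binary like GAMECHANGE might embed, and real rules would be tuned against observed samples.

```python
import re

# Hypothetical indicators: JSON chat-prompt markers and AI API hostnames
# that a binary querying an LLM endpoint might carry embedded in its body.
PROMPT_MARKERS = [
    rb'"role"\s*:\s*"(system|user)"',   # chat-style prompt field
    rb'"messages"\s*:\s*\[',            # messages array of a chat request
    rb'"model"\s*:\s*"',                # model selector in the request body
]
AI_API_HOSTS = [
    b"api-inference.huggingface.co",
    b"generativelanguage.googleapis.com",
    b"api.openai.com",
]

def scan_binary(data: bytes) -> dict:
    """Report which indicator categories appear in a binary blob."""
    hits = {
        "prompt_markers": [p.decode() for p in PROMPT_MARKERS
                           if re.search(p, data)],
        "api_hosts": [h.decode() for h in AI_API_HOSTS if h in data],
    }
    # Flag only when both a prompt structure and an AI endpoint co-occur,
    # which cuts false positives from ordinary JSON-handling software.
    hits["suspicious"] = bool(hits["prompt_markers"] and hits["api_hosts"])
    return hits
```

Requiring both indicator categories together mirrors how a YARA rule would combine strings in its condition section; either signal alone is common in legitimate software.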
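The high-entropy DNS signal in action 2 reduces to a Shannon entropy check on query labels. The following is a minimal sketch under assumed parameters: the 3.5 bits-per-character threshold and 12-character minimum are illustrative values, and a production detector would baseline them per environment.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits per character of a DNS label; random-looking labels score high."""
    if not label:
        return 0.0
    n = len(label)
    counts = Counter(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Illustrative cutoffs, not calibrated values: short labels are skipped
# because entropy estimates on a few characters are unreliable.
ENTROPY_THRESHOLD = 3.5
MIN_LABEL_LENGTH = 12

def is_suspicious_query(fqdn: str) -> bool:
    """Flag DNS names whose leftmost label looks machine-generated."""
    label = fqdn.split(".", 1)[0]
    return (len(label) >= MIN_LABEL_LENGTH
            and shannon_entropy(label) > ENTROPY_THRESHOLD)
```

Because this scores the shape of the query rather than a known-bad domain, it keeps working even when malware regenerates its infrastructure or payloads hourly.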