What happened
Google has reported that threat actors are leveraging its Gemini AI models to assist in multiple stages of cyberattacks, from planning to execution. According to the report, Google’s Threat Analysis Group (TAG) observed attackers using Gemini to refine phishing content, generate malicious scripts, and craft social engineering lures adapted to their targets. The abuse extends to automating tasks traditionally performed manually, such as generating reconnaissance queries, customizing malware, and analyzing post-exploitation data. Google characterized this activity as a shift in tactics, with adversaries using AI tools to streamline workflows and lower the skill barrier for complex operations. While Google did not attribute the activity to specific named groups in the disclosed advisory, the company emphasized that Gemini’s generative and reasoning capabilities can be misused across the attack lifecycle when accessed by malicious users.
Who is affected
Organizations and individuals targeted by AI-assisted cyber campaigns are affected: attackers equipped with AI-generated lures, code, and reconnaissance can increase both the scale and the effectiveness of their operations.
Why CISOs should care
The reported abuse of AI models like Gemini across attack stages underscores how generative tools are being co-opted to enhance adversary capabilities, increasing the pace and sophistication of common threats such as phishing, malware development, and automated reconnaissance.
3 practical actions
- Monitor for AI-generated content patterns. Detect phishing and social engineering campaigns that exhibit stylistic or structural traits of AI generation (a minimal heuristic sketch follows this list).
- Update threat detection models. Incorporate signals that identify automation and AI-assisted malicious behaviors.
- Educate users on evolving phishing tactics. Train stakeholders on recognizing increasingly sophisticated, contextually tailored lures.
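To make the first action concrete, the sketch below shows one way a detection team might prototype a coarse screen for AI-generation traits in inbound message text. The traits, phrase lists, weights, and threshold logic are illustrative assumptions, not signals published by Google, and a heuristic like this should only feed into a broader phishing detection pipeline rather than act as a verdict on its own.

```python
import re
import statistics

# Illustrative heuristics only: these traits loosely correlate with
# machine-generated or templated text, but none is conclusive.
def ai_generation_score(body: str) -> float:
    sentences = [s.strip() for s in re.split(r"[.!?]+", body) if s.strip()]
    if len(sentences) < 3:
        return 0.0  # too little text to judge

    score = 0.0
    lengths = [len(s.split()) for s in sentences]

    # 1. Very uniform sentence lengths (low variance) can hint at templating.
    if statistics.pstdev(lengths) < 3:
        score += 0.3

    # 2. Stock transition phrases that generative models tend to overuse.
    transitions = ("furthermore", "additionally", "moreover", "it is important to note")
    hits = sum(body.lower().count(t) for t in transitions)
    score += min(0.3, 0.1 * hits)

    # 3. Unusually polished text: no doubled words and no doubled spaces
    #    (a crude proxy for the absence of human typos).
    if not re.search(r"\b(\w+)\s+\1\b", body.lower()) and "  " not in body:
        score += 0.2

    # 4. Second-person urgency phrasing, common in tailored lures.
    urgency = ("verify your", "immediately", "your account will", "act now")
    if any(u in body.lower() for u in urgency):
        score += 0.2

    return min(score, 1.0)

if __name__ == "__main__":
    sample = (
        "Furthermore, we noticed unusual activity on your account. "
        "It is important to note that access will be suspended. "
        "Please verify your details immediately to avoid interruption."
    )
    print(f"heuristic score: {ai_generation_score(sample):.2f}")
```

In practice, a score like this would be one feature among many in an email security stack, with weights tuned against real traffic rather than the arbitrary values shown here.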
