GenAI-Powered Web Attacks Dynamically Generate Malicious JavaScript in Victims’ Browsers

What happened

Hackers can use GenAI to turn a clean, already-loaded page malicious within seconds by embedding hidden prompt instructions in otherwise benign webpages, then requesting code from the public APIs of AI services such as Google Gemini and DeepSeek. The report described attackers using prompt-engineering techniques to induce AI systems to generate malicious JavaScript at runtime, which executes directly in the victim’s browser and turns the page into phishing or credential-stealing content. Because the payload is generated and executed only at runtime, the technique leaves little static evidence on the site itself, and each visit can produce polymorphic variations that evade signature-based detection. The activity was attributed to research and proof-of-concept work by Palo Alto Networks Unit 42, which described how trusted AI service domains can make network-based filtering less effective when malicious code is fetched from reputable endpoints.
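The polymorphism point can be illustrated with a small sketch: two scripts that behave identically but differ by a single cosmetic character produce entirely different hashes, so a signature or hash blocklist built from one observed variant misses the next. The script strings below are harmless placeholders, not actual payloads.

```python
import hashlib

# Two functionally identical scripts with a trivial cosmetic difference,
# standing in for per-visit polymorphic variants of a runtime-generated payload.
variant_a = "document.title = 'x'; // comment A"
variant_b = "document.title = 'x'; // comment B"

sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

# A detection rule keyed on one variant's hash will not match the other.
print(sig_a == sig_b)  # False: same behavior, different signatures
```

This is why the report's emphasis falls on behavioral rather than static detection: the payload's observable behavior is stable across variants even when its bytes are not.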

Who is affected

Organizations whose users browse compromised or weaponized sites are directly affected through credential theft and session compromise risk. Security teams are indirectly affected because runtime-generated, polymorphic scripts can reduce the effectiveness of static scanning and signature-based web security controls.

Why CISOs should care

Runtime, AI-generated payloads shift web threat detection toward behavioral controls and browser execution monitoring. If malicious code originates from trusted AI domains, network allowlists and domain reputation controls can become liabilities, increasing the chance of credential theft and enterprise session hijacking.

3 practical actions

  • Enhance browser runtime protection: Deploy or tune controls that detect suspicious in-browser script behavior, credential harvesting, and dynamic DOM manipulation. 
  • Monitor AI API usage from endpoints: Alert on unusual client-side requests to AI service APIs from general browsing contexts and investigate anomalous prompt-like traffic. 
  • Strengthen anti-phishing controls: Use phishing-resistant authentication and conditional access to reduce impact if browser credentials or sessions are compromised.
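The second action can be sketched as a simple proxy-log filter: flag browser-originated requests to AI-service API endpoints, since sanctioned AI tooling typically calls these APIs from server-side SDKs rather than from ordinary browsing sessions. The domain list and log-record shape below are illustrative assumptions, not a definitive ruleset; tune both to your proxy's schema and your organization's sanctioned endpoints.

```python
# Assumed AI-service API hosts to watch; adjust for your environment.
AI_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API endpoint (assumption)
    "api.deepseek.com",                   # DeepSeek API endpoint (assumption)
}

def flag_suspicious(records):
    """Yield proxy-log records where a general-browsing client calls an AI API.

    Each record is assumed to be a dict with 'host' and 'user_agent' keys.
    """
    for rec in records:
        host = rec.get("host", "")
        agent = rec.get("user_agent", "")
        # A browser user agent hitting an AI API directly during ordinary
        # browsing is the anomaly this rule targets.
        if host in AI_API_DOMAINS and "Mozilla" in agent:
            yield rec

# Example: only the direct browser-to-AI-API request is flagged.
logs = [
    {"host": "api.deepseek.com", "user_agent": "Mozilla/5.0", "src": "10.0.0.5"},
    {"host": "example.com", "user_agent": "Mozilla/5.0", "src": "10.0.0.6"},
]
flagged = list(flag_suspicious(logs))
print(flagged)  # one record: the api.deepseek.com request
```

A production rule would also baseline legitimate AI usage per business unit before alerting, to keep false positives manageable.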