GenAI-Powered Web Attacks Dynamically Generate Malicious JavaScript in Victims’ Browsers


What happened

Attackers can use GenAI to turn an already-loaded, clean page malicious within seconds by embedding hidden prompt instructions in otherwise benign webpages, then requesting code from the public APIs of AI services such as Google Gemini and DeepSeek. The report described attackers using prompt-engineering techniques to induce AI systems to generate malicious JavaScript at runtime, which then executes directly in the victim’s browser and turns the page into phishing or credential-stealing content. Because the payload is generated and executed only at runtime, the technique leaves little static evidence on the site itself, and each visit can produce polymorphic variations that evade signature-based detection. The technique was demonstrated in research and proof-of-concept work by Palo Alto Networks Unit 42, which described how trusted AI service domains can make network-based filtering less effective when malicious code is fetched from reputable endpoints.
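To make the "hidden prompt instructions" mechanic concrete, defenders can scan page HTML for hidden elements containing prompt-like text. The sketch below is a crude illustration, not a production scanner: the marker phrases and hidden-style heuristics are assumptions for demonstration, and real hidden prompts would vary far more widely.

```python
import re

# Hypothetical marker phrases (assumptions for illustration); a production
# scanner would need far richer heuristics or a trained classifier.
PROMPT_MARKERS = [
    r"ignore (all )?(previous|prior) instructions",
    r"generate (java)?script",
    r"respond only with code",
]

# Inline styles commonly used to hide injected text from human readers.
HIDDEN_STYLE = re.compile(
    r"display\s*:\s*none|visibility\s*:\s*hidden|font-size\s*:\s*0",
    re.IGNORECASE,
)

def find_hidden_prompts(html: str) -> list[str]:
    """Return the text of hidden elements that contain prompt-like phrases."""
    hits = []
    # Match any tag carrying an inline style attribute and capture its text.
    for m in re.finditer(r'<(\w+)[^>]*style="([^"]*)"[^>]*>(.*?)</\1>',
                         html, re.IGNORECASE | re.DOTALL):
        style, text = m.group(2), m.group(3)
        if HIDDEN_STYLE.search(style) and any(
            re.search(p, text, re.IGNORECASE) for p in PROMPT_MARKERS
        ):
            hits.append(text.strip())
    return hits
```

A page weaponized this way would look clean to a casual viewer while a hidden `div` carries the instructions the AI API later turns into executable JavaScript.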

Who is affected

Organizations whose users browse compromised or weaponized sites are directly exposed to credential theft and session compromise. Security teams are indirectly affected because runtime-generated, polymorphic scripts can reduce the effectiveness of static scanning and signature-based web security controls.

Why CISOs should care

Runtime, AI-generated payloads shift web threat detection toward behavioral controls and browser execution monitoring. If malicious code originates from trusted AI domains, network allowlists and domain reputation controls can become liabilities, increasing the chance of credential theft and enterprise session hijacking.
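One browser-native control worth noting here: a strict Content-Security-Policy denies both halves of this technique, since `script-src 'self'` (with no `unsafe-inline`/`unsafe-eval`) blocks execution of dynamically constructed script, and `connect-src 'self'` blocks client-side fetches to third-party AI API endpoints. The directive values below are an illustrative sketch, not a drop-in policy; real sites will need to allowlist their own legitimate origins.

```
Content-Security-Policy: script-src 'self'; connect-src 'self'; object-src 'none'; base-uri 'none'
```

This only helps on sites an organization controls; it does not protect users visiting an attacker-controlled or compromised third-party page, which is why the runtime and network monitoring below still matter.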

3 practical actions

  • Enhance browser runtime protection: Deploy or tune controls that detect suspicious in-browser script behavior, credential harvesting, and dynamic DOM manipulation. 
  • Monitor AI API usage from endpoints: Alert on unusual client-side requests to AI service APIs from general browsing contexts and investigate anomalous prompt-like traffic. 
  • Strengthen anti-phishing controls: Use phishing-resistant authentication and conditional access to reduce impact if browser credentials or sessions are compromised.
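The second action above can be prototyped against proxy logs: direct API calls to AI-service hosts originating from a browser user agent during general web browsing are unusual enough to warrant a look. In this sketch, the hostnames and the event-tuple shape are assumptions, not a vetted blocklist or a specific proxy's log format.

```python
from collections import Counter

# Illustrative AI-service API hostnames (assumptions; tune to your environment).
AI_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.deepseek.com",
    "api.openai.com",
}

def flag_ai_api_clients(proxy_events, threshold=1):
    """Given (client_ip, dest_host, user_agent) proxy events, return clients
    whose browser-originated traffic hits AI API endpoints at least
    `threshold` times. Browser UAs are crudely identified by "Mozilla"."""
    counts = Counter()
    for client_ip, dest_host, user_agent in proxy_events:
        if dest_host in AI_API_HOSTS and "Mozilla" in user_agent:
            counts[client_ip] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}
```

Flagged clients are investigation leads, not verdicts; sanctioned browser extensions and AI-enabled web apps will also call these endpoints, so expect to build an allowlist of known-good sources.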