Custom Font Rendering Attack Can Poison AI Systems and Deliver Hidden Instructions

Related

CISOs to Watch in Georgia’s Financial Services Sector

Georgia’s financial services sector includes banks, wealth management firms,...

ScreenConnect Vulnerability Exposes Machine Keys, Enables Session Hijacking

What happened ConnectWise disclosed a critical vulnerability in its ScreenConnect...

RondoDox Botnet Targets 174 Vulnerabilities Across Devices and Platforms

What happened Researchers at Bitsight identified a large-scale campaign involving...

11 Cybersecurity Vendors CISOs Must Check Out at RSA Conference 2026

Cybersecurity has shifted from reactive defense to continuous, intelligence-driven...

Share

What happened

Researchers demonstrated a novel attack technique that uses custom fonts and CSS to poison how AI systems interpret web content, exploiting the gap between what users see and what AI models like ChatGPT, Claude, and Gemini read from underlying HTML. The technique allows attackers to present harmless-looking content to users while embedding hidden malicious instructions that are only visible to AI systems. A webpage disguised as a video game fanfiction site displayed normal content to users, while a specially crafted font rendered a separate payload instructing actions such as executing a reverse shell. The attack works because AI assistants process raw HTML text, while browsers render content through visual layers like fonts and styling, creating a mismatch attackers can exploit. 

Who is affected

AI-powered assistants and tools that analyze web content, including enterprise AI systems and browser-integrated assistants, are affected when they process manipulated webpages containing hidden instructions. 

Why CISOs should care

The technique highlights a structural weakness in how AI systems interpret content, where differences between rendered visuals and underlying code can be exploited to inject instructions without user visibility. 

3 practical actions

  1. Validate AI input sources. Ensure AI systems do not blindly trust rendered webpage content without inspecting underlying HTML structures. 
  2. Restrict automated AI actions. Limit the ability of AI assistants to execute sensitive operations based on external content. 
  3. Monitor for prompt injection patterns. Detect hidden or encoded instructions embedded in web content processed by AI systems. 

For more coverage of artificial intelligence security risks and emerging threats, explore our reporting under the AI tag.