What happened
A widespread scanning campaign that began in late December 2025 has targeted misconfigured proxies to reach paid LLM services, with threat monitoring platform GreyNoise documenting over 80,000 probing sessions against more than 73 AI model endpoints. Attackers identified publicly reachable or improperly configured proxy servers and used them to test connectivity to commercial large language model (LLM) APIs without triggering typical security alerts. The targeted models included OpenAI (GPT-4o and variants), Anthropic (Claude Sonnet, Opus, and Haiku), Meta (Llama 3.x), Google (Gemini), DeepSeek (DeepSeek-R1), Mistral, Alibaba (Qwen), and xAI (Grok). The campaign used server-side request forgery (SSRF) techniques and low-noise probing to enumerate accessible AI model endpoints, a pattern that suggests organized reconnaissance rather than confirmed post-discovery exploitation. GreyNoise researchers reported no observed data theft or model abuse, but the activity still represents unauthorized access to paid services.
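Defenders can turn the attackers' connectivity probe inward. The sketch below, a minimal example assuming Python with the `requests` library, checks whether one of your own proxies relays unauthenticated requests to LLM API hosts; the proxy address and host list are illustrative placeholders, not values from the GreyNoise report.

```python
"""Minimal self-test: does our proxy relay unauthenticated requests
to commercial LLM APIs? Proxy address and host list are illustrative
assumptions; adapt both to your environment."""
import requests

PROXY = "http://proxy.internal.example:3128"  # hypothetical proxy under test
LLM_API_URLS = [                              # illustrative endpoints only
    "https://api.openai.com/v1/models",
    "https://api.anthropic.com/v1/models",
    "https://api.mistral.ai/v1/models",
]

for url in LLM_API_URLS:
    try:
        # No proxy credentials supplied: a refusal or HTTP 407 is the
        # healthy outcome; a relayed response means the proxy is open.
        resp = requests.get(url, proxies={"http": PROXY, "https": PROXY},
                            timeout=5)
        if resp.status_code == 407:
            print(f"{url}: proxy demanded authentication (expected)")
        else:
            print(f"{url}: relayed (HTTP {resp.status_code}), investigate")
    except requests.exceptions.ProxyError:
        print(f"{url}: proxy refused unauthenticated relay (expected)")
    except requests.exceptions.RequestException as exc:
        print(f"{url}: request failed ({exc.__class__.__name__})")
```

Any "relayed" result indicates a proxy that external scanners could use the same way the campaign did, even without valid LLM API keys of their own.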
Who is affected
Organizations and service operators running exposed or misconfigured proxy servers face indirect exposure: their infrastructure can be used as a relay for unauthorized access to paid LLM APIs, although no confirmed exploitation of core AI platforms has been observed.
Why CISOs should care
This incident highlights the operational risk of misconfigured network infrastructure: open proxies can be conscripted to reach valuable third-party resources, creating potential financial liability for metered API usage, service misuse attributed to the organization, and increased attack-surface visibility for adversaries.
3 practical actions
- Audit proxy configurations: Review and secure all proxy servers to enforce strict authentication and access controls.
- Implement network filtering: Apply egress filters and rate limiting to detect and block unauthorized outbound traffic to LLM API endpoints.
- Monitor API usage: Track anomalous access patterns to paid LLM endpoints to detect misuse (see the log-analysis sketch after this list).
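
As a starting point for the second and third actions, the sketch below scans a forward-proxy access log for requests to well-known LLM API hosts and flags clients above a volume threshold. The log path, field positions (Squid's native format is assumed), host list, and threshold are all assumptions to tune to your own baseline.

```python
"""Minimal sketch: flag proxy clients generating outbound traffic to
LLM API hosts. Assumes a Squid-style access log; the log path, field
positions, host list, and threshold are illustrative assumptions."""
from collections import Counter

LLM_HOST_FRAGMENTS = (               # illustrative, not exhaustive
    "api.openai.com", "api.anthropic.com", "api.mistral.ai",
    "generativelanguage.googleapis.com", "api.deepseek.com",
)
THRESHOLD = 50                       # tune to your normal traffic

hits = Counter()
with open("/var/log/squid/access.log") as log:  # hypothetical path
    for line in log:
        fields = line.split()
        if len(fields) < 7:
            continue
        client, url = fields[2], fields[6]      # Squid native log layout
        if any(host in url for host in LLM_HOST_FRAGMENTS):
            hits[client] += 1

for client, count in hits.most_common():
    if count >= THRESHOLD:
        print(f"ALERT: {client} made {count} requests to LLM APIs")
```

Feeding the same host list into egress firewall rules or SIEM alerts extends this from after-the-fact review to real-time detection.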
