State-Linked Hackers Target AI Development Platforms in New Abuse Campaign

What happened

Factory, a San Francisco-based AI development platform, reported that it disrupted a campaign run by a state-linked threat group. The attackers attempted to hijack Factory's development environment and AI coding tools for use inside a larger global cyber fraud network. The group relied on AI-based coding agents to manage infrastructure, move across multiple AI products, and evade detection.

Who is affected

The direct target was Factory, but the incident affects any organization that uses AI development platforms or AI-powered tools. The attackers exploited common onboarding paths and the free-tier access that many AI providers offer. Companies that use AI tools for development, automation, or operations should treat this incident as directly relevant.

Why CISOs should care

This incident shows that AI development platforms are now valuable attack surfaces in their own right. Threat actors can abuse AI tools to scale criminal operations, and the campaign signals a growing trend of attackers combining AI-driven automation with traditional cybercrime. As more organizations adopt AI tools, security teams need to treat these platforms as part of the core attack surface rather than as add-ons.

3 practical actions

  1. Audit all AI platforms used across the company and identify any that rely on free or trial access (a minimal sketch of such an audit follows this list).

  2. Tighten onboarding and access controls for AI tools. Apply least privilege and monitor all activity.

  3. Add AI platforms to threat models, vendor reviews, and risk assessments to account for provider-level compromise or abuse.
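
For action 1, a lightweight starting point is scripting over whatever tool inventory the company already keeps. The minimal sketch below assumes a CSV export named ai_tool_inventory.csv with name, owner, and tier columns; the file name, column names, and tier labels are illustrative assumptions, not a standard format.

```python
import csv

# Hypothetical inventory export: one row per AI tool, with a "tier"
# column recording how the account was provisioned ("free", "trial",
# "paid", ...). The file name and columns are assumptions for this sketch.
INVENTORY_FILE = "ai_tool_inventory.csv"
RISKY_TIERS = {"free", "trial"}

def find_unmanaged_ai_tools(path: str) -> list[dict]:
    """Return inventory rows whose access tier suggests ad hoc,
    unreviewed onboarding (the free/trial paths abused in this campaign)."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("tier", "").strip().lower() in RISKY_TIERS:
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for tool in find_unmanaged_ai_tools(INVENTORY_FILE):
        name = tool.get("name", "unknown")
        owner = tool.get("owner", "unknown")
        print(f"Review access for: {name} (owner: {owner})")
```

The flagged rows give security teams a concrete starting list for the onboarding and access-control reviews in action 2.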