Helmet Security Raises $9M to Shield AI-to-AI Communications

What happened

Helmet Security emerged from stealth mode and announced a $9 million funding round led by SYN Ventures and WhiteRabbit Ventures. The startup, co-founded by Fred Kneip (CEO) and Kaushik Shanadi (CTO), is launching a platform to secure communications between AI agents, software, and data. It focuses on the underlying layer driving these interactions: the Model Context Protocol (MCP). 

Who is affected

Organizations deploying AI-driven systems, especially those using agentic AI or AI tools that interface directly with internal applications or sensitive data, are the primary audience. Enterprises using MCP-enabled connections are at risk if those connections remain unmonitored; Helmet Security’s solution targets exactly that gap.

Why CISOs should care

  • As AI adoption accelerates, connections between AI agents and enterprise systems create new attack surfaces that standard security tooling may not monitor. Helmet’s platform provides visibility into MCP-based connections, logs their traffic, and enforces policy compliance, closing a blind spot in traditional security architecture.
  • The risk of unmonitored AI-to-AI or AI-to-software communication isn’t theoretical: uncontrolled agentic communications could lead to unauthorized data access, data leakage, or misuse of systems. Helmet’s approach helps mitigate those risks while preserving agility for developers.
  • The platform is designed to integrate with existing endpoint detection and response (EDR) tooling, so CISOs don’t need a full overhaul; they can layer this protection over what’s already in place.

3 Practical Actions for CISOs

  1. Inventory AI connectivity: Assess which AI tools in your environment rely on MCP or similar protocols to connect with internal applications or data. Map out all agentic connections, even those introduced by development or automation teams.
  2. Adopt specialized monitoring: Consider deploying a solution to continuously monitor, log, and enforce security controls on AI-to-system communications, rather than relying solely on standard network or EDR tools.
  3. Establish AI communication policies: Define and enforce policies for how AI agents may interact with corporate systems and data. That includes access control, monitoring, and alerting for anomalous or unauthorized agentic behaviors.
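Helmet’s product internals aren’t public, but the idea behind actions 2 and 3 can be illustrated. MCP messages are JSON-RPC 2.0, so a monitoring layer can sit between an agent and an MCP server, audit-log every tool invocation, and block calls outside an approved allowlist. A minimal sketch (the tool names, allowlist, and `inspect_message` helper below are hypothetical, not part of any real product or the MCP SDK):

```python
import json

# Hypothetical policy: MCP tools this agent is approved to invoke.
ALLOWED_TOOLS = {"search_docs", "read_ticket"}

def inspect_message(raw: str, allowed_tools=ALLOWED_TOOLS) -> bool:
    """Return True if a JSON-RPC message may pass, False if it should be blocked.

    In MCP, a tool invocation arrives as a "tools/call" request with the
    tool's name under params.name; other traffic (initialize, tools/list,
    notifications) is passed through but could be logged the same way.
    """
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError:
        return False  # malformed traffic is dropped rather than forwarded
    if msg.get("method") == "tools/call":
        tool = msg.get("params", {}).get("name")
        print(f"AUDIT tools/call name={tool}")  # audit log of every invocation
        return tool in allowed_tools
    return True  # non-tool-call traffic passes

# An agent invoking an unapproved tool gets blocked and logged:
verdict = inspect_message(
    '{"jsonrpc":"2.0","id":7,"method":"tools/call",'
    '"params":{"name":"delete_records","arguments":{}}}'
)
```

A production version would sit as a proxy on the MCP transport (stdio or HTTP), forward allowed messages, and raise alerts on denials; the point is that the enforcement hook is a single inspection point on the protocol stream, not a change to the agents themselves.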