New funding round elevates privacy‑first AI security — €2.1M raised to power encrypted AI workloads

What happened

Research‑driven cybersecurity startup Mirror Security, a spin‑out from University College Dublin (UCD), has raised €2.1 million (about USD 2.5 million) in a pre‑seed round led by Sure Valley Ventures and Atlantic Bridge, with support from strategic angel investors. The funding will accelerate development of Mirror Security’s encryption platform for AI workloads and expand its engineering and AI‑security teams in Ireland, the US, and India.

Mirror Security’s stack, including its AI‑security modules and a Fully Homomorphic Encryption (FHE) engine named “VectaX,” aims to let AI systems process sensitive data while keeping it encrypted end‑to‑end. With FHE, computations run directly on ciphertexts, so sensitive inputs never need to be decrypted, even while a model is training on or scoring them (sketched below).
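Mirror Security has not published VectaX’s API, so the following is only a minimal sketch of the idea of computing on ciphertexts, using the open‑source TenSEAL library (CKKS scheme). The feature vector, weights, and names are illustrative, not the company’s implementation:

```python
# Sketch: a linear-model score computed on encrypted data with CKKS FHE.
# Requires: pip install tenseal. All data and names below are illustrative.
import tenseal as ts

# Client side: create an FHE context and encrypt a sensitive feature vector.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()  # enables the rotations used by dot products

patient_features = [0.7, 1.2, 3.4, 0.05]  # sensitive plaintext data
enc_features = ts.ckks_vector(context, patient_features)

# Server side: evaluate a model on the ciphertext without ever decrypting it.
weights = [0.25, -0.1, 0.8, 1.5]
bias = 0.3
enc_score = enc_features.dot(weights) + bias  # homomorphic dot product

# Client side: only the secret-key holder can read the result.
print(f"score computed under encryption: {enc_score.decrypt()[0]:.4f}")
```

The server in this sketch never sees the plaintext features or the score; that property, data staying encrypted while in use, is what the funding announcement centers on.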

The company also announced a strategic partnership with Inception AI (part of G42) and collaborations with Intel, MongoDB, Qdrant, and other AI‑infrastructure players.

Who is affected

  • Enterprises and governments looking to adopt AI, especially those handling sensitive or proprietary data, stand to benefit, as the technology promises confidentiality during AI training and inference.
  • AI developers and vendors integrating models into their products could leverage FHE-based platforms to offer stronger data guarantees.
  • Existing AI‑adoption projects with regulatory, privacy, or compliance constraints may find a viable path toward secure AI deployment.

Why CISOs should care

  • As organizations integrate AI into business processes, traditional encryption falls short: it protects data at rest and in transit, but data must typically be decrypted before an AI model can process it. Mirror Security’s approach keeps data encrypted even during computation, closing that exposure window.
  • Widespread adoption of FHE‑optimized AI security platforms could raise the baseline for data protection in AI workloads, and CISOs who pilot the technology early will be better placed when customers, auditors, or regulators begin expecting encryption‑in‑use.
  • For industries under strict privacy or regulatory requirements, FHE could be the deciding factor, letting companies harness AI without compromising data confidentiality or compliance.

3 Practical Actions for CISOs

  1. Assess AI‑data sensitivity across your organization: Map where AI is or will be used and catalogue which data sets are sensitive or regulated. Use that baseline to evaluate whether encryption‑in‑use is warranted.
  2. Engage privacy‑preserving AI vendors and pilot encrypted‑AI trials: Reach out to Mirror Security or others in the FHE/AI‑security space, and run small‑scale pilots to test performance, latency, and compatibility with your existing AI workloads and compliance requirements (a minimal latency sketch follows this list).
  3. Update AI governance and procurement policies: Define security and compliance requirements for AI projects, mandating encryption-in-use where applicable; embed evaluation of encrypted-compute options in vendor procurement or AI‑project planning.
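
For the pilot in step 2, a useful first measurement is simply timing an encrypted operation against its plaintext equivalent on a workload you own. A minimal sketch, again assuming the open‑source TenSEAL library as a stand‑in (vendor engines such as VectaX will differ, and the vector sizes here are arbitrary):

```python
# Sketch: first-cut latency comparison, plaintext vs. encrypted scoring.
# Requires: pip install tenseal. Sizes and iteration counts are arbitrary.
import time
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()

features = [float(i) for i in range(64)]
weights = [0.01 * i for i in range(64)]

# Plaintext baseline: 100 dot products.
start = time.perf_counter()
for _ in range(100):
    plain_score = sum(f * w for f, w in zip(features, weights))
plain_ms = (time.perf_counter() - start) * 10  # ms per operation

# Encrypted path: encrypt once, then score under encryption 100 times.
enc_features = ts.ckks_vector(context, features)
start = time.perf_counter()
for _ in range(100):
    enc_score = enc_features.dot(weights)
enc_ms = (time.perf_counter() - start) * 10

print(f"plaintext: {plain_ms:.4f} ms/op, encrypted: {enc_ms:.2f} ms/op, "
      f"~{enc_ms / plain_ms:.0f}x overhead")
```

Expect the encrypted path to be orders of magnitude slower than plaintext in a naive setup like this; narrowing that gap is what FHE‑optimized engines promise, and it is exactly what a pilot should verify on your own models and data.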