Neural Concept Raises $100M to Accelerate AI-Driven Engineering

What happened

Swiss AI engineering software provider Neural Concept closed a $100 million Series C funding round led by Growth Equity at Goldman Sachs Alternatives, with participation from existing backers including Forestay Capital, Alven, HTGF, D.E. Shaw Ventures, and Aster Capital. The capital will be used to scale its AI-native engineering platform, expand globally, and further integrate advanced AI capabilities into CAD and simulation workflows.

Who is affected

Large industrial engineering teams across sectors such as automotive, aerospace, energy, semiconductors, and consumer electronics are the primary users of Neural Concept’s platform, which embeds AI directly into product development workflows to shorten design cycles and optimize product performance.

Why CISOs should care

Neural Concept’s expansion pushes enterprise AI deeper into core engineering systems, underscoring how quickly AI is being integrated into mission-critical workflows that handle intellectual property and highly sensitive design data. As these platforms scale, they create new attack surfaces and data governance challenges, especially around secure access to AI models, supply chain risk, and third-party integrations with partners such as Nvidia, Siemens, Ansys, Microsoft, and AWS.

3 practical actions for CISOs

  1. Inventory and classify engineering AI assets: Know where AI tools like Neural Concept are being used, what data they access, and who has access (a minimal inventory sketch follows this list).
  2. Assess third-party AI risks: Evaluate security controls and contractual obligations of AI platform vendors and their partners to ensure strong data protection and incident response alignment.
  3. Integrate AI platform oversight into governance: Extend existing cybersecurity policies to cover AI model lifecycle management, secure APIs, and monitoring for anomalous use patterns that could indicate compromise or data exfiltration.
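To make the first action concrete, here is a minimal, hypothetical sketch in Python of an engineering-AI asset inventory: each entry records the data classifications a tool touches, its third-party integrations, and the groups with access, and entries touching restricted design data are flagged for review. All tool names, field names, and classifications are illustrative assumptions, not details of Neural Concept’s product.

```python
"""Illustrative inventory of engineering AI assets (hypothetical data)."""
from dataclasses import dataclass, field


@dataclass
class AIEngineeringAsset:
    name: str                      # AI-driven engineering tool or platform
    business_unit: str             # team that owns the deployment
    data_classes: list[str]        # classifications of data the tool accesses
    integrations: list[str]        # third-party platforms it connects to
    user_groups: list[str] = field(default_factory=list)  # who has access


# Example entries; in practice these would come from a CMDB or discovery scan.
inventory = [
    AIEngineeringAsset(
        name="ai-cad-simulation-platform",
        business_unit="aero-structures",
        data_classes=["design-ip", "simulation-results"],
        integrations=["siemens-nx", "ansys", "aws"],
        user_groups=["cae-engineers", "design-leads"],
    ),
    AIEngineeringAsset(
        name="internal-surrogate-models",
        business_unit="powertrain",
        data_classes=["test-data"],
        integrations=["azure"],
        user_groups=["ml-platform"],
    ),
]

# Data classes that should trigger priority security review.
RESTRICTED = {"design-ip", "export-controlled"}

for asset in inventory:
    touched = RESTRICTED.intersection(asset.data_classes)
    if touched:
        print(f"[REVIEW] {asset.name}: handles {sorted(touched)}, "
              f"accessible to {asset.user_groups}, "
              f"integrates with {asset.integrations}")
```

A simple flagged list like this gives security teams a starting point for the third-party risk assessments and governance controls described in actions 2 and 3.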