Neural Concept Raises $100M to Accelerate AI-Driven Engineering

What happened

Swiss AI engineering software provider Neural Concept closed a $100 million Series C funding round led by Growth Equity at Goldman Sachs Alternatives, with participation from existing backers including Forestay Capital, Alven, HTGF, D.E. Shaw Ventures, and Aster Capital. The capital will be used to scale its AI-native engineering platform, expand globally, and further integrate advanced AI capabilities into CAD and simulation workflows.

Who is affected

Large industrial engineering teams across sectors such as automotive, aerospace, energy, semiconductors, and consumer electronics are the primary users of Neural Concept’s platform, which is designed to shorten design cycles and optimize product performance by embedding AI directly into product development workflows.

Why CISOs should care

Neural Concept’s expansion underscores how deeply enterprise AI is being embedded in mission-critical engineering workflows that handle intellectual property and highly sensitive design data. As these platforms scale, they create new attack surfaces and data governance challenges, particularly around secure AI model access, supply chain risk, and third-party integrations with partners such as Nvidia, Siemens, Ansys, Microsoft, and AWS.

3 practical actions for CISOs

  1. Inventory and classify engineering AI assets: Know where AI tools like Neural Concept are being used, what data they access, and who has access.
  2. Assess third-party AI risks: Evaluate security controls and contractual obligations of AI platform vendors and their partners to ensure strong data protection and incident response alignment.
  3. Integrate AI platform oversight into governance: Extend existing cybersecurity policies to cover AI model lifecycle management, secure APIs, and monitoring for anomalous use patterns that could indicate compromise or data exfiltration.
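The inventory-and-classify step in action 1 can be sketched as a lightweight asset register. The tool names, data labels, and user groups below are illustrative assumptions, not an actual deployment or a Neural Concept API:

```python
# Minimal sketch of an engineering-AI asset register (illustrative only).
# All names, classifications, and owners here are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str                     # AI tool or platform in use
    data_classification: str      # e.g. "restricted" for design IP
    data_accessed: list = field(default_factory=list)
    user_groups: list = field(default_factory=list)

def restricted_assets(inventory):
    """Return assets touching restricted data; review these first."""
    return [a for a in inventory if a.data_classification == "restricted"]

# Hypothetical inventory entries.
inventory = [
    AIAsset("Neural Concept", "restricted",
            ["CAD geometry", "simulation results"], ["design-engineering"]),
    AIAsset("internal-doc-chatbot", "internal",
            ["wiki pages"], ["all-staff"]),
]

for asset in restricted_assets(inventory):
    print(asset.name, "->", ", ".join(asset.user_groups))
```

Even a register this simple answers the three questions in action 1 (where AI tools run, what data they touch, who has access) and gives a starting point for the vendor-risk and governance reviews in actions 2 and 3.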