Dam Secure raises $4M seed round for AI code security


What happened

On January 20, 2026, Dam Secure, an AI security startup headquartered in Sydney, Australia, with a presence in San Francisco, closed a $4 million seed funding round led by Paladin Capital Group. The company is developing an AI-native platform that helps organisations proactively manage security risks from AI-generated code entering production, a growing concern as adoption of AI coding tools escalates. The platform enables security requirements to be defined in natural language and automatically enforced across extensive codebases during development, aiming to address logic flaws and vulnerabilities missed by traditional application security scanners. Dam Secure was co-founded by Patrick Collins and Simon Harloff, former executives with experience in secure code and technical architecture. The funding will accelerate product development and go-to-market activities throughout 2026.

Who is affected

Organisations adopting AI coding tools or generating software via large language models face increased exposure to hidden defects in AI-generated code, making Dam Secure’s platform relevant to software development teams and security functions across industries.

Why CISOs should care

As AI-assisted development becomes mainstream, conventional security tools may miss logic-based vulnerabilities in generated code. CISOs should understand how emerging security platforms aim to embed guardrails and enforce policies throughout the development lifecycle, shifting risk management upstream in software delivery.

3 practical actions

  • Evaluate AI-coding risk controls: Review software development toolchains to determine how AI-generated code is assessed and secured during build and test phases.

  • Integrate policy enforcement early: Embed natural language security requirements into CI/CD workflows to automatically enforce secure coding standards.

  • Enhance developer security training: Educate development teams on risks associated with AI-generated code and ensure secure practices are part of AI adoption strategies.
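The second action, enforcing plain-language security requirements in CI/CD, can be sketched as a minimal pipeline gate. The rule names and patterns below are hypothetical illustrations of the general approach, not Dam Secure's actual platform or rule set:

```python
import re

# Hypothetical example: a minimal CI-style gate that maps plain-language
# security rules to pattern checks on source text. A real platform would
# use deeper analysis; this sketch only shows where such a gate sits.
RULES = {
    "No hardcoded credentials": re.compile(
        r"(password|api_key|secret)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE
    ),
    "No dynamic code execution": re.compile(r"\b(eval|exec)\s*\("),
}

def check_code(source: str) -> list[str]:
    """Return the plain-language rules that the given source violates."""
    return [rule for rule, pattern in RULES.items() if pattern.search(source)]

# Simulated AI-generated snippet submitted in a pull request:
snippet = 'api_key = "sk-123"\nresult = eval(user_input)\n'
for violation in check_code(snippet):
    print(f"BLOCKED: {violation}")
```

In a real pipeline, a step like this would run against the diff of each pull request and fail the build on any violation, shifting enforcement left of merge rather than relying on post-deployment scanning.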