What happened
Resemble AI announced that it has raised $13 million in a strategic investment round, bringing its total funding to $25 million.
The startup, founded in 2019 and based in California, delivers an AI‑powered detection platform that identifies deepfakes and other AI‑generated threats across audio, video, images, and text, operating in real time and in dozens of languages.
Its core tool, DETECT‑3B Omni, claims to spot AI‑generated content (voice cloning, synthetic video, manipulated images and text) to help prevent fraud, vishing, social engineering, and voice‑identity attacks.
Complementing this, the company offers Resemble Intelligence, a multimodal analysis platform that adds explainability, helping organizations understand why certain content is flagged as suspicious.
The new funding will be used to accelerate development and support global expansion.
Who is affected
- Enterprises, including large corporations, global telecommunication firms, and organizations that rely on multimedia communications. Resemble already serves Fortune 500 customers and government agencies.
- CISOs and security teams responsible for protecting identity, brand, and user trust, especially where voice, video, or AI‑generated content is part of operations.
- Industries vulnerable to deepfake‑based social engineering, voice fraud, or synthetic media tampering.
Why CISOs should care
- Generative AI has made deepfakes and synthetically generated malicious content far more accessible and effective, increasing risk across security, identity, fraud, and brand‑protection domains. The threat is no longer hypothetical.
- Traditional security controls (designed for known threats and human-generated content) may not catch AI‑driven manipulations. A multimodal detection platform like DETECT‑3B Omni can offer real‑time defense covering audio, video, images, and text, closing a crucial gap.
- Early adoption of dedicated AI‑threat detection reduces risk exposure. As AI‑driven fraud grows, organizations that embed detection now may avoid costly brand, financial, and reputational damage.
Three practical actions for CISOs
- Evaluate deepfake detection tools: Short‑list and test platforms like Resemble AI’s DETECT‑3B Omni and Resemble Intelligence to gauge how well they detect AI‑generated media relevant to your environment (calls, video, images, messaging).
- Integrate multimodal threat detection into identity and fraud‑prevention workflows: Prioritize high‑risk operations such as customer support, financial transactions, executive approvals over voice or video, and any communication channel that can be spoofed via AI.
- Update incident‑response and risk models to account for AI‑generated threats: Extend threat models and playbooks beyond traditional phishing and malware to cover synthetic media, impersonation, and deepfake‑based social engineering; ensure logging, alerting, and response mechanisms are ready.
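To make the second action concrete, here is a minimal sketch of what gating a high‑risk approval on a detection verdict could look like. The `StubDetector` class, `Verdict` type, and `approve_transaction` function are all hypothetical, invented for illustration; real products such as DETECT‑3B Omni expose their own APIs, and a production integration would call those instead of this stub.

```python
# Hypothetical sketch: block a voice-based approval when a detection
# service judges the audio to be synthetic. All names here are invented
# for illustration, not taken from any vendor's actual API.
from dataclasses import dataclass


@dataclass
class Verdict:
    is_synthetic: bool
    confidence: float  # 0.0 to 1.0


class StubDetector:
    """Stand-in for a real multimodal detection service."""

    def analyze_audio(self, audio: bytes) -> Verdict:
        # A real detector would analyze the waveform; this stub simply
        # flags samples tagged "cloned" so the gating logic can be shown.
        return Verdict(is_synthetic=b"cloned" in audio, confidence=0.97)


def approve_transaction(audio: bytes, detector: StubDetector,
                        threshold: float = 0.9) -> bool:
    """Deny the approval when the voice sample is likely synthetic."""
    verdict = detector.analyze_audio(audio)
    if verdict.is_synthetic and verdict.confidence >= threshold:
        return False  # route to manual review / incident response instead
    return True
```

The design point is that the detector's verdict acts as one gate among several in an existing approval workflow, with a confidence threshold tuned per channel, rather than as a standalone yes/no oracle.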
