ESET Warns of Evolving Fraud: Nomani Scam Using AI Deepfakes Surges 62% on Social Platforms

What Happened

The Nomani fraudulent investment scheme grew by 62% in 2025, according to ESET threat intelligence data. The scam originally spread primarily through Facebook, but its malvertising campaigns have since expanded to YouTube and other social media platforms, using AI‑generated deepfake ads and videos to convince users that fraudulent investment returns are real. ESET blocked more than 64,000 unique URLs associated with the scam this year. When victims request payouts, they are typically asked for additional fees or sensitive personal information, resulting in financial loss rather than the promised returns.

Who Is Affected

  • Social media users worldwide, especially in Czechia, Japan, Slovakia, Spain, and Poland, where many of the scam URLs were detected. 
  • Potential investors targeted by AI‑enhanced ads that impersonate legitimate financial opportunities. 
  • Ad‑serving platforms and digital marketers, since scam operators abuse legitimate advertising tools to evade automated detection. 

Why CISOs Should Care

  • AI‑enhanced social engineering represents a significant uplift in fraud sophistication, making traditional detection methods less effective. 
  • The scam illustrates how deepfakes and generative content can be weaponized at scale, increasing the risk of financial loss and reputational damage for organizations with users on social channels. 
  • CISOs must consider ad ecosystem abuse and third‑party platform risks as part of enterprise threat models, especially where brand impersonation or employee targeting could lead to broader compromise. 

3 Practical Actions

  1. Educate employees and users about AI‑driven scam indicators, such as unrealistic “high‑return” promises, deepfake anomalies, and requests for upfront fees or personal data.
  2. Monitor paid digital campaigns associated with your brand, and work with platform partners to flag and takedown fraudulent ads quickly.
  3. Enhance threat intelligence feeds with signals related to social media fraud and deepfake content to improve early detection and prevention capabilities; a minimal matching sketch follows this list.
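
As a starting point for the third action, the Python sketch below checks URLs observed in proxy or gateway logs against an indicator feed of known scam URLs. The feed file name (nomani_urls.txt), its one‑URL‑per‑line format, and the function names are illustrative assumptions only, not ESET's actual feed or any specific vendor API.

# Minimal sketch of folding social-media fraud indicators into URL screening.
# Assumptions (hypothetical, not from the article): a plain-text feed
# "nomani_urls.txt" with one scam URL per line, and an iterable of URLs
# observed in proxy or email-gateway logs.
from urllib.parse import urlparse

def load_indicator_domains(feed_path: str) -> set[str]:
    """Read a one-URL-per-line feed and return the set of lowercase domains."""
    domains = set()
    with open(feed_path, encoding="utf-8") as feed:
        for line in feed:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            host = urlparse(line).netloc.lower()
            if host:
                domains.add(host)
    return domains

def flag_suspicious(observed_urls, indicator_domains):
    """Yield observed URLs whose domain appears in the indicator set."""
    for url in observed_urls:
        host = urlparse(url).netloc.lower()
        if host in indicator_domains:
            yield url

if __name__ == "__main__":
    indicators = load_indicator_domains("nomani_urls.txt")
    # Placeholder log entries for illustration only.
    sample_logs = ["https://example-invest-payout.top/claim", "https://intranet.example.com/hr"]
    for hit in flag_suspicious(sample_logs, indicators):
        print(f"ALERT: outbound request to known scam URL: {hit}")

In practice the same matching logic would sit behind a SIEM enrichment or secure web gateway rule rather than a standalone script, with the indicator set refreshed from a commercial or open threat intelligence feed.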