21,000+ OpenClaw AI Instances With Personal Configurations Exposed Online

What happened

More than 21,000 OpenClaw AI instances were discovered publicly exposed online, leaving personal configurations and sensitive data reachable without authentication controls. According to the report, the open-source personal AI assistant, rebranded from Clawdbot to Moltbot before adopting its current name, has grown rapidly, with deployments expanding from roughly 1,000 installations to more than 21,000 in under a week. Censys analysis found these instances inadvertently accessible on the public internet, meaning sensitive configuration files, including API credentials and per-user personal settings, may be retrievable by anyone without login credentials. The exposed instances reflect misconfigurations or defaults that bind the assistant to network interfaces reachable from outside the local environment. OpenClaw's rapid proliferation has outpaced secure configuration practices, posing the risks that come with unprotected AI automation systems.
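
The bind-address issue is straightforward to illustrate. The sketch below is a generic example, not OpenClaw's actual code, configuration keys, or default port: a service bound to 127.0.0.1 is reachable only from the same machine, while the same service bound to 0.0.0.0 listens on every interface and becomes internet-reachable on a host with a public IP and no firewall in front of it.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative only -- not OpenClaw's real configuration or default port.
# Binding to 127.0.0.1 keeps the service loopback-only; binding to 0.0.0.0
# exposes it on every network interface of the host.
LOOPBACK_ONLY = ("127.0.0.1", 8080)   # reachable only from the local machine
ALL_INTERFACES = ("0.0.0.0", 8080)    # reachable from any network the host is on

class ConfigHandler(BaseHTTPRequestHandler):
    """Stands in for an assistant endpoint that serves configuration data."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'{"api_key": "example-secret"}\n')  # no auth check

if __name__ == "__main__":
    # Swap LOOPBACK_ONLY for ALL_INTERFACES to see how a single bind-address
    # choice turns a private local service into a publicly reachable one.
    HTTPServer(LOOPBACK_ONLY, ConfigHandler).serve_forever()
```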

Who is affected

Owners and operators of exposed OpenClaw AI assistant instances are directly affected: because of the misconfiguration, their personal configurations and potentially sensitive data are accessible to anyone on the internet.

Why CISOs should care

The incident shows how misconfigured AI-powered personal assistant infrastructure can leak sensitive credentials and user configuration data to unauthorized actors, increasing identity and supply chain risk in AI-enabled environments.

3 practical actions

  • Identify and secure exposed instances. Conduct asset inventory scans to find OpenClaw instances reachable from the internet and restrict access to trusted networks (a minimal reachability check is sketched after this list). 
  • Harden authentication and configuration defaults. Apply authentication controls and network restrictions to OpenClaw instances to prevent unauthenticated access. 
  • Rotate exposed credentials. Invalidate and renew any API keys or tokens found within publicly accessible configurations.
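
For the first action, a minimal reachability check could look like the sketch below. The port, path, and host list are hypothetical placeholders, since the report does not state which port or endpoint OpenClaw exposes; the pattern is what matters: request a known endpoint without credentials and flag any host that answers successfully.

```python
import urllib.request
import urllib.error

# Hypothetical values for illustration -- replace with the actual port and
# endpoint used by your OpenClaw deployments, and with hosts from your
# own asset inventory.
PORT = 8080
PATH = "/api/config"
CANDIDATE_HOSTS = ["10.0.0.12", "192.0.2.45"]

def is_unauthenticated(host: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers 200 without any credentials."""
    url = f"http://{host}:{PORT}{PATH}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Closed, filtered, or auth-protected (non-200 raises HTTPError).
        return False

if __name__ == "__main__":
    for host in CANDIDATE_HOSTS:
        if is_unauthenticated(host):
            print(f"[!] {host} serves {PATH} without authentication -- restrict access")
        else:
            print(f"[ok] {host} not openly reachable on port {PORT}")
```

Any host flagged by a check like this should feed directly into the second and third actions: place it behind authentication and network restrictions, and rotate whatever credentials its configuration may have already exposed.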