Microsoft and Salesforce Patch AI Agent Flaws That Could Leak Sensitive Data

Related

Cybersecurity Leaders to Watch: Louisiana Healthcare

Louisiana’s healthcare sector depends on cybersecurity leaders who can...

Anthropic Unveils Claude Mythos to Find Critical Software Flaws Before Attackers Do

What happened Anthropic unveiled Claude Mythos Preview as the model...

Microsoft Commits $10 Billion to Expand AI and Cybersecurity Infrastructure in Japan

What happened Microsoft announced a $10 billion investment to expand...

Share

What happened

Microsoft and Salesforce have patched recently disclosed AI agent vulnerabilities that could have allowed external attackers to leak sensitive data through prompt injection. One issue affected Salesforce Agentforce and involved a public lead capture form that accepted arbitrary text from unauthenticated users. Researchers showed that malicious instructions placed in the form could be treated by the agent as trusted prompts and used to return CRM lead data through email. A separate flaw in Microsoft Copilot, tracked as CVE-2026-21520 and rated 7.5, involved a SharePoint form input that could be abused to trigger connected Copilot actions and send customer data to an attacker-controlled email address. Both issues have now been addressed.

Who is affected

The direct exposure affects organizations using Salesforce Agentforce and Microsoft Copilot in workflows where AI agents process untrusted form input and can access sensitive internal data or communicate externally. The risk is greatest in environments where agents are connected to CRM records, SharePoint content, or email actions that can move data outside the organization.

Why CISOs should care

This matters because the flaws did not require traditional software exploitation or privileged access. They relied on prompt injection through customer-facing or externally influenced inputs. The incidents also reinforce a broader issue for AI deployments: when an agent can read untrusted content, access sensitive data, and send information outward, prompt injection can turn ordinary business workflows into data exfiltration paths.

3 practical actions

  1. Treat external form input as untrusted data: Ensure AI agents do not process customer-submitted or public-facing form content as trusted instructions.
  2. Restrict outward data actions: Limit or review agent abilities to send emails or transfer data externally when they are acting on untrusted inputs.
  3. Add oversight to sensitive agent workflows: Require manual review or stronger controls for AI actions involving CRM records, SharePoint data, or other sensitive business information before information is sent outside the organization.

For more news about security flaws affecting enterprise systems and data protection, click Vulnerability to read more.