Acting CISA Director Uploaded Sensitive Contracting Documents to Public ChatGPT


What happened

The acting CISA director uploaded sensitive contracting documents to the public version of ChatGPT in early August 2025, triggering multiple automated data-loss prevention alerts inside federal networks. According to four Department of Homeland Security (DHS) officials, Madhu Gottumukkala, CISA's interim head since May 2025, uploaded files marked "for official use only" (FOUO) after receiving special permission from the agency's Chief Information Officer to access the AI tool. At the time, ChatGPT remained blocked for other DHS personnel.

The uploads were repeatedly flagged by cybersecurity sensors during the first week, prompting senior DHS officials to initiate an internal review to assess potential national security impact. While none of the documents were classified, they contained sensitive contracting information not intended for public release. Gottumukkala later discussed the matter with DHS leadership, CISA's CIO, and legal counsel as part of the review process.

Who is affected

CISA, DHS, and federal agencies handling sensitive but unclassified information are directly affected. The exposure was internal but involved uploading government documents to a public AI platform outside federal data-handling environments.

Why CISOs should care

This incident shows how approved exceptions for AI tool usage can still lead to sensitive data leaving controlled environments, even when classification thresholds are not crossed. Uploading FOUO material into public AI services introduces governance, oversight, and audit challenges that traditional security controls may detect but not fully prevent.

Three practical actions:

  • Clarify AI exception boundaries: Define precise limits on what data types may be used with public AI tools, even under approved exceptions.

  • Review DLP alert response workflows: Ensure automated data-exfiltration alerts tied to AI platforms trigger timely escalation and documented review.

  • Reinforce sensitive data handling training: Reiterate handling requirements for FOUO and similar materials, especially for executives granted expanded tool access.
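The DLP review in the second action boils down to a simple rule: alert when a document bearing sensitivity markings is bound for a known public AI endpoint. A minimal sketch of that rule in Python, where the domain list and marking strings are illustrative assumptions rather than any real product's configuration:

```python
# Hypothetical DLP-style check: flag outbound uploads of marked documents
# to public AI services. Domains and markings below are illustrative only.

PUBLIC_AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}
SENSITIVITY_MARKINGS = ("FOR OFFICIAL USE ONLY", "FOUO", "CUI")

def should_alert(destination_host: str, document_text: str) -> bool:
    """Return True when a marked document is headed to a public AI service."""
    dest_is_public_ai = destination_host.lower() in PUBLIC_AI_DOMAINS
    doc_is_marked = any(m in document_text.upper() for m in SENSITIVITY_MARKINGS)
    return dest_is_public_ai and doc_is_marked
```

In a real deployment this check would sit behind the DLP vendor's policy engine; the point here is that the alert condition pairs destination classification with content markings, so an approved AI exception can still be scoped to exclude FOUO material.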