What happened
Security researchers at Truffle Security discovered that nearly 3,000 publicly exposed Google Cloud API keys, previously considered low-risk identifiers, can now be abused to authenticate to sensitive Gemini (Generative Language) API endpoints when the AI service is enabled in the same Google Cloud project.
Who is affected
Organizations with Google Cloud projects that have the Gemini (Generative Language) API enabled and that have API keys embedded in client-side code, public repositories, or publicly accessible sites are at risk, including financial firms, tech companies, recruitment platforms, and even Google’s own public websites.
Why CISOs should care
This issue elevates what were previously benign billing identifiers into credentials capable of accessing sensitive AI endpoints, exposing private files and cached data, and allowing unauthorized API calls that can result in significant financial charges and data exposure. The exposure arises without warning or notification when Gemini is enabled, meaning teams may be unaware that their public keys have gained broader access privileges.
3 Practical Actions
- Audit API keys and project settings: Inventory all Google Cloud API keys and determine if any are publicly visible or embedded in code; check which projects have the Generative Language API enabled.
- Restrict and rotate keys: Apply API restrictions so each key can call only the services it needs (least privilege), and immediately rotate or revoke any exposed keys, especially those with unintended AI access.
- Implement scanning and monitoring: Use tools to detect exposed secrets in code and repositories (e.g., TruffleHog), enable billing alerts, and monitor for unexpected AI API usage.
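As a starting point for the scanning step, the sketch below shows the kind of pattern matching that secret scanners such as TruffleHog perform for this credential type: Google Cloud API keys follow a well-known shape (the literal prefix `AIza` followed by 35 characters from `[0-9A-Za-z_-]`). The key in the example is fabricated, and a real deployment should rely on a maintained scanner rather than a single regex.

```python
import re

# Google Cloud API keys: literal "AIza" prefix plus 35 characters
# from [0-9A-Za-z_-], 39 characters total. This is the widely used
# detection pattern for this credential type.
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_api_keys(text: str) -> list[str]:
    """Return all candidate Google Cloud API keys found in `text`."""
    return GOOGLE_API_KEY_RE.findall(text)

if __name__ == "__main__":
    # Fabricated example of a key embedded in client-side code.
    leaked = 'const KEY = "AIzaSyA1234567890abcdefghijklmnopqrstuv";'
    print(find_google_api_keys(leaked))
    # Matches found this way should be rotated and restricted, then
    # verified against the projects inventoried in the audit step.
```

For the project-side audit, `gcloud services api-keys list` enumerates a project's API keys and `gcloud services list --enabled` shows whether the Generative Language API is turned on.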
