ChatGPT ShadowLeak: The Deep Research flaw in ChatGPT that compromised Gmail data

Last update: 19/09/2025

  • Radware detected a vulnerability in ChatGPT Deep Research that could exfiltrate Gmail account data.
  • The attack used indirect prompt injection with hidden HTML instructions and operated from OpenAI's infrastructure.
  • OpenAI has already mitigated the flaw; there is no public evidence of actual exploitation.
  • It's recommended to review and revoke permissions on Google and limit AI agents' access to emails and documents.

Deep Research flaw in ChatGPT

Recent research has uncovered a security hole in ChatGPT's Deep Research agent that, under certain conditions, could facilitate the output of information from emails hosted in GmailThe discovery highlights the risks of connecting AI assistants to inboxes and other services containing sensitive data.

The cybersecurity firm Radware reported the issue to OpenAI, and the vendor mitigated it in late summer before it became public knowledge.. Although the exploitation scenario was limited and there is no evidence of abuse in the real world, the technique used leaves an important lesson for users and businesses.

What happened to ChatGPT and Gmail data?

ChatGPT Gmail data

Deep Research is a ChatGPT agent oriented to multi-step investigations which can, if the user authorizes it, consult private sources like Gmail to generate reports. The error opened the door for an attacker to prepare a specific message, and the system, when analyzing the inbox, could follow unwanted commands.

The real risk depended on the person requesting ChatGPT to conduct a specific investigation into their email and that the issue matched the content of the malicious email. Still, the vector demonstrates how an AI agent can become the very piece that facilitates data leakage.

Exclusive content - Click Here  Google Maps will scan your screenshots to help you plan trips

Among the potentially affected information could appear names, addresses or other personal data present in the messages processed by the agent. This was not an open access to the account, but rather an exfiltration conditioned by the task assigned to the assistant.

A particularly delicate aspect is that the activity started from the OpenAI cloud infrastructure, which made it difficult for traditional defenses to detect anomalous behavior as it did not originate from the user's device.

ShadowLeak: The Prompt Injection That Made It Possible

ChatGPT Gmail data

Radware dubbed the technique ShadowLeak and frames it in a indirect prompt injection: hidden instructions within the content that the agent analyzes, capable of influencing its behavior without the user noticing.

The attacker sent an email with camouflaged HTML instructions through tricks like tiny fonts or white text on a white background. At first glance The email seemed harmless, but included instructions to search the inbox for specific data..

When the user asked Deep Research to work on his email, the agent read those invisible instructions and proceeded to extract and send data to a website controlled by the attackerIn tests, researchers even went so far as to encode the information in Base64 to appear as a supposed security measure.

Exclusive content - Click Here  Differences between Deep Web and Dark Web: Everything you need to know

Barriers that required explicit consent to open links could also be circumvented by invoking the agent's own navigation tools, which facilitated the exfiltration to external domains under the attacker's control.

In controlled environments, Radware teams noted a very high degree of effectiveness, demonstrating that the combination of mail access and agent autonomy can be persuasive for the model if embedded instructions are not properly filtered.

Why it went unnoticed by the defenses

ChatGPT Gmail data

The communications were originating from trusted servers, so the corporate systems saw legitimate traffic originating from a reputable service. This detail turned the leak into a blind spot for many solutions monitoring.

Furthermore, the victim did not need to click or execute anything specific: he simply asked the agent for a search related to the subject of the email prepared by the attacker, something that makes the maneuver silent and difficult to track.

Researchers emphasize that We are facing a new type of threat in which the AI ​​agent itself acts as a vector. Even with a limited practical impact, the case forces us to review how we grant permissions to automated tools.

Error correction and practical recommendations

Radware

OpenAI implemented mitigations following Radware's notification and expressed its gratitude for the adversarial evidence, stressing that it continually strengthens its safeguards. To date, the provider claims that there is no evidence of exploitation of this vector.

Exclusive content - Click Here  How to Use Copilot Vision on Edge: Features and Tips

Deep Research is an optional agent that can only connect to Gmail with the user's express permission. Before linking inboxes or documents to an assistant, It is advisable to assess the real scope of the permits and limit access to what is strictly necessary..

If you have linked Google services, review and debug access It's simple:

  • Go to myaccount.google.com/security to open the security panel.
  • In the connections section, click on View all connections.
  • Identify ChatGPT or other apps you don't recognize and revoke permissions..
  • Remove unnecessary access and re-grant only the strictly necessary ones. essential.

For users and businesses, It is key to combine common sense and technical measures: keep everything up to date, apply the principle of least privilege to agents and connectors, and monitor the activity of tools with access to sensitive data.

In corporate environments, experts recommend incorporating additional controls for AI agents and, if Deep Research or similar services are used, restrict capabilities such as opening links or sending data to unverified domains.

Radware's research and OpenAI's swift mitigation leave a clear lesson: connecting assistants to Gmail offers advantages, but security demands evaluate permissions, monitor behaviors and assume that instruction injection will continue to test AI agents.

Related article:
How to View Junk Emails in Gmail