Thursday, August 7, 2025

A Single Poisoned Doc May Leak ‘Secret’ Knowledge By way of ChatGPT

The newest generative AI fashions usually are not simply stand-alone text-generating chatbots—as an alternative, they will simply be hooked as much as your knowledge to provide customized solutions to your questions. OpenAI’s ChatGPT could be linked to your Gmail inbox, allowed to examine your GitHub code, or discover appointments in your Microsoft calendar. However these connections have the potential to be abused—and researchers have proven it might probably take only a single “poisoned” doc to take action.

New findings from safety researchers Michael Bargury and Tamir Ishay Sharbat, revealed on the Black Hat hacker convention in Las Vegas at this time, present how a weak spot in OpenAI’s Connectors allowed delicate info to be extracted from a Google Drive account utilizing an oblique immediate injection assault. In an illustration of the assault, dubbed AgentFlayerBargury reveals the way it was doable to extract developer secrets and techniques, within the type of API keys, that had been saved in an illustration Drive account.

The vulnerability highlights how connecting AI fashions to exterior programs and sharing extra knowledge throughout them will increase the potential assault floor for malicious hackers and probably multiplies the methods the place vulnerabilities could also be launched.

“There’s nothing the person must do to be compromised, and there’s nothing the person must do for the information to exit,” Bargury, the CTO at safety agency Zenity, tells WIRED. “We’ve proven that is utterly zero-click; we simply want your electronic mail, we share the doc with you, and that’s it. So sure, that is very, very unhealthy,” Bargury says.

OpenAI didn’t instantly reply to WIRED’s request for remark in regards to the vulnerability in Connectors. The corporate launched Connectors for ChatGPT as a beta function earlier this yr, and its web site lists at the least 17 totally different companies that may be linked up with its accounts. It says the system permits you to “carry your instruments and knowledge into ChatGPT” and “search recordsdata, pull reside knowledge, and reference content material proper within the chat.”

Bargury says he reported the findings to OpenAI earlier this yr and that the corporate shortly launched mitigations to stop the method he used to extract knowledge by way of Connectors. The way in which the assault works means solely a restricted quantity of information may very well be extracted directly—full paperwork couldn’t be eliminated as a part of the assault.

“Whereas this concern isn’t particular to Google, it illustrates why growing sturdy protections in opposition to immediate injection assaults is essential,” says Andy Wen, senior director of safety product administration at Google Workspace, pointing to the corporate’s not too long ago enhanced AI safety measures.

Related Articles

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Latest Articles