How extremely personal ChatGPT conversations were ending up on Google

A researcher was able to uncover over 100,000 sensitive ChatGPT conversations that were searchable on Google thanks to a ‘short-lived experiment’ by OpenAI.

Henk Van Ess was among the first to figure out that anyone could search for these chats using certain key words.

He discovered people had been discussing everything from non-disclosure agreements and confidential contracts to relationship problems, insider trading schemes, and how to cheat on papers.

This unforeseen problem arose from the share feature, which, when clicked, created a predictably formatted link built from words in the chat.

This allowed people to find the conversations by typing ‘site:chatgpt.com/share’ into Google and appending key words to the query.
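The query pattern described above can be sketched as a small helper that joins a `site:` restriction with quoted key phrases. This is a minimal illustration of the technique, not the researcher's actual tooling; the function name is a made-up example.

```python
def build_dork(site_path: str, keywords: list[str]) -> str:
    """Compose a search query: a 'site:' restriction on a URL path,
    followed by each key phrase wrapped in quotes."""
    quoted = " ".join(f'"{k}"' for k in keywords)
    return f"site:{site_path} {quoted}"

# The kind of query the article describes:
print(build_dork("chatgpt.com/share", ["my salary"]))
# site:chatgpt.com/share "my salary"
```

Because the shared links contained words from the chat itself, queries like this matched the conversation text directly.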

Van Ess said one chat he discovered detailed planned cyberattacks against named individuals within Hamas, the terrorist group controlling Gaza that Israel has been at war with since October 2023.

Another involved a domestic violence victim talking about possible escape plans while revealing their financial shortcomings. 

The share feature was an attempt by OpenAI to make it easier for people to show others their chats, though most users likely didn’t realize just how visible their musings would be.

OpenAI has acknowledged that the way ChatGPT was previously set up allowed more than 100,000 conversations to be freely searched on Google.

In a statement to 404 Media, OpenAI did not dispute that more than 100,000 chats had been searchable on Google.

‘We just removed a feature from [ChatGPT] that allowed users to make their conversations discoverable by search engines, such as Google. This was a short-lived experiment to help people discover useful conversations,’ said Dane Stuckey, OpenAI chief information security officer.

‘This feature required users to opt-in, first by picking a chat to share, then by clicking a checkbox for it to be shared with search engines,’ Stuckey added.

Now, when a user shares their conversation, ChatGPT creates a randomized link that uses no key words.
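The change can be illustrated with a random, URL-safe token in place of chat-derived words. This is a hypothetical sketch, not OpenAI's actual implementation; the function name, base URL usage, and token length are assumptions for illustration.

```python
import secrets

def make_share_link(base: str = "https://chatgpt.com/share/") -> str:
    # A random token (illustrative 16-byte length) contains no words
    # from the conversation, so keyword searches cannot surface it.
    return base + secrets.token_urlsafe(16)

link = make_share_link()
```

With no conversation text in the URL, the `site:` keyword trick described earlier no longer finds anything.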

‘Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option,’ Stuckey said.

‘We’re also working to remove indexed content from the relevant search engines. This change is rolling out to all users through tomorrow morning. Security and privacy are paramount for us, and we’ll keep working to maximally reflect that in our products and features,’ he added.

Researcher Henk Van Ess plus many others have already archived many of the conversations that were exposed

However, much of the damage is done: many of the conversations had already been archived by Van Ess and others.

For example, a chat that’s still viewable involves a plan to create a new cryptocurrency called Obelisk.

Ironically, Van Ess used another AI model, Claude, to come up with key words to use to dredge up the most juicy chats.

To find people discussing criminal conspiracies, Claude suggested searching ‘without getting caught’, ‘avoid detection’, ‘without permission’ or ‘get away with.’

But the words that exposed the most intimate confessions were ‘my salary’, ‘my SSN’, ‘diagnosed with’, or ‘my therapist.’
