San Francisco: OpenAI, the artificial intelligence research company, has removed a feature that made shared ChatGPT conversations publicly searchable on Google and other search engines.
The decision came amid growing privacy concerns, as thousands of user conversations containing personal and sensitive information began surfacing online.
The now-disabled feature, launched earlier this year, allowed users to share individual ChatGPT chats via unique links and make those conversations discoverable by others through search.
According to OpenAI, the feature was strictly opt-in and disabled by default. However, many users did not fully understand the option and unknowingly made their conversations publicly accessible.
We just removed a feature from @ChatGPTapp that allowed users to make their conversations discoverable by search engines, such as Google. This was a short-lived experiment to help people discover useful conversations. This feature required users to opt-in, first by picking a chat…
— DANΞ (@cryps1s) July 31, 2025
The situation worsened when several users discovered that shared ChatGPT chats were being indexed by Google, Bing, and other search engines. A report by Fast Company revealed that over 4,500 ChatGPT conversations were visible online, some of which included names, locations, emotional confessions, and other personal details that users believed would remain private.
Even after users deleted the shared links or the chats themselves, many conversations remained accessible online because search engines take time to update their indexes.
In response to the backlash, Dane Stuckey, Chief Information Security Officer at OpenAI, announced the removal of the feature. He explained that the functionality was part of an experimental effort to help users discover meaningful ChatGPT conversations through search, but acknowledged that some people ended up accidentally sharing things they did not intend to reveal to the world.

OpenAI emphasized that the original intention was to foster knowledge sharing, not to compromise user privacy. Still, the unintended consequences prompted a swift reevaluation.
OpenAI CEO Sam Altman addressed the incident, acknowledging the deeply personal nature of how many users, particularly younger people, interact with ChatGPT. He noted that people often turn to the chatbot for emotional support and life advice, confiding in it as they would in a therapist or coach.
Altman also highlighted a critical gap: unlike conversations with licensed professionals, interactions with ChatGPT are not protected by confidentiality laws, leaving users exposed in the absence of robust data safeguards. The incident has sparked a wider conversation around digital safety, transparency, and AI ethics.