
In a candid podcast interview, OpenAI CEO Sam Altman warned users that conversations with ChatGPT are not legally confidential. During his appearance on comedian Theo Von’s podcast “This Past Weekend,” Altman explained that even highly personal interactions with the AI chatbot are not protected by the same legal privileges that cover discussions with doctors, lawyers, or therapists.

“People talk about the most personal details in their lives to ChatGPT,” Altman said. “Young people especially use it as a therapist, a life coach for relationship problems. And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”
Altman stressed that, even as ChatGPT is increasingly used for emotional support and personal advice, conversations with the AI are not protected by any legal confidentiality framework. “If you go talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that, and I think that’s very screwed up,” he said.
His comments come as generative AI tools like ChatGPT, Google Gemini, and Perplexity AI gain widespread use. Privacy experts and cybersecurity analysts have echoed Altman’s concerns, warning users to avoid sharing confidential or legally sensitive information with AI platforms.
Altman also floated the idea of “AI privilege,” a legal protection similar to the confidentiality afforded to conversations with therapists, lawyers, and doctors. “I think we should have the same concept of privacy for your conversations with AI that we do with a therapist,” he added, noting that such considerations were barely on the radar just a year ago.
These concerns are no longer hypothetical. OpenAI is currently involved in a legal dispute with The New York Times, which has led to a court order requiring the company to preserve all ChatGPT output data, including content users believe to have been deleted. U.S. Magistrate Judge Ona T. Wang issued the order on May 13, 2025, and it was upheld by District Judge Sidney Stein on June 26. As a result, ChatGPT conversations are now being retained indefinitely and may be subject to legal disclosure.
The ruling affects users on ChatGPT Free, Plus, Pro, and Team plans, while Enterprise and educational users are exempt. Altman acknowledged the privacy implications, noting that ChatGPT conversations are not end-to-end encrypted the way messages on secure messaging platforms are. Under normal circumstances, deleted chats are erased from OpenAI’s servers within 30 days, but that process is currently suspended under the court’s directive.
Privacy advocates have raised alarms, pointing to OpenAI’s own privacy policy, which allows user data to be shared with third parties, including government agencies, to fulfill legal obligations or prevent harm.
Until stronger legal safeguards are established, users are advised to treat AI conversations with the same caution as any unsecured digital communication. For legal, medical, or mental health concerns, experts continue to urge people to consult licensed professionals who are bound by confidentiality laws.
OpenAI has not issued a formal statement beyond Altman’s remarks, but the debate over AI privacy is expected to intensify as regulators and tech leaders explore new frameworks to protect user data in the age of artificial intelligence.