Sam Altman calls for ‘AI privilege’ as OpenAI clarifies court order to retain temporary and deleted ChatGPT sessions

Regular ChatGPT users (the author of this article among them) may have noticed that the hit chatbot from OpenAI offers a “temporary chat” mode designed to wipe all the information exchanged between the user and the underlying AI model as soon as the session is closed. OpenAI also lets users manually delete prior ChatGPT sessions by clicking or control-clicking them in the sidebar on the web and desktop apps, or by long-pressing them in the mobile apps.
However, this week, OpenAI faced criticism from some of those users after they discovered that the company has not actually been deleting these chat logs as indicated.
As AI influencer and software engineer Simon Willison wrote on his personal blog: “Paying customers of [OpenAI’s] APIs may well make the decision to switch to other providers who can offer retention policies that aren’t subverted by this court order!”
“You’re telling me my deleted chatgpt chats are actually not deleted and is being saved to be investigated by a judge?” posted X user @ns123abc, a comment that drew over a million views.
Another user, @kepano, added, “you can ‘delete’ a ChatGPT chat, however all chats must be retained due to legal obligations ?”.
In fact, OpenAI confirmed it has been preserving deleted and temporary user chat logs since mid-May 2025 in response to a federal court order, though it did not disclose this to users until yesterday, June 5.
The order, issued on May 13, 2025, by U.S. Magistrate Judge Ona T. Wang, requires OpenAI to “preserve and segregate all output log data that would otherwise be deleted on a going forward basis,” including chats deleted by user request or due to privacy obligations.
The court’s directive stems from The New York Times (NYT) v. OpenAI and Microsoft, the ongoing copyright case filed in December 2023, in which the NYT’s lawyers allege that OpenAI’s language models regurgitate copyrighted news content verbatim. The plaintiffs argue that logs, including those users may have deleted, could contain infringing outputs relevant to the lawsuit.
While OpenAI complied with the order immediately, it did not publicly notify affected users for more than three weeks, finally doing so on June 5 with a blog post and an FAQ describing the legal mandate and outlining who is impacted.
However, OpenAI is placing the blame squarely on the NYT and the judge’s order, saying it believes the preservation demand to be “baseless.”
OpenAI clarifies what’s going on with the court order to preserve ChatGPT user logs — including which chats are impacted
In a blog post published yesterday, OpenAI Chief Operating Officer Brad Lightcap defended the company’s position and stated that it was advocating for user privacy and security against an over-broad judicial order, writing:
“The New York Times and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us: retain consumer ChatGPT and API customer data indefinitely. This fundamentally conflicts with the privacy commitments we have made to our users.”
The post clarified that ChatGPT Free, Plus, Pro, and Team users, along with API customers without a Zero Data Retention (ZDR) agreement, are affected by the preservation order, meaning even if users on these plans delete their chats or use temporary chat mode, their chats will be stored for the foreseeable future.
However, ChatGPT Enterprise and Edu subscribers, as well as API clients using ZDR endpoints, are not impacted by the order, and their chats will be deleted as directed.
The retained data is held under legal hold, meaning it is stored in a secure, segregated system and only accessible to a small number of legal and security personnel.
“This data is not automatically shared with The New York Times or anyone else,” Lightcap emphasized in OpenAI’s blog post.
Sam Altman floats new concept of ‘AI privilege’ allowing for confidential conversations between models and users, similar to speaking to a human doctor or lawyer
OpenAI CEO and co-founder Sam Altman also addressed the issue publicly in a post from his account on the social network X last night, writing:
“recently the NYT asked a court to force us to not delete any user chats. we think this was an inappropriate request that sets a bad precedent. we are appealing the decision. we will fight any demand that compromises our users’ privacy; this is a core principle.”
He also suggested a broader legal and ethical framework may be needed for AI privacy:
“we have been thinking recently about the need for something like ‘AI privilege’; this really accelerates the need to have the conversation.”
“imo talking to an AI should be like talking to a lawyer or a doctor.”
“i hope society will figure this out soon.”
The notion of AI privilege—as a potential legal standard—echoes attorney-client and doctor-patient confidentiality.
Whether such a framework would gain traction in courtrooms or policy circles remains to be seen, but Altman’s remarks indicate OpenAI may increasingly advocate for such a shift.
What comes next for OpenAI and your temporary/deleted chats?
OpenAI has filed a formal objection to the court’s order, requesting that it be vacated.
In court filings, the company argues that the demand lacks a factual basis and that preserving billions of additional data points is neither necessary nor proportionate.
Judge Wang, in a May 27 hearing, indicated the order is temporary. She instructed the parties to develop a sampling plan to test whether deleted user data materially differs from retained logs. OpenAI was ordered to submit that proposal by today, June 6, though the filing had not appeared publicly as of this writing.
What it means for enterprises and decision-makers in charge of ChatGPT usage in corporate environments
While the order exempts ChatGPT Enterprise and API customers using ZDR endpoints, the broader legal and reputational implications matter deeply for professionals responsible for deploying and scaling AI solutions inside organizations.
Those who oversee the full lifecycle of large language models—from data ingestion to fine-tuning and integration—will need to reassess assumptions about data governance. If user-facing components of an LLM are subject to legal preservation orders, it raises urgent questions about where data goes after it leaves a secure endpoint, and how to isolate, log, or anonymize high-risk interactions.
Any platform touching OpenAI APIs must validate which endpoints (e.g., ZDR vs non-ZDR) are used and ensure data handling policies are reflected in user agreements, audit logs, and internal documentation.
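To make that audit concrete, here is a minimal sketch of what such a layer could look like, assuming the official openai Python SDK (v1.x). The deployment names, retention labels, and the log_api_call helper are hypothetical, since ZDR coverage is a contractual status that an organization must track itself rather than something queryable through the API.

```python
"""Hypothetical audit layer for OpenAI API usage.

ZDR coverage is contractual and not exposed by the API, so the
mapping below is an org-maintained assumption for illustration.
"""

import datetime
import json

from openai import OpenAI  # official OpenAI Python SDK (v1.x)

# Hypothetical internal registry: which of our deployments are covered
# by a Zero Data Retention agreement. Values are illustrative only.
DEPLOYMENT_RETENTION = {
    "support-bot-prod": "ZDR (exempt from the preservation order)",
    "internal-playground": "non-ZDR (retained under the court order)",
}

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def log_api_call(deployment: str, model: str) -> None:
    """Append an audit record so compliance can reconcile usage later."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "deployment": deployment,
        "model": model,
        "retention_policy": DEPLOYMENT_RETENTION.get(deployment, "unknown"),
    }
    with open("openai_usage_audit.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def audited_chat(deployment: str, messages: list[dict],
                 model: str = "gpt-4o") -> str:
    """Standard chat completion call, wrapped with an audit-log entry."""
    log_api_call(deployment, model)
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content


if __name__ == "__main__":
    print(audited_chat("internal-playground",
                       [{"role": "user", "content": "Hello"}]))
```

The point of the wrapper is not enforcement but reconciliation: the audit file gives compliance teams a record of which deployments touched which retention category, which can then be checked against user agreements and internal documentation.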
Even if ZDR endpoints are used, data lifecycle policies may require review to confirm that downstream systems (e.g., analytics, logging, backup) do not inadvertently retain transient interactions that were presumed short-lived.
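One way to approach such a review, sketched below under assumed file layouts and field names (an app_chat_log.jsonl file with a ts timestamp and a messages payload, all hypothetical), is a periodic job that redacts conversational content from downstream logs once an internal retention window lapses, leaving only metadata behind for analytics.

```python
"""Hypothetical downstream-log scrubber: redact chat content older than
a retention window while keeping metadata. File layout and field names
are illustrative assumptions, not an OpenAI feature."""

import datetime
import json
from pathlib import Path

RETENTION = datetime.timedelta(days=30)  # assumed internal policy
LOG_FILE = Path("app_chat_log.jsonl")    # hypothetical downstream log


def scrub_expired(path: Path, retention: datetime.timedelta) -> None:
    """Rewrite the log, dropping conversational payloads past their TTL."""
    now = datetime.datetime.now(datetime.timezone.utc)
    kept_lines = []
    for line in path.read_text(encoding="utf-8").splitlines():
        record = json.loads(line)
        age = now - datetime.datetime.fromisoformat(record["ts"])
        if age > retention:
            # Drop the conversational payload; keep only metadata.
            record.pop("messages", None)
            record["redacted"] = True
        kept_lines.append(json.dumps(record))
    path.write_text("\n".join(kept_lines) + "\n", encoding="utf-8")


if __name__ == "__main__":
    scrub_expired(LOG_FILE, RETENTION)
```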
Security officers responsible for managing risk must now expand threat modeling to include legal discovery as a potential vector. Teams must verify whether OpenAI’s backend retention practices align with internal controls and third-party risk assessments, and whether users are relying on features like “temporary chat” that no longer function as expected under legal preservation.
A new flashpoint for user privacy and security
This moment is not just a legal skirmish; it is a flashpoint in the evolving conversation around AI privacy and data rights. By framing the issue as a matter of “AI privilege,” OpenAI is effectively proposing a new social contract for how intelligent systems handle confidential inputs.
Whether courts or lawmakers accept that framing remains uncertain. But for now, OpenAI is caught in a balancing act—between legal compliance, enterprise assurances, and user trust—and facing louder questions about who controls your data when you talk to a machine.