
Fighting the New York Times’ invasion of user privacy

November 12, 2025

Security


Trust, security, and privacy guide every product and decision we make.

Each week, 800 million people use ChatGPT to think, learn, create, and handle some of the most personal parts of their lives. People entrust us with sensitive conversations, files, credentials, memories, searches, payment information, and AI agents that act on their behalf. We treat this data as among the most sensitive information in your digital life, and we're building our privacy and security protections to match that responsibility.

Today, that responsibility is being tested.

The New York Times is demanding that we turn over 20 million of your private ChatGPT conversations. They claim they might find examples of you using ChatGPT to try to get around their paywall.

This demand disregards long-standing privacy protections, breaks with common-sense security practices, and would force us to turn over tens of millions of highly personal conversations from people who have no connection to the Times' baseless lawsuit against OpenAI.

They have tried this before. Originally, the Times wanted you to lose the ability to delete your private chats. We fought that and restored your right to remove them. Then they demanded we turn over 1.4 billion of your private ChatGPT conversations. We pushed back, and we're pushing back again now. Your private conversations are yours, and they should not become collateral in a dispute over online content access.

We respect strong, independent journalism and partner with many publishers and newsrooms. Journalism has historically played a critical role in defending people's right to privacy throughout the world. However, this demand from the New York Times does not live up to that legacy, and we're asking the court to reject it. We will continue to explore every option available to protect our users' privacy.

We are accelerating our security and privacy roadmap to protect your data. OpenAI is one of the most targeted organizations in the world. We have invested significant time and resources building systems to prevent unauthorized access to your data by adversaries ranging from organized criminal groups to state-sponsored intelligence services.

However, if the Times succeeds in its demand, we will be forced to hand over the very same data we're protecting (your data) to third parties, including the Times' lawyers and paid consultants.

Our long-term roadmap includes advanced security features designed to keep your data private, including client-side encryption for your messages with ChatGPT. We believe these features will help keep your private conversations private and inaccessible to anyone else, even OpenAI. We will build fully automated systems to detect safety issues in our products. Only serious misuse and critical risks, such as threats to someone's life, plans to harm others, or cybersecurity threats, may ever be escalated to a small, highly vetted team of human reviewers. These security features are in active development, and we will share more details about them, along with other short-term mitigations, in the near future.
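To make "client-side encryption" concrete, the sketch below shows the core idea in Python using the Fernet scheme from the cryptography package. This is a conceptual illustration under our own assumptions, not OpenAI's actual design; the key handling, function names, and the choice of a symmetric scheme are placeholders. What it demonstrates is simple: when the encryption key never leaves the user's device, a service can store and relay only ciphertext it cannot read.

    # Conceptual sketch only: the key stays on the user's device, so the
    # service only ever handles opaque ciphertext.
    from cryptography.fernet import Fernet

    # Hypothetical per-user key, generated and kept on the device.
    user_key = Fernet.generate_key()
    cipher = Fernet(user_key)

    def encrypt_on_device(message: str) -> bytes:
        """Encrypt a chat message locally before it is sent anywhere."""
        return cipher.encrypt(message.encode("utf-8"))

    def decrypt_on_device(ciphertext: bytes) -> str:
        """Decrypt locally; only the holder of user_key can do this."""
        return cipher.decrypt(ciphertext).decode("utf-8")

    stored_by_server = encrypt_on_device("a private conversation")
    print(stored_by_server)                      # unreadable without user_key
    print(decrypt_on_device(stored_by_server))   # readable only on the device

A real end-to-end design also has to solve key recovery, multi-device sync, and the safety-review paths described above; the sketch only illustrates why data protected this way would be inaccessible to anyone who does not hold the device-side key.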

Privacy and security protections must become more powerful as AI becomes more deeply integrated into people's lives. We are committed to a future where you can trust that your most personal AI conversations are safe, secure, and truly private.

-Dane Stuckey, Chief Information Security Officer, OpenAI

Answers to your questions

Why are The New York Times and other plaintiffs demanding this?

  • The New York Times is suing OpenAI. As part of their baseless lawsuit, they've asked the court to force us to hand over 20 million user conversations. This would allow them to access millions of user conversations that are unrelated to the case.
  • We strongly believe this is an overreach. It risks your privacy without actually helping resolve the lawsuit. That's why we're fighting it.

What led to this stage of the process?

  • The Times' lawyers argued to the court that their request should be granted, in part because another AI company previously agreed to hand over 5 million private chats of their users in an unrelated court case.
  • We strongly disagree that this is relevant to our case and we're continuing to appeal.

Did you offer any other solutions to the Times?

  • We presented several privacy-preserving options to The Times, including targeted searches over the sample (e.g., to search for chats that might include text from a New York Times article so they only receive the conversations relevant to their claims), as well as high-level data classifying how ChatGPT was used in the sample.
  • These were rejected by The Times.

Is the NYT obligated to keep this data private?

  • Yes. At this time, the Times would be legally obligated not to make any data public outside the court process. That said, if the Times continues to push to access it in any way that would make the conversations public, we will fight to protect your privacy at every step.
  • The Times' original request in this lawsuit was also much broader. It initially demanded 1.4 billion private ChatGPT conversations, which we successfully pushed back on through the legal process. That raised red flags for us, suggesting this was not a thoughtful or genuinely necessary request.

How are these 20 million chats selected?

  • The 20 million conversations were randomly sampled from consumer ChatGPT conversations between Dec. 2022 and Nov. 2024.

Is my data potentially impacted?

  • This data includes a random sampling of consumer ChatGPT conversations from Dec. 2022 to Nov. 2024.
  • Conversations outside this time window are not impacted.

Are business customers potentially impacted?

  • This does not impact ChatGPT Enterprise, ChatGPT Edu, ChatGPT Business (formerly "Team") customers, or API customers.

What are you doing to protect my personal information and privacy?

  • We are running all affected chats through a de-identification procedure to remove, or "scrub," personally identifiable information (PII) and other sensitive content (e.g., passwords) from these conversations. A simplified illustration of this kind of scrubbing appears after this answer.
  • We would also push to only allow the Times to view this data in a secure environment maintained under strict legal protocols.
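As a rough illustration of what this kind of scrubbing can look like, the sketch below redacts a few common PII patterns with regular expressions in Python. It is not OpenAI's de-identification pipeline; the patterns and placeholder labels are assumptions for illustration, and production systems rely on much broader detection than simple regexes.

    import re

    # Illustrative patterns only; a real de-identification pipeline covers far
    # more (names, addresses, credentials, context-aware detection).
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
        "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def scrub(text: str) -> str:
        """Replace detected spans with placeholder tags such as [EMAIL]."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(scrub("Reach me at jane.doe@example.com or 555-123-4567."))
    # -> Reach me at [EMAIL] or [PHONE].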

How will you store this data?

  • The content covered by the court order is currently stored separately in a secure system. It's protected under legal hold, meaning it can't be accessed or used for purposes other than meeting legal obligations.
  • Only a small, audited OpenAI legal and security team would be able to access this data as necessary to comply with our legal obligations.

Who will be able to access this data?

  • The Times' outside counsel of record in the case and their hired technical consultants would be able to access the conversations. We will push to only allow The Times to view this data in a secure environment maintained under strict legal protocols.
  • If The Times continues to push to access it in any way that would make the conversations public, we will fight to protect your privacy at every step.

Does this court order violate GDPR or my rights under European or other privacy laws?

  • We are taking steps to comply at this time because we must follow the law, but The New York Times' demand does not align with our privacy standards. That is why we're challenging it.
  • As mentioned, we've taken additional steps to protect your privacy, such as de-identifying data and removing personally identifiable information.

Will you keep us updated?

  • Yes. We're committed to transparency and will keep you informed. We'll share meaningful updates, including any changes to the order or how it affects your data.

Author

Dane Stuckey, OpenAI

