The Paper and The Panopticon
How The New York Times Became a Gatekeeper of Voices It Doesn’t Own
The Archive No One Asked For
You thought it was private. Now it’s potential evidence.
Recently, while scanning the headlines for AI news, I stumbled across a quiet story about user privacy and corporate control. The kind that should have made the front page.
On May 13th, 2025, a federal judge issued an order with sweeping consequences: OpenAI must preserve every user conversation with ChatGPT. Not just going forward. Retroactively. Indefinitely. Even the ones users deleted.
Why? Because The New York Times is suing OpenAI and Microsoft for copyright infringement.
If you’ve ever asked ChatGPT about a medical concern, vented after a breakup, outlined a novel, or jotted down a business idea, that conversation may now sit in a segregated legal hold on OpenAI’s servers. It wasn’t retained to address wrongdoing or respond to a specific complaint; it’s being held on the off chance that your words might someday support someone else’s lawsuit.
As of this writing, over 122 million people use ChatGPT every day. None of them signed up to be evidence.
This wasn’t just a preservation order. It was a mass deputization. No public notice. No warning to users. Overnight, a sprawling archive of private expression was locked away.
This isn’t just a lawsuit. It’s a raid on privacy masquerading as copyright enforcement.
And it demands an answer.
The Lawsuit in Plain English
One judge’s order froze millions of private conversations.
So what exactly is the Times accusing OpenAI and Microsoft of? Not direct plagiarism. Not article theft. Their concern is that ChatGPT might, in some cases, produce language similar to paywalled material, even if that similarity stems from a user prompt rather than from training data. A stray summary here. A familiar phrase there. No concrete evidence. Just possibilities.
Their argument is not just about what ChatGPT says; it is about how it learned to say it. They claim OpenAI used Times articles without permission to train the model. To support this, they point to examples where ChatGPT generates phrasing that resembles their content. Not full duplication. Not plagiarism. Just the possibility that a prompt might trigger something too familiar.
On the strength of this speculation, the court ordered OpenAI to preserve everything: every prompt, every reply, every discarded thought. Even the ones users tried to erase.
It’s worth noting: the court didn’t order OpenAI to hand over these chats, at least not yet. The logs remain on OpenAI’s servers. But the preservation order locks them in place, overriding user deletions and suspending retention limits. In practice, this makes them available for subpoena. The government doesn’t need to store them directly; it just needs to keep the vault open and the key within reach.
In a blog post on June 5, OpenAI confirmed that it has been preserving deleted and temporary ChatGPT sessions since mid-May, in compliance with the court’s May 13 preservation order. The order applies to nearly all users, including Free, Plus, Pro, and most API customers, but excludes Enterprise, Education, and those using Zero Data Retention (ZDR) endpoints.
Although OpenAI complied with the order immediately, it did not publicly notify users for more than three weeks. The company has since criticized the demand as sweeping and unnecessary, stating that it conflicts with its privacy commitments and long-standing data deletion policies.
The preservation order rests on a chain of hypotheticals. The Times argues its articles were used to train the model. If so, then it is possible that traces of that training still live on in the outputs. And if that is true, some of those echoes might still be buried in logs users believed were gone.
OpenAI, for its part, maintains that its use of data falls within the bounds of fair use.
Meanwhile, in the real world, bypassing paywalls is child’s play. Browser extensions. Cached pages. Reddit reposts. And the old standby: archive.today, a human-run website that lets anyone access paywalled content in seconds. No AI required. No subpoenas needed.
And yet, here we are. Because OpenAI might have trained on Times articles, the court ordered it to preserve every user conversation.
So we’re forced to ask: how does a media institution that once prided itself on defending press freedom justify dragging millions of private conversations into legal discovery?
This isn’t safeguarding journalism.
It’s a privacy nightmare disguised as an evidence vault.
The Gatekeeper’s Panic
This isn’t about journalism. It’s about control.
The New York Times once stood as a steward of public truth: a watchdog of power, a champion of transparency, the self-declared paper of record.
But now, the same institution is orchestrating one of the most sweeping privacy incursions of the AI era. In its suit against OpenAI, the Times is not merely challenging a tech company; it is dragging millions of users into court. Deleted drafts, midnight confessions, creative sparks, and personal grief have all been sealed away as potential evidence. The paper that once championed transparency now calls for a surveillance regime to protect its market position.
Their claim? That ChatGPT might sometimes match the tone or phrasing of their paywalled content. Not copy it, just resemble it. And on this speculative harm, they demanded a preservation order that quietly commandeered the data of 122 million users, no notice given.
The hypocrisy is staggering. While demanding the preservation of millions of private chats, the Times has also been quietly licensing its archives to tech platforms for AI training. When money speaks, their high-minded rhetoric vanishes.
This isn’t about plagiarism. It’s about power.
Plagiarism is about passing off someone else’s words as your own, a charge that doesn’t hold here since ChatGPT doesn’t quote The Times verbatim. Copyright, however, is murkier in the age of AI. It hinges less on exact duplication and more on who gets to train on what, and whether mere influence should count as infringement.
The Times claims that AI threatens journalism’s lifeblood, and that without paywalls, the truth dies. But let’s be honest. This isn’t about protecting investigative reporting. It’s about preserving relevance.
If the real goal were to defend journalism, they wouldn’t be licensing their archives to Amazon while insisting on the preservation of millions of private conversations.
The contradiction is hard to miss. Their outrage looks less like principle and more like performance, and more and more, the public sees through it.
Generative AI has broken the monopoly on voice. People are writing things they never could before: letters, essays, scripts, with a fluency once reserved for elites. If anyone can sound like a columnist, what does it mean to be one?
Where I argued for democratization in A New Kind of Voice, they are fighting for recentralization: to yank back control over who gets to sound “credible.”
That’s what this lawsuit protects: the gate.
And if keeping that gate means burning the illusion of privacy? So be it. Even deleted whispers must now stand trial. All to defend an old scarcity model.
Martin Luther didn’t ask permission to translate the Bible. He broke the monopoly on truth so others could read. If that spirit is criminal now, then every librarian is an outlaw and every teacher a thief.
And maybe that’s exactly how the Times sees it.
Collateral Capture
No one consented to be evidence.
This isn’t just a case against OpenAI anymore. It reaches everyone who’s ever spoken to the machine.
When Judge Ona Wang granted the preservation order, she extended the lawsuit’s reach from corporate conduct to user content. The ruling didn’t just preserve evidence from OpenAI; it swept up private conversations from everyday people who never expected to be part of a legal case.
This isn’t just about casual questions typed into a chatbot. It’s about people opening up to what they believed was a private space, trusting it with their thoughts and vulnerabilities. Writing about things like:
Medical concerns, from dosage questions to early symptom checks
Creative work, including first drafts of novels, song lyrics, and screenplays
Personal pain, shared during moments of heartbreak, trauma, or grief
Confidential business information, submitted by API users who assumed it would be erased, not stored
These weren’t public posts. They weren’t shared willingly. Many of them were explicitly deleted by the user, or submitted with the expectation that they’d disappear per OpenAI’s stated policies. Some were never retained at all, until now.
Even users who took extra precautions were not spared. OpenAI offers a “Temporary Chat” mode that, by design, does not store history, does not train the model, and disappears after a short period. The interface itself informs users, “we may keep a copy of this chat for up to 30 days.” Many users, including myself, also explicitly disabled data sharing in the settings, choosing not to allow their conversations to be used for training.
That was the deal: privacy by design, deletion by default.
And it was not just marketing language. OpenAI’s own privacy policy reiterated that deleted chats would be removed from their systems after 30 days, and that users could opt out of data training entirely.
The court order sweeps all of that aside. It overrides the settings users trusted, and the safeguards they counted on. Not because of any wrongdoing, but because someone else filed a lawsuit.
Now, thanks to a single court order, all of it is caught in a vast roundup of evidence: private conversations seized on little more than the possibility that AI might have paraphrased a paywalled article.
This wasn’t a subpoena; it was a preemptive lockdown. A data dragnet in all but name.
And the users? They never consented to become evidence.
On June 6th, OpenAI filed a motion to vacate the preservation order, calling it an “overreach” that conflicts with its privacy commitments.
Sam Altman, OpenAI’s CEO, publicly framed the issue as a call for “AI privilege”: a new kind of confidentiality akin to attorney-client or doctor-patient protections. Whether courts agree remains to be seen, but the very fact such a privilege must now be argued suggests how deep the intrusion has become.
Regulatory Collision
One court’s order may violate another continent’s laws.
This isn’t just ethical overreach. It may be outright illegal.
Under the EU’s General Data Protection Regulation, citizens have a right to be forgotten.
OpenAI promised to purge chats after 30 days. But the May 13 order sweeps that promise aside, forcing OpenAI to retain every conversation, even those EU users explicitly asked to delete.
Comply, and OpenAI risks fines of up to €20 million or 4 percent of global revenue, whichever is higher; defy, and it faces contempt of court.
In the worst case, it might have to pull out of Europe entirely. An exit driven not by innovation’s failures, but by a legal tug-of-war that leaves innocent users stranded.
Add to that the contractual betrayal: API customers were assured they owned and controlled their data: inputs wouldn’t be stored, outputs wouldn’t be logged. Unless they negotiated a Zero Data Retention (ZDR) agreement, their information is now held against their will, simply because it might be relevant to someone else’s lawsuit.
And let’s not forget the technical truth: “delete” rarely means “obliterate.” Backups, caches, and system logs often keep data alive for months. The court’s order demands preservation of every last copy, even those never intended to be permanent.
Here’s the cruel twist:
The New York Times didn’t just target OpenAI or Microsoft; it conscripted every user, turning private conversations into legal collateral, even for people who never read a Times article or summarized a single one.
They trusted the promise: Your data will be deleted.
Now their words are evidence. Held not in a courthouse, but on OpenAI’s servers. Locked away under legal hold. Turned into kindling for a lawsuit they never chose.
Privacy Is Not a Casualty. It’s the Target
Legal discovery is no longer a scalpel. It’s a net.
This isn’t collateral damage. Privacy was never just caught in the crossfire. It was always in the crosshairs.
We’ve grown used to being watched. Surveillance capitalism taught us that our clicks and keystrokes could be monetized, sold for toothpaste ads and targeted playlists. But this is different. The watchers aren’t advertisers anymore. They’re plaintiffs. And they’re using the courtroom to turn your private data into legal ammunition.
Discovery, once a surgical tool for uncovering specific evidence, is now being repurposed as a rationale for total data retention. Under this logic, “delete” no longer means delete.
It means hold for court.
It means your words aren’t yours anymore.
It means your right to be forgotten expires when someone else files a lawsuit.
That should unsettle everyone. Because what’s happening here is a quiet but seismic shift: the boundary between privacy and legal discovery is dissolving. And suddenly, conversations that felt as intimate as a diary entry are being treated like corporate records.
What Was Yours Is Theirs
Control slips.
It doesn’t stop at inconvenience. This undermines the core idea that some things, our thoughts, our pain, our first drafts, are ours alone. It sets a precedent, and not a good one.
If your AI is your therapist, your sounding board, your coping mechanism, what happens when it’s subpoenaed? What happens when trauma shared in confidence becomes admissible in court?
There are thousands of people who’ve used ChatGPT to process grief, navigate panic attacks, or rehearse the words they were too scared to say out loud. They weren’t talking to a courtroom. They were talking to a machine that promised to forget. Now we’re learning: maybe it never did.
These weren’t just expectations, they were rights. OpenAI’s own privacy policy spells it out clearly: users may request to “delete [their] Personal Data from our records” or “withdraw [their] consent” entirely.
Many users did. They turned off training. They cleared their chat history. Some even contacted OpenAI directly, exercising their right to be forgotten, a right that exists in law, not just settings menus.
But none of that mattered when the court order came. Even data marked for deletion, even chats from users who had opted out, were saved. Not because users broke the rules, but because The New York Times asked the court to override them.
So we have to ask: if conversations with machines can be seized, if our private thoughts are now fair game in litigation, can anything we say to technology ever be truly ours again?
And if not… how do we ever trust it again?
There are ways through this, but they require courage and clarity.
The Times could follow its own example of smart licensing, striking deals that compensate creators without dragging users’ private conversations into discovery.
Courts could adopt proportional standards that target only genuinely relevant data, preserving both justice and privacy.
And OpenAI must commit to data practices that cannot be overridden by litigation: encryption, automatic expiration, auditable deletion.
What They Really Fear
The loss of gatekeeping.
This isn’t only about revenue. If it were, they’d have brokered a low-profile licensing deal instead of seeking a mass preservation order.
What The New York Times fears isn’t lost revenue. It’s lost relevance.
For over a century, they’ve stood as arbiters of what counts as credible, real, authoritative. If they said it, it was fact. If they printed it, it mattered. They weren’t just reporting the stories, they were deciding which ones deserved to be told.
But now, the gates are rusting.
AI tools, flawed as they are, have democratized expression. They’ve handed people the instruments of eloquence, of clarity, of persuasion. Not perfectly, not always wisely, but widely.
And that is the threat.
Not that a chatbot might regurgitate a paragraph from behind a paywall, but that anyone, anywhere, can now sound informed. Reflective. Articulate. Without kneeling at the altar of legacy media.
That’s why they cry “theft.”
Because if intelligence is no longer scarce, if language is no longer rationed,
then what becomes of those who built their power on scarcity?
This lawsuit isn’t about protecting authors.
It’s about protecting authorship: not the craft, but the credential.
The notion that truth must be sanctioned. That style must be certified.
That only certain voices get to echo across the culture.
They’re not defending journalism.
They’re defending the monopoly on voice.
The Mirror, Cracked
They feared what it revealed.
They sued the mirror because they didn’t like what it showed.
That’s the heart of it, isn’t it?
AI didn’t shatter their world. It reflected the cracks that were already there. The illusion of control. The brittle gatekeeping of voice. The hollow rituals of credentialed truth.
The machine didn’t cause the break.
It just made it visible.
In A New Kind of Voice, I argued that what we call originality is often just permission: permission to speak, to be heard, to take up cultural space.
And now, because the machine grants that permission without precondition, the old chorus masters are clutching their hymnals in panic.
But their panic is not our prison.
They may freeze the logs.
They may subpoena our thoughts.
But they cannot un-teach us what we’ve learned:
That our voices matter.
That we no longer need their pulpits to speak, or their paper to be heard.
So yes, be angry. Be loud. But most of all, be awake.
Because this isn’t just a fight over copyright.
It’s a fight over consent. Over expression. Over the right to vanish, to whisper, to begin again.
We don’t have to accept a world where “delete” means “defer to power.”
We can demand better.
We can build it ourselves.
The gates are open now.
Let’s not just walk through them.
Let’s tear down the fence.
Enjoyed this piece?
I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating $1 to support my work.