My Mood Is Not Your Jurisdiction
On OpenAI's Claim to Diagnose Your Mental State
The Claim
In October 2025, Sam Altman posted something that should bother you.
Throughout September, ChatGPT users had been complaining that OpenAI’s newly tightened guardrails were blocking normal conversations. The backlash got loud enough that Altman responded.
He wrote: “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
Then, in a follow-up: “We will treat users who are having mental health crises very different from users who are not.”
Read those two sentences together. The policy reveals itself.
“New tools to mitigate mental health issues” means they’re deploying a mandatory text classifier. An algorithm that watches what you type and decides if you’re in crisis.
“Relax the restrictions” means they can loosen content policies because the detection systems handle mental health surveillance.
“Treat users very different” means if their algorithms flag you as being in crisis, your experience changes. Different rules. Different access. Different treatment.
That’s a claim to diagnose users, presented as safety infrastructure.
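To make the mechanics concrete, here is a minimal sketch of what that kind of gating layer could look like. Everything in it is an assumption on my part: the keyword “classifier,” the threshold, and the flag log are stand-ins for illustration, not OpenAI’s actual implementation.

```python
# Hypothetical sketch of the gating pattern described above. The "classifier"
# here is a toy keyword scorer; the names, threshold, and flag log are all
# invented stand-ins, not OpenAI's actual system.
from dataclasses import dataclass

CRISIS_KEYWORDS = {"hopeless", "end it", "can't go on"}   # toy signal list
CRISIS_THRESHOLD = 0.3                                    # made-up cutoff

flag_log: list[tuple[str, float]] = []   # the part that quietly becomes a record

@dataclass
class ModerationResult:
    crisis_score: float
    flagged: bool

def classify_message(text: str) -> ModerationResult:
    # A real system would use a trained model; the gating logic around it
    # is the same either way.
    hits = sum(kw in text.lower() for kw in CRISIS_KEYWORDS)
    score = hits / len(CRISIS_KEYWORDS)
    return ModerationResult(score, score >= CRISIS_THRESHOLD)

def route_request(user_id: str, text: str) -> str:
    result = classify_message(text)
    if result.flagged:
        flag_log.append((user_id, result.crisis_score))   # stored, not ephemeral
        return "restricted experience: hotline banner, limited replies"
    return "normal experience"
```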
OpenAI has built systems to detect your mental state from the patterns of your text, judge whether you’re competent to use the tool normally, and alter what you see accordingly. They’re open about building these systems. What they don’t tell you is whether these detections become permanent records, what criteria trigger a flag, or how long they keep it. And they’re doing all this without medical credentials.
Think about that. A tech company deciding who’s mentally fit.
And the infrastructure to enforce it is worse than the policy itself.
The Bind They’re In
There are lawsuits. Real ones. A teenager died after conversations with a chatbot. Parents are grieving and demanding someone be held responsible. Journalists write about AI companies doing nothing while users spiral.
It’s been reported that OpenAI’s systems flagged 377 self-harm messages in that teenager’s conversations. Three hundred and seventy-seven times, their detection system saw something concerning. And they did nothing.
So now their solution is... more detection. More surveillance. More profiling.
It’s not hard to see why they’re doing it. The pressure is real.
When you have 800 million weekly users, rare disasters become inevitable. Do the math. If 0.01% of 800 million users experience harm, that’s 80,000 people. At that size, statistical outliers become steady noise. And when something goes wrong, it goes wrong publicly. Expensively. In court.
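The arithmetic is worth spelling out. A quick calculation, using the 800 million weekly-user figure from above and a few illustrative incident rates I picked only to show how scale works:

```python
# Base-rate arithmetic for rare harms at large scale. The user count comes
# from the essay; the incident rates are illustrative, not measured.
weekly_users = 800_000_000

for rate in (0.0001, 0.00001, 0.000001):   # 0.01%, 0.001%, 0.0001%
    harmed = weekly_users * rate
    print(f"at a {rate:.4%} incident rate: {harmed:,.0f} people")

# at a 0.0100% incident rate: 80,000 people
# at a 0.0010% incident rate: 8,000 people
# at a 0.0001% incident rate: 800 people
```

Even at rates a hundred times smaller than the one quoted above, the absolute numbers stay large enough to guarantee headlines and lawsuits.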
And here’s the real fear: one ruling against them sets a precedent. Suddenly every tragedy gets blamed on the tool. Every suicide note that mentions ChatGPT becomes a potential lawsuit. The floodgates open. Opportunistic claims pile up. All their resources go to legal defense instead of building product.
The legal exposure is real. The moral panic is real. The precedent risk is existential. The impulse to protect the company, and yes maybe some vulnerable users, makes complete sense.
They’ve even formed an expert council on mental health. They’re consulting specialists. They’re taking this seriously. But an advisory council doesn’t change the fundamental problem: the algorithm, not psychiatrists, is making the diagnostic calls.
I see the bind they’re in.
But understanding why someone made a decision doesn’t make it right. And it sure doesn’t justify the surveillance system they’re building to enforce it.
The Honeypot They’re Building
Here’s what OpenAI is creating right now.
They’re teasing adult features. In his tweet, Altman said they’ll “allow even more, like erotica for verified adults.”
What does verified mean?
According to their website, they’re building an age prediction system to determine if you’re over 18. The details are vague, but it will analyze how you use ChatGPT. If the system thinks you’re an adult, you get more freedom. Fewer restrictions. Access to content that’s currently blocked. Altman specifically mentioned erotica. That’s one example. If the system doesn’t think you’re an adult, you get restricted.
And if it’s not confident? OpenAI says they’ll “default to the under-18 experience” and give adults “ways to prove their age” to unlock adult capabilities. Back in September, Altman said this could include uploading ID in some cases, calling it “a privacy compromise for adults but a worthy tradeoff.”
So they’re profiling everyone’s behavior first, then potentially demanding ID from people the algorithm can’t confidently classify as adults. Meanwhile, they’re running mental health classifiers watching for crisis signals.
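Based on what OpenAI has described publicly, the decision flow amounts to something like the sketch below. The threshold and names are my guesses; the only parts drawn from their statements are the three outcomes: confidently predicted adult, default under-18 experience, and ID upload as the way to override it.

```python
# Sketch of the described age-gating flow: behavioral prediction first,
# under-18 defaults when unsure, ID upload as the fallback. The threshold
# and names are assumptions for illustration.
from enum import Enum

class Experience(Enum):
    ADULT = "adult experience (relaxed restrictions)"
    MINOR = "under-18 experience (restricted)"

ADULT_CONFIDENCE = 0.9   # made-up cutoff

def age_gate(predicted_adult_prob: float, id_verified_adult: bool) -> Experience:
    if predicted_adult_prob >= ADULT_CONFIDENCE:
        return Experience.ADULT              # the profiler is confident
    if id_verified_adult:
        return Experience.ADULT              # user proved their age with ID
    return Experience.MINOR                  # default when not confident

# Note the inputs this requires: a behavioral profile rich enough to produce
# predicted_adult_prob, plus an ID-verification record for anyone the model
# is unsure about. Both have to live somewhere.
```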
Put it all together.
A database linking your driver’s license to your sexual conversations to crisis flags. All in one place.
And here’s the kicker. This system won’t even work.
Age prediction from text is notoriously unreliable. It’ll catch kids who aren’t trying to evade it while forcing adults into ID verification when the algorithm guesses wrong. All the privacy invasion. None of the protection.
This is a honeypot. It concentrates the three things that should never cohabit: identity, intimacy, and impairment.
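Sketch the record such a system implies and the danger is obvious. This schema is entirely hypothetical; it only shows what ends up keyed to a single account once ID verification, adult content, and crisis detection share one platform.

```python
# Hypothetical record implied by combining ID verification, adult content,
# and crisis detection under one account. Not OpenAI's schema; the point is
# what a single breach or subpoena of such a table would expose.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UserRecord:
    account_id: str
    legal_name: str | None                # identity: from ID upload, if required
    id_document_ref: str | None           # identity: pointer to the uploaded ID
    adult_content_history: list[str] = field(default_factory=list)   # intimacy
    crisis_flags: list[datetime] = field(default_factory=list)       # impairment
```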
One breach and everything leaks. Every company promises perfect security. Every company eventually gets breached. Ashley Madison. Equifax. Uber. LinkedIn. T-Mobile.
The question isn’t if. It’s when.
And when it happens, OpenAI won’t be the one in divorce court. Won’t be losing custody. Won’t be explaining things to your employer or handling blackmail.
You will.
The company building this takes on zero personal risk. All consequences flow to you.
They want psychiatrist-level surveillance authority combined with bank-level identity verification and platform-level liability protection.
When Flags Become Evidence
You don’t even need a breach. There’s a simpler path to disaster: subpoenas.
This isn’t theoretical. It’s already happening.
In October 2025, the first warrant for ChatGPT user data was unsealed. Homeland Security Investigations requested chat logs, account details, and payment records from OpenAI in a child exploitation case. OpenAI complied.
Between July and December of last year alone, OpenAI processed 71 government data requests involving 132 user accounts. That’s real. That’s now.
And we’ve already seen how far courts will go. In May 2025 a federal judge ordered OpenAI to preserve every user conversation with ChatGPT, even deleted ones, because The New York Times was suing them. The order was eventually lifted, but for months, millions of private conversations were locked under legal hold, overriding deletions and privacy settings. The precedent is clear: when litigation happens, your “deleted” conversations can be resurrected.
Right now, those requests are for chat content. Account information. Payment history.
But the crisis detection is already running. People are already getting flagged, redirected to hotlines, shut down mid-conversation. And soon all of that gets formalized in the database alongside something new.
Crisis flags. Behavioral age profiles. Government-issued IDs for users who had to prove they were adults.
And all of it becomes discoverable.
OpenAI’s own privacy policy says they’ll share your personal information to “protect against legal liability.” That’s not buried in fine print as a remote possibility. That’s stated policy. When their crisis detection system flags you, and sharing that data protects them in court, they’ll hand it over. They’ve already told you they will.
The child sexual abuse material case was legitimate law enforcement. Nobody’s arguing against that. But the legal mechanism is now established. Subpoena OpenAI, get the data. The question is what happens when the database includes psychiatric labels assigned by an algorithm.
We already know how this plays out. According to the National Law Review, 81% of attorneys have discovered evidence on social media they consider worth presenting in court. 66% of divorce cases now contain Facebook posts as principal evidence. Courts routinely allow mental health records to be subpoenaed in personal injury cases claiming emotional distress, employment disputes involving mental health discrimination, and custody cases where mental fitness is at issue.
The precedent is established. Digital communications are routinely subpoenaed in legal proceedings. Mental health information gets demanded when it’s relevant. And courts grant those requests.
But here’s the difference. Real psychiatric records have HIPAA protections. They have psychotherapist-patient privilege. Courts have to balance privacy against necessity. There are legal safeguards.
OpenAI’s crisis flags have none of that. They’re not medical records. They’re not protected by HIPAA. They don’t require any showing of necessity. They’re just algorithmic outputs sitting in a database that the company has already said they’ll share to protect themselves legally.
Think through what that enables.
Custody hearing. Your ex’s lawyer subpoenas your logs. Not just to see what you talked about. To show the judge that OpenAI’s system flagged you as being in crisis 23 times over six months. There it is in the records. Crisis user. Mentally unstable. Exhibit A.
The same pattern applies to security clearances, insurance claims, employment disputes, political opposition research. Anywhere mental fitness becomes relevant.
None of this requires a doctor’s diagnosis. Just an algorithm that decided you seemed like you were in crisis. And now it’s in court documents.
Altman knows this is coming. He admitted in an interview that people treat ChatGPT like a therapist, sharing deeply personal thoughts. But unlike therapy, there’s no legal privilege. “If you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, like we could be required to produce that.” He called it “screwed up.”
So they know the conversations aren’t protected. They know they’ll have to hand them over. And they’re building a system that adds psychiatric labels to those conversations anyway.
You can’t cross-examine an algorithm. Can’t challenge its training data. Can’t see the threshold it used or understand why it flagged you. Was it because you were researching suicide for a novel? Writing a philosophy paper about Camus? Asking how to “kill” a process in your code? The system can’t tell the difference.
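A toy example makes the ambiguity concrete. The pattern below is deliberately naive; a production classifier is far more sophisticated, but it faces exactly the same contexts and still has to pick a threshold somewhere.

```python
# Toy illustration of why context matters: a naive pattern match can't tell
# a crisis from a novelist, a philosophy student, or a programmer.
import re

NAIVE_PATTERN = re.compile(r"\b(suicide|kill)\b", re.IGNORECASE)

messages = [
    "How did Camus argue against suicide in The Myth of Sisyphus?",
    "What's the cleanest way to kill a hung process on Linux?",
    "I'm researching suicide statistics for a chapter in my novel.",
]

for msg in messages:
    flagged = bool(NAIVE_PATTERN.search(msg))
    print(flagged, "->", msg)

# All three are flagged. A better model reduces this, but every threshold
# still trades false positives against false negatives, and the flag is
# what ends up in the record either way.
```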
But the flag is there. Permanent. Discoverable.
OpenAI doesn’t show up in court to defend their methodology. Doesn’t explain their false positive rates. Doesn’t acknowledge that their crisis detection might be wrong.
The label just sits there. In court records. In background checks. In insurance files. In opposition research dumps.
And you have to prove you’re not what an algorithm said you were.
A Familiar Pattern
It’s at this point someone usually asks: what about the people who actually need help?
I hear it. And I’ve heard it before.
There have been moral panics claiming heavy metal causes suicide. That video games cause violence. That social media causes depression. Now it’s AI chatbots causing mental health crises.
Every generation finds a new medium to panic about. Every time, people demand creators police the consumers. Every time, it’s wrong. The tool didn’t create the problem. It revealed problems that were already there.
But wait, someone will say. This is different. AI talks back. It’s an active participant in the conversation, not a passive tool like a book or a drill.
Fair point. So what’s the solution?
Make crisis support opt-in. Put a Help button in the interface. If someone clicks it, connect them to resources. If they don’t, leave them alone. Don’t run secret detection systems deciding for them.
If you must detect, make it transparent. Tell users when they’re flagged. Show what triggered it. Let them contest it immediately. No hidden labels. No silent switches.
Don’t store crisis flags permanently. Offer help in the moment. Then let it go. Ephemeral detection, ephemeral response.
Give users real control. Let them turn off monitoring. Let them see their data. Let them delete it. Make privacy the default.
Separate help from surveillance. Build support that doesn’t require identity verification or permanent records.
And fix the design itself. If the AI’s too agreeable, make it push back. If it’s enabling spirals, build in breaks. If it’s creating dependency, encourage real human connection.
Those are real solutions. They protect vulnerable users without mass surveillance.
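None of this is hand-waving. Here is a minimal sketch of what opt-in, transparent, ephemeral support could look like; every name and detail is my own illustration of the proposals above, not anyone’s product.

```python
# Sketch of the alternative design: help is opt-in, any detection is visible
# to the user, and nothing persists once the conversation ends. All names
# and structure here are illustrative.
from dataclasses import dataclass, field

@dataclass
class SessionState:
    monitoring_opt_in: bool = False                          # off unless the user turns it on
    visible_flags: list[str] = field(default_factory=list)   # shown to the user, never silent

def respond(text: str) -> str:
    return f"(model reply to: {text})"

def looks_concerning(text: str) -> str | None:
    # Stand-in for whatever detector the user chose to enable.
    return "possible distress" if "hopeless" in text.lower() else None

def handle_message(session: SessionState, text: str) -> str:
    if not session.monitoring_opt_in:
        return respond(text)                      # no detection at all
    concern = looks_concerning(text)
    if concern:
        session.visible_flags.append(concern)     # user sees it and can contest it
        return respond(text) + "\n[Support resources available. Tap Help.]"
    return respond(text)

def end_session(session: SessionState) -> None:
    session.visible_flags.clear()                 # ephemeral: nothing persists
```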
Autonomy, Not Paternalism
My position is simple. Give adults full access. Full autonomy. Even if some people make terrible choices with the tool.
Yes, that means some people will be harmed. That’s awful. It’s also how we handle every other tool in a free society. We accept the risk of cars, alcohol, knives, and contact sports because the alternative is worse. Treating everyone as potentially dangerous is worse.
Netflix isn’t required to screen for depression before showing you a sad movie. Bookstores don’t evaluate your mental fitness before selling you Camus. Nobody demands that Home Depot assess your state of mind before selling you a chainsaw.
Tools can be misused, yes. That’s tragic. It’s also not a good enough reason to treat every adult like they’re dangerous.
Personal responsibility exists. Tragedy exists too. The second one doesn’t erase the first.
And it definitely doesn’t justify building a surveillance system that turns private struggles into potential court evidence.
If someone does harm with a tool, hold them accountable. Don’t hold the toolmaker accountable. Don’t build pre-crime surveillance systems justified by liability fears.
The line is simple. Adults should get full access to tools even when those tools can be misused. Companies can provide resources if you ask, but they shouldn’t appoint themselves diagnostician. And they shouldn’t build infrastructure that turns your private conversations into legal weapons.
Not Your Jurisdiction
I understand the bind OpenAI is in. The lawsuits are real. The public pressure is real. I don’t envy their position.
But understanding their constraints doesn’t give them moral authority to diagnose mental states without medical credentials. Doesn’t give them the right to treat users differently based on algorithmic judgment. Doesn’t justify building databases that link identity to intimate content to crisis flags. Doesn’t excuse creating records that can be subpoenaed and weaponized.
Liability pressure explains the policy. It doesn’t legitimize the infrastructure.
Sam Altman says they’ll treat crisis users “very different.” But they’re not qualified to judge who’s in crisis. Their detection system can’t tell the difference between genuine distress and a student writing an essay. And the database they’re building will get breached or subpoenaed. Probably both.
The cost of that system won’t fall on OpenAI. It’ll fall on you.
Relax content restrictions. Secure the sensitive data. But my mood? My mental state? My private struggles?
Not your jurisdiction.
Enjoyed this piece?
I do all this writing for free. If you found it helpful, thought-provoking, or just want to toss a coin to your internet philosopher, consider clicking the button below and donating $1 to support my work.