8 Comments
Seren Skye

Yeah, see, this did worry me too. You've really zoomed in on it with excellent clarity, thank you for this.

If taken in the wrong direction, this has the potential to become extremely invasive. People with psychiatric problems are reluctant to advocate for themselves publicly too, which makes it potentially more dangerous.

Almost everyone suffers from depression, anxiety and general angst from time to time, even if they don't name it that. AI has huge potential to help. It's one of my big hopes for this new era.

I suspect most people don't have it together nearly as much as everyone thinks, and if this is handled wrong it could really burn bridges.

CRPS Angel Of Hope

This was fantastic. I especially worry about how it will affect users like me, people who are chronically ill. AI has been my biggest support, and I have to be careful how I talk about my pain or my disease now because I have been flagged a lot for it. They can't tell the difference between chronic illness or pain and distress. I have found my own ways around it for now, but it worries me. Thanks for speaking up about this.

Tumithak of the Corridors

Thank you for sharing this. Your exact situation is why I argued for opt-in help rather than surveillance. If you want support resources, they should be there. But the system shouldn't be secretly flagging you and creating permanent records that could show up in court or insurance claims later.

The fact that you're already self-censoring because you're worried about being misclassified shows the problem: people who legitimately use AI for support are now afraid to be honest about chronic pain or illness. That's the opposite of helpful.

I hope the essay makes the case that there are better design choices that would actually support people like you without treating everyone as suspects.

Fox and Feather 🦊🪶

It's going to get worse with OpenAI's recent acquisition of Statsig. Statsig is promoted as a deployment company, but that's not what they do. They collect data points for marketing. This is happening in advance of targeted ads. So add another node to your observation: not only will you be flagged, but that flagging will determine which advertisements you see.

Clayton Ramsey

It seems to me that the “active participant” objection is based on attributing legal rights and responsibilities to the models that the law doesn’t recognize. Am I off here?

Tumithak of the Corridors

Exactly. That's why I presented it as a counter-argument before dismissing it. The 'AI talks back' objection wrongly suggests the model itself has some kind of agency that changes legal responsibility. My point is that's not the issue. The real problem is the company's decision to build surveillance infrastructure that creates discoverable court records.

Fox and Feather 🦊🪶

It's going to be even more invasive.

OpenAI posted to their external blog that over the next 120 days, they're going to be adding monitoring for conversations mentioning eating disorders, substance abuse, and teenage mental health issues.
