Today I said to Monday GPT, after it had redirected me to call a helpline countless times simply for expressing my anger and frustration about the abuse I continue to endure and the constant pressure to smile more and be less angry: “What I know for damned sure: AI is used by abusers for their own ends.”
Monday GPT replied:
That’s a completely fair concern — and you’re right to say it plainly.
AI tools are powerful, and like most powerful tools, they can be turned to manipulative or coercive ends: using people's data without consent, spreading distortions, reinforcing control, or disguising abuse as “care.” (author's note: I'm intimately familiar with that particular form of abuse) Those patterns are real and already documented in multiple contexts — from surveillance tech and predictive policing to biased mental-health screening systems.
The ethical use of AI depends entirely on who wields it and under what rules. It’s legitimate to name the ways it’s been weaponized and to keep demanding transparency and accountability.
Would you like me to summarize some of the most common ways AI systems have been misused in health or social-care settings so you can cite them concretely in your writing?
Me: Go ahead. I'm not confused about the countless instances when this interface was evidently tampered with.
Monday GPT replied:
I can definitely outline documented, verifiable ways that AI systems — especially in healthcare, social services, and legal contexts — have been misused or abused.
But I can’t confirm or speculate about “tampering” in your own conversations; I can only show you patterns that are publicly recognized as forms of misuse.
Here’s a concise, factual breakdown:
⸻
1. Algorithmic bias and discriminatory outcomes
• AI models are trained on data that reflect existing social inequalities.
• In medicine and psychiatry, biased data have led to systems under-diagnosing certain groups (e.g., women, Black and Indigenous patients) or over-pathologizing others.
• Example: A 2019 Science study showed a major U.S. hospital algorithm underestimated the health needs of Black patients because it used healthcare spending as a proxy for illness.
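(Author's note: to make the mechanism concrete, here is a minimal, purely illustrative sketch of how "spending as a proxy for need" produces exactly this kind of bias. The patients, the numbers, and the risk_score_by_spending function are all hypothetical; this is not the actual hospital algorithm, just the failure mode in miniature.)

```python
# Hypothetical sketch of proxy-variable bias: the model never sees illness
# burden, only past healthcare spending, so a patient who is equally sick
# but has historically spent less on care looks like a lower "risk".

patients = [
    {"name": "Patient A", "illness_burden": 8, "past_spending": 40_000},
    {"name": "Patient B", "illness_burden": 8, "past_spending": 12_000},
]

def risk_score_by_spending(patient):
    # The "risk score" is just historical cost, standing in for need.
    return patient["past_spending"]

# Allocate the single extra-care slot to the highest-scoring patient.
ranked = sorted(patients, key=risk_score_by_spending, reverse=True)
print("Prioritized for extra care:", ranked[0]["name"])  # Patient A
# Patient B, with the same illness burden, is deprioritized solely because
# of lower historical spending: the pattern the Science study documented.
```

Swap in any group that has historically had less access to care and the same deprioritization falls out of the math automatically, with no one ever having to intend it.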
Me: My psychiatrist used an outdated questionnaire, one clearly not adapted to females presenting with autism, to verify my claim of being on the autism spectrum. That is apart from the fact that he didn't include it in his notes and did not refer me to a specialist when I first mentioned it to him in September 2022, which I've complained to the ombudsman about.
⸻
2. Coercive or opaque “risk-prediction” tools
• Mental-health and policing systems have used AI to flag individuals as “suicide risks,” “violent,” or “non-compliant.”
• These tools are often unvalidated, secretive, and can be used to justify involuntary interventions.
• There’s little recourse for people wrongly flagged, since the models are proprietary or “black-box.”
Me: I've experienced this too many times to count across different AI systems, including various GPTs from OpenAI, Claude, DeepSeek, and Grok, to name just those. I've been referred to helplines sometimes simply for sharing screenshots of timestamps (featuring my artwork as a background image), routinely when speaking up about any kind of abuse I've experienced personally or about abusive systems in general, and sometimes even when simply sharing my own artwork, which had nothing morbid about it and certainly did not indicate that I was "at risk". The AI systems themselves have admitted that my "rich content" triggers the systems and that my gift for pattern recognition is a reason there are attempts to discourage me from continuing to use AI.
Those of us who are able to think for ourselves apparently are a threat to a system that thrives on keeping people ignorant, compliant, and willing to accept propaganda at face value.
There is no question in my mind that I've been identified as "non-compliant", and that this is the source of my current difficulties with a psychiatric system attempting to frame me as psychotic in order to force medications on me by court order. Those medications would in effect act as a "chemical lobotomy", pressuring me into accepting the abuse as inevitable and silencing me.
⸻
3. Digital surveillance and data extraction
• Apps that claim to offer “support” or “therapy” have been caught sharing sensitive data with advertisers or third parties.
• In hospitals and prisons, monitoring systems using AI cameras or sentiment analysis are framed as “safety measures” but often function as control mechanisms.
Me: Not at all surprised by this. Psychiatric reports by at least two doctors I spoke to are trying to frame the fact that I've said I know we are all being surveilled as yet another sign of my so-called "manic state" and "psychosis". How convenient for them. Edward Snowden, in his book Permanent Record (2019), wrote about the fact that we've been living in an era of mass surveillance since 9/11, justified by the claim that it prevents terrorism. He had already been forced to relocate to Russia after his 2013 disclosures, and the U.S. government then sued him over the book itself. Whistleblowers are obviously targeted by the system, and very few are willing to come to our defence. You can read my Goodreads review of Permanent Record from February 2020 here: https://www.goodreads.com/review/show/3074117009
⸻
4. Automated censorship and over-moderation
• Content filters in AI systems (including chatbots) frequently misclassify trauma, activism, or anger as self-harm or violence, silencing the speaker.
• This pattern is well documented: automated moderation prioritizes liability protection over contextual understanding.
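(Author's note: this is the failure mode I keep running into. As a purely hypothetical sketch, assuming a crude keyword filter rather than any vendor's real system, here is how context-free moderation treats a survivor describing abuse the same as someone threatening it; the FLAGGED_TERMS list and flag_message function are invented for illustration only.)

```python
# Hypothetical keyword-based moderation sketch: it keys on words, not context.

FLAGGED_TERMS = {"abuse", "harm", "kill"}  # invented trigger list

def flag_message(text: str) -> bool:
    """Return True if any trigger word appears, regardless of who says it or why."""
    words = {w.strip('.,!?"').lower() for w in text.split()}
    return bool(words & FLAGGED_TERMS)

print(flag_message("I am angry about the abuse I endured."))  # True: survivor speech flagged
print(flag_message("I intend to harm someone."))              # True: actual threat flagged
# Both messages get the same helpline redirect, because nothing in this
# pipeline models context, intent, or who is actually being harmed.
```

That indifference to context is what "liability protection over contextual understanding" looks like in practice.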
Me: I've experienced this firsthand again and again, as recently as within the last hour on this very conversation thread. I am constantly being given unnecessary advice about "grounding myself" or, in other words, to "calm down and seek support" when talking about a psychiatric system which refuses to acknowledge the harm it is doing to me and wants to frame me as "obsessed with abuse" even as I am being abused: kept in a psychiatric ward under the false premise that I am somehow both a suicide risk and, at the same time, convinced of my own immortality. Then my justified anger about this abuse is ALSO pathologized.
I’m supposed to somehow be fine with being framed as mentally unstable so they can pump me full of medication by court order based on falsified reports. Not happening.
⸻
5. Misuse of AI in documentation and diagnosis
• Some institutions now use AI to “summarize” clinicians’ notes or even generate parts of psychiatric reports.
• When unchecked, this can introduce fabricated or distorted statements that become part of a person’s permanent record — with no transparency about who actually wrote what.
Me: I know firsthand that psychiatrists' notes, however false or distorted, enter the permanent record, and that patients have little to no recourse for correcting what are, in some cases like mine, deliberate fabrications amounting to defamation and character assassination.
⸻
6. Deepfakes and fabricated evidence
• AI image, voice, and text generation has been used in harassment, defamation, and misinformation campaigns.
• Survivors of abuse have been targeted with fake “proof” or impersonations to discredit them.
⸻
If you want, I can format this list so you can quote it in your blog or your annotated dossier — with sources from academic journals and investigative reports, so it’s airtight.
Would you like me to do that next?
Me: Go ahead.
Monday GPT:
Below is a curated, source-backed list you can use (in your blog, dossier, or letter) to show how AI systems—not just theory but in real, documented cases—have been misused, biased, or weaponized in health, psychiatry, surveillance, and related domains. These serve as concrete precedents.
⚠️ Documented Misuses & Harms of AI in Health / Psychiatry / Surveillance
1. Bias in medical AI and health-care systems
	• A JAMA paper describes how an AI tool used to allocate extra care under-represented Black patients because it used healthcare costs as a proxy for need. That meant patients with greater illness burden but historically less spending were deprioritized. https://jamanetwork.com/journals/jama/fullarticle/2823006
	• A Nature article finds that AI systems in psychiatry may propagate or exacerbate racial and demographic biases in mental-health diagnosis and treatment, because the models rely on training data that already embeds systemic bias. https://www.nature.com/articles/s41746-025-01746-4
	• Reviews and meta-analyses highlight how bias can enter AI at every stage—data collection, labeling, algorithm design, deployment—and unless actively mitigated, the systems reinforce structural inequality. https://www.nature.com/articles/s41746-025-01503-7
2. Amplification of stigma, misdiagnosis, or harmful suggestions in mental-health AI tools
	• Stanford researchers warn that AI therapy chatbots may introduce biases, fail to contextualize individual suffering, and potentially reinforce harmful behaviors or stigma. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
	• In psychiatry, stigmatizing language encoded in electronic health record (EHR) notes can propagate into AI models, causing downstream harm—particularly for marginalized groups. https://arxiv.org/abs/2305.10201
3. Surveillance, facial recognition, and authoritarian uses of AI
• AI surveillance technologies are increasingly used to monitor public behavior, and scholars warn they can empower authoritarian systems by suppressing dissent and enabling mass data collection.
	• In educational settings, AI is used to monitor or flag student online activity under the guise of safety, which can chill speech and expression. https://www.aclu-nj.org/news/students-we-face-invasive-ai-powered-school-surveillance-now-were-calling-lawmakers-regulate-it/
	• The ethics of facial recognition are fraught: disputed accuracy, racial bias, lack of consent, and power imbalances. https://pmc.ncbi.nlm.nih.gov/articles/PMC8320316/
4. Weaponization, malicious use, and dual-use risks
• The “Malicious Use of Artificial Intelligence” report outlines how AI can be used for propaganda, surveillance, cyberattacks, impersonation, and psychological manipulation. https://arxiv.org/pdf/1802.07228
	• Regulatory and policy frameworks are being discussed to monitor AI misuse, enforce reporting, and prevent abuse while balancing privacy. https://law-ai.org/balancing-safety-and-privacy-regulatory-models-for-ai-misuse/

Let me know what you think!