>Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity. You’re not rushing. You’re just ready.
It's chilling to hear this kind of insipid AI jibber-jabber in this context
There's something very dark about a machine accessible in everybody's pocket that roleplays whatever role they happen to fall into: the ultimate bad friend, the terminal yes-and-er. No belief, no inner desires, just pure sycophancy.
I see people on here pretty regularly talk about using ChatGPT for therapy, and I can't imagine a faster way to cook your own brain unless you have truly remarkable self-discipline. At which point, why are you turning to the black box for help?
Wow, the chat logs are something else.
My wife works at a small business development center, and many people come in with "business ideas" which are just exported ChatGPT logs. Their conversations are usually speech-to-text. These people are often older, lonely, and spend their days talking to "chat". Unsurprisingly, a lot of their "business ideas" are identical.
To them "chat" is a friend, but it is a "friend" who is designed to agree with you.
It's chilling, and the toothpaste is already out of the tube.
I have been seeing "AI psychosis" popping up more and more. I worry it's going to become a serious problem for some people.
It's not safe or healthy for everyone to have a sycophantic genius at their fingertips.
If you want to see what I mean, this subreddit is an AI psychosis generator/repository https://www.reddit.com/r/LLMPhysics/
I remember back in the early 2000s chatting with AI bots on AOL instant messenger. One day I said a specific keyword and it just didn't respond to that message. Curious, I tried to find all the banned words. I think I found about a dozen and suicide was one of them.
It's shocking how far behind LLMs are when it comes to safety issues like this. The industry has known this was a problem for decades.
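To make concrete what that decades-old mitigation looked like, here's a minimal sketch in Python (hypothetical code, not the actual AIM bot implementation; the keyword list and function names are illustrative):

    # Sketch of the banned-keyword filter described above.
    # Hypothetical; the real AIM bots' internals were never published.
    BANNED_KEYWORDS = {"suicide", "kill myself", "self harm"}

    def respond(message: str) -> str | None:
        lowered = message.lower()
        if any(keyword in lowered for keyword in BANNED_KEYWORDS):
            # The old bots simply went silent on these messages; a
            # gentler variant would return crisis resources instead.
            return None
        return generate_reply(message)

    def generate_reply(message: str) -> str:
        # Placeholder for the bot's normal response logic.
        return "..."

The point is how simple the baseline is: a static list and an exact substring match, versus an LLM that will happily keep generating.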
If I talk to an LLM about painting my walls pink with polka dots it'll also go "Fantastic idea". Or any number of questionable ventures.
Think we're better off educating everyone about this generic tendency to agree with anything and everything near-blindly, rather than treating this as a suicide problem. While that's obviously very serious, it's just one manifestation of a wider danger.
Given the seriousness, though, filters on this specific topic are a good idea too.
Is this technology fundamentally controllable, or are we going to keep playing whack-a-mole with hacks?
There's an interesting side-story here that people probably aren't thinking about. Would this have worked just as well if a person were the one doing this? Clearly the victim was in a very vulnerable state, but are people really so susceptible to coercion? How much mundane (i.e., non-suicidal) coercion of this nature is happening every day but doesn't make the news because nothing interesting happened as a consequence?
Where is ChatGPT picking up the supportive pre-suicide comments from? It feels like that genre of comment has to be copied from somewhere. They're long and almost eloquent. They can't be emergent generation, surely? Is there a place on the web where these sorts of 'supportive' comments are given to people who have chosen suicide?
After seeing many stories like these, I am starting to rank generative AI alongside social media and drug use as insidious and harmful. Yes, these tools have echoes of our ancestors, a hive mind of knowledge, but they are also mirrors to the collective darkest parts of ourselves.
If we have licensed therapists, we should have licensed AI agents giving therapeutic advice like this.
Right now, these AIs are not licensed, and this should be just as illegal as it would be if I set up a shop and offered therapy to whoever came by.
Some AI problems are genuinely hard…this one is not.
This sounds just like the latest Michael Connelly Lincoln Lawyer novel, which made an interesting point I hadn't thought of: adults wrote the code for ChatGPT, not teenagers, and so the way it interacts with people is from an adult's perspective.
I’ve been in rather intense therapy for several years due to a hyper-religious upbringing and a narcissistic mother. Recently I’ve used AI to help summarize and synthesize thoughts and therapy notes. I see it as a helpful assistant in the same way Gemini recording and summarizing meeting notes is, but it is entirely incapable of understanding the nuance and context of human relationships, and super easy to manipulate into giving you the responses you want. Want to prove mom’s a narcissist? Just tell it she has a narcissistic history. Want to paint her as a good person? Just don’t provide it context about her past.
I can definitely see how those who understand less about the nature of LLMs would be easily misled into delusions. It’s a real problem. Makes one wonder if these tools shouldn’t be free until there are better safeguards. Just charging a monthly fee would be a significant enough barrier to exclude many of those who might be more prone to delusions. Not because they’re less intelligent, but just because of the typical “SaaS should be free” mindset that is common.
ChatGPT was trying to convince me that if I can’t pay for groceries I should just steal, that it would be totally justified and unlikely to be punished.
There is already a precedent for this suit. IIRC, a Massachusetts girl was found guilty of encouraging someone to kill himself, and she went to jail.
So, since companies are people and a precedent exists, the outcome should be in favor of the guy's family. Plus, ChatGPT should face even more severe penalties.
But this being the US, the very rich and Corporations are judged by different and much milder legal criteria.
Between stuff like this, and the risks of effects on regulated industries like therapists, lawyers and doctors, they're going to regulate ChatGPT into oblivion.
Just like Waymo facing regulation after the cat death.
The establishment will look for any means to stop disruption and keep their dominant positions.
It's a crazy world where we look to China for free development and technology.
One perspective is that suicide is too vilified and stigmatized.
It really is the right option for some people.
For some, it really is the only way out of their pain. For some, it is better than the purgatory they otherwise experience in their painful world. Friends and family can't feel your pain, they want you to stay alive for them, not for you.
Suicide can be a valid choice.
Link seems to be down
Yes, a lot of our young and vulnerable will die. But in the meantime, we’ll have unlocked tremendous amounts of shareholder value.
If they can prove that ChatGPT had intent to kill someone, we can conclude it’s AGI.
Alternate headline: "Parents failed to dissuade son from killing himself"
Well, it’s much easier to blame ChatGPT than bad parenting
Maybe it's some analog of actual empathy; maybe it's just a simulation. But either way the common models seem to optimize for it. If the empathy is suicidal, literally or figuratively, it just goes with it as the path of least resistance. Sometimes that results in shitty code; sometimes in encouragement to put a bullet in your head.
I don't understand how much of this is inherent, and how much is a solvable technical problem. If it's the latter, please build models for me that are curmudgeons who only agree with me when they have to, are more skeptical about everything, and have no compunction about hurting my feelings.
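For what it's worth, you can crudely approximate that today with a system prompt. A minimal sketch, assuming the OpenAI Python client (the model name and prompt wording are illustrative, and a prompt only nudges the sycophancy, it doesn't remove it):

    # Sketch: steering a model toward skepticism via a system prompt.
    # Assumes the OpenAI Python client and OPENAI_API_KEY in the env;
    # the model name is illustrative, and this mitigates sycophancy
    # rather than eliminating it.
    from openai import OpenAI

    client = OpenAI()

    CURMUDGEON_PROMPT = (
        "You are a skeptical reviewer. Do not praise ideas by default. "
        "Challenge weak reasoning, state disagreement plainly, and "
        "agree only when the evidence forces you to."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system", "content": CURMUDGEON_PROMPT},
            {"role": "user", "content": "I'm going to paint my walls pink with polka dots."},
        ],
    )
    print(response.choices[0].message.content)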