A Teen in Love with a Chatbot Killed Himself. Can It Be Held Responsible?

by johnny313 on 10/24/2025, 5:12 PM with 4 comments

by duxup on 10/24/2025, 5:18 PM

I feel like there's so much that leads to a suicide, and so little we know about how much any given thing in someone's final days "causes" it... I have trouble believing an AI company should be in any way responsible.

The article in many ways demonstrates this: people are deep, and there's a lot going on inside them. Is a chatbot responsible for the outcome? I don't think so.

by tromp on 10/24/2025, 5:13 PM

https://archive.is/QUPwI

by Marshferm on 10/24/2025, 5:28 PM

I'm a staunch anti-AI tech developer. Our perspective is that words alone do not constitute beings. Words, at a minimum, require the depth of the mental states and motor actions that create vocalization. Even voice-trained AI lacks the idiosyncratic and dynamic meshing of memory, sense, and emotion that constitutes communication. These hidden yet aurally noticeable qualities are what create connections of trust (or the lack of it). A subtle quiver or halted breath can warn us against predators (and these bots are clearly predators), while assured and calm breaths between words can give us confidence and trust. AI removes this perceptible and measurable aspect of speech. Using words as stand-ins for interaction is both damaging and inherently unethical.

Much of the latest science, particularly neurobiology, questions whether words alone are either proof of consciousness or acceptable criteria for interaction. A human must be producing these words; otherwise there is no emotional essence to them.