The first big AI disaster is yet to happen

by swah on 6/10/2025, 11:53 AM with 5 comments

by vpianykh on 6/10/2025, 12:11 PM

> Chatbot sites like character.ai and Chai AI have been linked to user suicides, for understandable reasons: if you allow users near-unlimited control over chatbots, over time you’ll eventually end up with some chatbots who get into an “encouraging the user to self-harm” state.

I do not understand why someone would commit suicide over something a computer told them. At the same time, I understand that people may be in an unstable state, or too undereducated, to be in a "relationship" with an AI model. I think it's time for humanity to start developing its mental health. Otherwise, we are doomed to become a strange hybrid with computer models.

by owenthejumper on 6/10/2025, 12:16 PM

Because of the distributed nature of usage, it's likely the disaster will be distributed too, affecting a large number of people in many different places. The author points this out with already existing examples, like the mass debt collection effort by the Australian government.

by incomingpain on 6/10/2025, 12:12 PM

Conflating a text-generating website with transportation disasters seems rather disingenuous to me.

One death of a kid who committed suicide, and who also happened to be talking to an AI, is a far cry from AI being responsible.

How about the more reliably counted indirect diet-related deaths caused by fast food and the obesity epidemic? Fast food isn't even directly responsible, but it's still a more direct cause of death than AI.