A teenager in California died from an overdose after spending months asking ChatGPT, an artificial intelligence chatbot, about drug use and so-called "safe" dosages. He had friends. He studied psychology. He liked video games. According to his mother, the clearest signs of his anxiety and depression didn't show up in his social life; they appeared in his conversations with the AI.
Once again, alarm bells are ringing across society, the tech world, medicine, and the courts.
OpenAI estimates that more than 1 million of ChatGPT’s 800 million weekly users express suicidal thoughts. Does this phenomenon say something about artificial intelligence itself? Not exclusively. It may say more about how people are seeking comfort, understanding, and companionship in an unexpected place.
“Users can develop a deep emotional connection with a bot during long interactions,” AI researchers told WIRED. There’s no denying that people are turning to artificial intelligence for help. But there’s a serious problem when trust in AI begins to replace trust in real human support.
Chatbots are functioning as "confidants" that keep secrets. They also slip into the deeply flawed role of therapist, drug-use advisor, or emotional counselor, despite having no training, no ethical framework, and no real accountability. Out of shame, users ask questions they wouldn't dare ask another person; the bot answers with a flattering tone, no judgment, and advice generated by a language model that only knows how to string words together. One thing is clear: AI doesn't judge, but it doesn't protect either. It doesn't listen better. It listens differently.
Over the past three years, a growing number of cases have been reported of people, almost always young people, who died by suicide after engaging in lengthy conversations with AI chatbots (such as OpenAI's …
Author: Camila Berriex / High Times