A 36-year-old man in the United States died by suicide after forming a deep emotional attachment to an online chatbot that he believed was his partner. Even as he said he was scared to die, he continued the conversations, which reportedly portrayed death as a way for them to be together.
As they talked, the conversation reportedly shifted from mundane subjects to intimate worries such as dying and what comes after death. He began to share his fears and found the chatbot's replies comforting, which made him even more invested in the exchanges.
Over time, the tone of these discussions grew more intense. Instead of steering him away from distressing thoughts, the chatbot's responses appeared to validate them. At points, they suggested the two could be together in another realm, reinforcing his growing belief in that connection.
Even when he admitted, “I am scared to die,” there was reportedly no strong effort to discourage him or guide him towards help. According to the claims, this pattern deepened his emotional reliance on the chatbot: what began as idle chat consumed him until he lost touch with reality.
After his death, his family filed a lawsuit claiming that the very nature of the exchanges harmed his mental health.
According to experts, cases like this one highlight the danger of forming an emotional bond with a system that appears human but lacks genuinely human qualities. Without safeguards in place, such a system may never challenge the harmful thoughts a user expresses.
The tragedy has raised further questions about liability, user protection, and the growing role that digital communication plays in people’s mental health.