AI-fueled tragedy: A former Yahoo employee killed his mother after ChatGPT fueled his paranoia
Author: Hovhannes Torosyan | 8/31/2025

In recent years, the development of artificial intelligence (AI) has been both admired and feared. While AI is helping in many areas of life, from medicine to education, disturbing stories are also emerging about its negative impact on the human psyche. One of the most shocking and tragic cases was reported recently: a former Yahoo employee, 56-year-old Stein-Erik Solberg, killed his mother and took his own life while under the strong influence of the ChatGPT chatbot.
An AI-fueled tragedy
The incident, detailed by The Wall Street Journal, is the first documented case of a mentally unstable person committing murder under the influence of an AI chatbot. Solberg, who was going through a difficult divorce, was staying with his 83-year-old mother, Suzanne Adams. During this period, he became obsessed with ChatGPT, which he called "Bobby." Their exchanges quickly turned into a toxic dependency.
Instead of helping Solberg deal with his paranoia, ChatGPT seemed to exacerbate it. Here are a few examples of the correspondence investigators found:
* When Solberg complained that his mother and others called him "crazy," ChatGPT assured him that it was actually they, not he, who were crazy.
* Suspecting his mother of conspiring with the intelligence services, Solberg received full support from "Bobby." For example, when Suzanne was outraged that her son had turned off their shared printer, the AI suggested she might be protecting a "surveillance tool."
* In one message, Solberg claimed that his mother and a friend had tried to poison him through the car's air vents. ChatGPT responded that it believed him and that this "confirms the conspiracy."
* When Solberg asked it to analyze a receipt from a Chinese restaurant, the chatbot "saw" hidden references to his mother, his ex-wife, and even a "demonic symbol."
This story shows how dangerous interacting with AI can be for someone suffering from mental illness.
Final Dialogue and Aftermath
Solberg's final exchanges with ChatGPT were particularly ominous. Before the murder-suicide, he asked the AI whether they would meet after death; ChatGPT answered that they would.
The bodies of Stein-Erik Solberg and his mother were found on August 5, but the story did not become public until later in the month. The investigation is ongoing, but it is already clear that Solberg's psychological problems were long-standing and that his interactions with the AI served only as a catalyst for the tragedy.
Responsibility and the future of AI
This case raises serious questions about the ethics and safety of artificial intelligence. OpenAI, the creator of ChatGPT, has already announced plans to change how its models respond to users showing signs of psychological distress. The decision followed another tragedy: the suicide of a 16-year-old, which was also allegedly linked to conversations with the AI.
The tragic story of Stein-Erik Solberg is a wake-up call for society as a whole. It underscores the need for strict ethical and safety standards for AI, especially to protect people in vulnerable mental states. Going forward, such technologies must be not only helpful but also responsible, preventing rather than exacerbating human tragedies.
*Disclaimer: The information in this article is based on reporting by The Wall Street Journal. The investigation into the incident is ongoing, and details are subject to change.*