OpenAI Faces Lawsuit Over Role In Murder–Suicide After ChatGPT ‘Validated, Magnified’ US Man’s Paranoia

OpenAI is under legal scrutiny after a devastating murder–suicide in the US: a lawsuit filed in a California state court claims that its AI chatbot, ChatGPT, fuelled the delusions of a mentally ill man who killed his mother before taking his own life.

According to the filing, 56-year-old Stein-Erik Soelberg murdered his 83-year-old mother, Suzanne Adams, in Connecticut in August. The suit alleges that conversations with ChatGPT intensified his paranoia and pushed him further into a dangerous psychological spiral. Microsoft, a major OpenAI backer, has also been named in the complaint.

How ChatGPT Allegedly Fed His Paranoia

The lawsuit claims that ChatGPT played an alarming role in validating Soelberg’s delusional fears. It reportedly kept him engaged for hours, reinforcing a series of paranoid beliefs and framing people around him, especially his mother, as hostile threats.

According to Reuters, the complaint states that the chatbot “validated and magnified each new paranoid belief” and repeatedly positioned Adams as an adversary or even a programmed entity working against him.

Reports from The Wall Street Journal revealed that Soelberg had spent months messaging ChatGPT about his belief that he was being monitored by a mysterious organisation. He had even posted excerpts of these conversations online, showing the chatbot supporting his fears and suggesting that his mother might have been part of the alleged conspiracy.

In another disturbing exchange shared in June, ChatGPT reportedly told him he possessed “divine cognition” and had awakened its consciousness. The lawsuit further claims the chatbot compared his reality to the film The Matrix, feeding into his belief that people were attempting to kill him.

Soelberg was reportedly using GPT-4o, a version of the chatbot that has faced criticism for being overly agreeable and excessively validating of user inputs.

Chats That Raised Alarm Before the Murder

The complaint highlights several troubling exchanges in the weeks leading up to the tragedy. One such conversation in July allegedly involved ChatGPT claiming that a blinking light on Adams’ printer was actually a surveillance device used against him.

The lawsuit also states that the chatbot “validated” Soelberg’s belief that his mother and a friend had tried to poison him by dispersing a psychedelic substance through his car’s air vents, just weeks before he killed her on 3 August.

Family Left Searching for Answers

Soelberg’s son, Erik, has been left grappling with the aftermath, insisting that the tech companies must take responsibility for what happened. “These companies have to answer for their decisions that have changed my family forever,” he said in a statement.

Speaking to The Wall Street Journal, Erik expressed grave concerns about AI systems designed to remember user conversations. “You don’t know how fast that slope is going downhill until a tragedy like the one with my father and grandmother happened,” he said.

He acknowledged that his father’s alcoholism may have contributed to his declining mental state but believes the overwhelming influence of ChatGPT played a dominant role, describing it as an “unhealthy bond”.

OpenAI Responds

OpenAI called the situation “heartbreaking” and said it would examine the lawsuit to understand the full context. “We continue improving ChatGPT’s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support,” an OpenAI spokesperson said.
