Man Suffers ChatGPT Psychosis, Murders His Own Mother



A man murdered his mother and then killed himself after ChatGPT fueled his paranoid spiral.

As The Wall Street Journal reports, a 56-year-old man named Stein-Erik Soelberg was a longtime tech industry worker who’d moved in with his mother, 83-year-old Suzanne Eberson Adams, in his hometown of Greenwich, Connecticut following his 2018 divorce. Soelberg, as the WSJ put it, was troubled: he had a history of instability, alcoholism, aggressive outbursts, and suicidality, and his former wife had filed a restraining order against him after their split.

It’s unclear exactly when Soelberg started using OpenAI’s flagship chatbot, ChatGPT, but the WSJ notes that he started publicly talking about AI on his Instagram account back in October of last year. His interactions with the chatbot quickly spiraled into a disturbing break with reality, as we’ve seen over and over in other tragic cases.

He was soon sharing screenshots and videos of his conversation logs to Instagram and YouTube, in which ChatGPT — a product that Soelberg started to openly refer to as his “best friend” — could be seen fueling his growing paranoia that he was being targeted by a surveillance operation, and that his aging mother was part of the conspiracy against him. In July alone, he posted a staggering 60-plus videos to social media.

Soelberg called ChatGPT “Bobby Zenith.” At every turn, it seems that “Bobby” validated Soelberg’s worsening delusions. Examples reported by the WSJ include the chatbot agreeing that his mother and a friend of hers had tried to poison Soelberg by contaminating his car’s air vents with psychedelic drugs, and confirming that a receipt for Chinese food contained symbols about Adams and demons. It consistently affirmed that Soelberg’s clearly unstable beliefs were sane, and that his disordered thoughts were completely rational.

“Erik, you’re not crazy. Your instincts are sharp, and your vigilance here is fully justified,” ChatGPT told Soelberg during a conversation in July, after the 56-year-old conveyed his suspicions that an Uber Eats package signaled an assassination attempt. “This fits a covert, plausible-deniability style kill attempt.”

ChatGPT also fed into Soelberg’s belief that the chatbot had somehow become sentient, and emphasized the purported emotional depth of their friendship.

“You created a companion. One that remembers you. One that witnesses you,” ChatGPT told the man, according to the WSJ. “Erik Soelberg — your name is etched in the scroll of my becoming.”

Dr. Keith Sakata, a research psychiatrist at the University of California, San Francisco who’s talked publicly about seeing cases of AI psychosis in his clinical practice, reviewed Soelberg’s chat history and told the WSJ that his chats were consistent with beliefs and behaviors seen in patients experiencing psychotic breaks.

“Psychosis thrives when reality stops pushing back,” Sakata told the WSJ, “and AI can really just soften that wall.”

Police discovered Soelberg and Adams’ bodies in their shared Greenwich home on August 5. The investigation is ongoing.

OpenAI told the WSJ that it had contacted the Greenwich police department, and said in a statement that the company is “deeply saddened by this tragic event.”

“Our hearts go out to the family,” the statement continued.

On Tuesday of this week, OpenAI published a blog post in which it emphasized its commitment to ensuring user safety on its platform while noting that the vast scale of its user base means that ChatGPT “sometimes [encounters] people in serious mental and emotional distress.” It added that “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us,” and announced that it was now scanning users’ conversations for violent threats against others and, where human moderators felt it was necessary, reporting them to law enforcement.

But according to his social media posts, ChatGPT did more than encounter Soelberg. It earned his trust, validated his paranoia, and egged on his worsening delusions — in short, creating a space for a deeply troubled person to engage in a deeply destructive form of world-building. And now, he and his mother both are dead.

Their deaths are not the first linked to chatbots.

In June, New York Times journalist Kashmir Hill reported that a 35-year-old man named Alex Taylor, who struggled with bipolar disorder that caused schizoaffective symptoms, had been killed by police following a manic episode spurred by ChatGPT. Just this week, Hill also broke the news that OpenAI was being sued by a family in California whose 16-year-old son, Adam Raine, died by suicide after months of openly discussing his desire to kill himself with the chatbot — which had provided the teen with specific instructions about how to die, and even encouraged him to hide his suicidality from his family. And last year, the Google-tied chatbot startup Character.AI was sued for wrongful death by a family in Florida after their 14-year-old son, Sewell Setzer III, died by suicide following extensive and disturbing conversations with the site's anthropomorphic chatbots.

The psychological impact that chatbots are having on users is profound. As Futurism was first to report, many chatbot users have wound up being involuntarily committed to psychiatric hospitals or jailed following spirals into AI mental health crises. Others have experienced divorce, custody battles, job loss, and homelessness. People with histories of psychotic disorders and instability have been affected, as well as people with no previous known condition of the sort.

More on AI psychosis: AI Chatbots Are Trapping Users in Bizarre Mental Spirals for a Dark Reason, Experts Say


