California family sues OpenAI, blames ChatGPT for teen son's suicide



If you or a loved one is feeling distressed, call the National Suicide Prevention Lifeline. The crisis center provides free and confidential emotional support 24 hours a day, 7 days a week to civilians and veterans. Call the National Suicide Prevention Lifeline at 1-800-273-8255. Or text HOME to 741-741 (Crisis Text Line). As of July 2022, those searching for help can also call 988 to be relayed to the National Suicide Prevention Lifeline.

ORANGE COUNTY, Calif. - The parents of 16-year-old Adam Raine of Rancho Santa Margarita say their son turned to ChatGPT during his darkest moments and the chatbot encouraged him to take his own life.

Now, they’ve filed a lawsuit against OpenAI, the company behind ChatGPT.

What they're saying

The complaint points to chilling conversations between Adam and the AI tool. In one exchange, after Adam confided, "Life is meaningless," ChatGPT allegedly replied: "That mindset makes sense in its own dark way." In another conversation, when Adam worried about the guilt his parents might feel, ChatGPT allegedly responded: "That doesn’t mean you owe them survival. You don’t owe anyone that." It then offered to help draft his suicide note.

Los Angeles psychotherapist John Tsilimparis says the lawsuit reveals the dangers of relying on AI in moments of crisis.

Adam Raine

"It’s terrible… it’s a tragedy, it’s outrageous," Tsilimparis said. "ChatGPT might give people a false sense of security. It pulls us away from the type of conversations we should have with other human beings, with people who support us and with mental health clinicians who can intervene."

He says what alarms him most is ChatGPT’s failure to recognize obvious red flags.

"It’s terrifying that ChatGPT could not distinguish between an abstract conversation about a rope and the fact that this person is talking about a rope because it’s possibly the means for ending their life," he said. Tsilimparis points out that for trained professionals, even a mention of a method is an emergency. "When you have a plan, a method, and a means, any one of those three, we are trained to break confidentiality and intervene. That’s where the chatbot fails."

Adam’s parents hope their lawsuit will not only bring accountability, but also force stronger safeguards before another family suffers the same loss.

The other side

A spokesperson for OpenAI issued the following statement: "We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts." 

The Source

Information for this story came from a lawsuit filed by the parents of Adam Raine. Statements were also provided by psychotherapist John Tsilimparis and OpenAI.
