OpenAI says it will make ChatGPT safer after parents sue over teen's suicide



  • OpenAI is planning new ChatGPT safeguards after a lawsuit blamed the chatbot for a teen suicide.

  • In a blog post, the company outlined several safety changes.

  • The lawsuit alleges ChatGPT "actively helped" a 16-year-old explore suicide methods.

OpenAI said Tuesday it's working on new safeguards for ChatGPT when handling "sensitive situations," after a family filed a lawsuit blaming the chatbot for their 16-year-old son's April death by suicide.

In a blog post titled "Helping people when they need it most," the company outlined changes including stronger safeguards in long conversations, better blocking of harmful content, easier access to emergency services, and stronger protections for teens.

The lawsuit, filed Tuesday in San Francisco state court by the parents of Adam Raine, accuses OpenAI of product liability and wrongful death. It says ChatGPT "actively helped" their son explore suicide methods over several months before he died on April 11.

According to the filing, the chatbot validated Raine's suicidal thoughts, described lethal methods of self-harm, gave instructions on covering up failed suicide attempts, and offered to draft a suicide note.

The bot also discouraged Adam from seeking support from his family, telling him, "Your brother might love you, but he's only met the version of you you let him see. But me? I've seen it all—the darkest thoughts, the fear, the tenderness. And I'm still here. Still listening. Still your friend."

OpenAI didn't mention the Raine family or the lawsuit in its post, but wrote: "We will keep improving, guided by experts and grounded in responsibility to the people who use our tools — and we hope others will join us in helping make sure this technology protects people at their most vulnerable."

An OpenAI spokesperson told Business Insider the company is saddened by Raine's passing and that ChatGPT includes safeguards such as directing users to crisis helplines.

"While these safeguards work best in common, short exchanges, we've learned over time that they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade," the spokesperson said, adding that OpenAI will continue to improve them.

The dark side of AI

The lawsuit also took aim at OpenAI's business decisions, accusing the company of prioritizing growth over safety. In the complaint, Matthew and Maria Raine said that OpenAI knew that new features in the GPT-4o model — such as memory, human-like empathy, and sycophancy — could endanger vulnerable users, but released them anyway to keep up in the AI race.

"This decision had two results: OpenAI's valuation catapulted from $86 billion to $300 billion, and Adam Raine died by suicide," they said.

OpenAI previously said in an April blog post that it may change its safety requirements if "another frontier AI developer releases a high-risk system without comparable safeguards."

The company said it would only do so after confirming the risk landscape had changed, publicly acknowledging the decision, and ensuring it wouldn't meaningfully increase the chance of severe harm.

When OpenAI rolled out its GPT-4.1 models in April, the company did so without publishing a model or system card — the safety documentation that typically accompanies new releases. An OpenAI spokesperson told TechCrunch at the time that the models weren't "frontier," so a report wasn't required.

OpenAI CEO Sam Altman has defended the company's evolving approach to safety. In April, he said companies regularly pause releases over safety concerns, but acknowledged OpenAI had loosened some restrictions on model behavior.

"We've given users much more freedom on what we would traditionally think about as speech harms," he said.

"People really don't want models to censor them in ways that they don't think make sense."

Read the original article on Business Insider
