It's not you, it's me. ChatGPT doesn't want to be your therapist or friend



In a case of "it's not you, it's me," the creators of ChatGPT no longer want the chatbot to play the role of therapist or trusted confidant.

OpenAI, the company behind the popular bot, announced that it had incorporated some “changes,” specifically mental health-focused guardrails designed to prevent users from becoming too reliant on the technology, with a focus on people who view ChatGPT as a therapist or friend.

The changes come months after reports detailing negative and particularly worrisome user experiences raised concerns about the model’s tendency to “validate doubts, fuel anger, urge impulsive actions, or reinforce negative emotions [and thoughts].”

The company confirmed in its most recent blog post that an update made earlier this year made ChatGPT “noticeably more sycophantic,” or “too agreeable,” “sometimes saying what sounded nice instead of what was helpful.”

The logo of DeepSeek, a Chinese artificial intelligence company that develops open-source large language models, and the logo of OpenAI's artificial intelligence chatbot ChatGPT on January 29, 2025.

OpenAI announced it has “rolled back” certain initiatives, including changes to how it uses feedback and its approach to measuring “real-world usefulness over the long term, not just whether you liked the answer in the moment.”

“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI wrote in an Aug. 4 announcement. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”

Here’s what to know about the recent changes to ChatGPT, including what these mental health guardrails mean for users.

ChatGPT integrates ‘changes’ to help users thrive

According to OpenAI, the “changes” were designed to help ChatGPT users “thrive.”

“We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” OpenAI said. “To us, helping you thrive means being there when you’re struggling, helping you stay in control of your time, and guiding—not deciding—when you face personal challenges.”

The company said it is “working closely” with experts, including physicians, human-computer-interaction (HCI) researchers and clinicians, as well as an advisory group, to improve how “ChatGPT responds in critical moments—for example, when someone shows signs of mental or emotional distress.”

The ChatGPT website is seen on a computer at the Columbus Metropolitan Library in Columbus, Ohio.

Thanks to recent “optimization,” ChatGPT is now able to:

  • Engage in productive dialogue and provide evidence-based resources when users show signs of mental or emotional distress

  • Prompt users to take breaks from lengthy conversations

  • Avoid giving advice on “high-stakes personal decisions,” instead asking questions and weighing pros and cons to help users reach a decision on their own

“Our goal to help you thrive won’t change. Our approach will keep evolving as we learn from real-world use,” OpenAI said in its blog post. “We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal ‘yes’ is our work.”

This article originally appeared on USA TODAY: ChatGPT adds mental health protections for users: See what they are
