ChatGPT won’t help you break up anymore as OpenAI tweaks rules

Artificial intelligence (AI) leader OpenAI has rolled out a series of updates to ChatGPT, with a major tweak to how the popular chatbot responds to users asking for advice on personal problems.

This comes after reports that the tool was fuelling delusions and psychosis in some users. For personal problems discussed with the chatbot, the company said the tool should now help you think things through, weighing pros and cons, rather than giving you an answer.

The company added that these updates are based on feedback and aim to improve “real-world usefulness over the long term, not just whether you liked the answer in the moment”.

ChatGPT to give no breakup advice, help you think things through…

OpenAI in a blog post acknowledged that “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress”.

It added that the changes would allow AI to “be there when you’re struggling, help you stay in control of your time, and guiding—not deciding—when you face personal challenges”.

“Helping you solve personal challenges. When you ask something like ‘Should I break up with my boyfriend?’ ChatGPT shouldn’t give you an answer,” a statement from the company said, adding: “It should help you think it through—asking questions, weighing pros and cons. New behavior for high-stakes personal decisions is rolling out soon.”

ChatGPT trained to better detect mental, emotional distress, says OpenAI

Beyond personal advice, OpenAI has also tweaked how ChatGPT responds to users who are struggling, saying the chatbot should “respond with grounded honesty”.

On the use of the chatbot for support when struggling, OpenAI acknowledged instances where its “4o model fell short in recognising signs of delusion or emotional dependency”.

“We’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed,” it added.

Further, the chatbot will now also give users gentle reminders and encourage breaks during long sessions. The company added, “We’ll keep tuning when and how they show up so they feel natural and helpful.”

Learning from experts: ‘Evolving from real-world use’

OpenAI added that it is “working closely” with experts to improve how ChatGPT responds in critical moments where users show signs of mental or emotional distress.

  • Medical expertise: OpenAI claims to have worked with more than 90 physicians across over 30 countries, including psychiatrists, pediatricians, and general practitioners, to build custom rubrics for evaluating complex and multi-turn conversations.
  • Research collaboration: The company said it is engaging human-computer-interaction (HCI) researchers and clinicians for feedback on identifying concerning behaviours, refining evaluation methods, and stress-testing product safeguards.
  • Advisory group: It is also convening an advisory group of experts in mental health, youth development, and HCI to “ensure our approach reflects the latest research and best practices”.

OpenAI said that the work is ongoing, and its approach “will keep evolving as we learn from real-world use”.


