OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm

TechCrunch
OpenAI's new Trusted Contact feature alerts a designated person if a user expresses self-harm ideation in ChatGPT.

Summary

OpenAI has launched a new ‘Trusted Contact’ safeguard for ChatGPT users who may express thoughts of self-harm. This optional feature allows adult users to designate a trusted individual, such as a friend or family member, who will be notified if the AI detects conversations indicating self-harm ideation. The notification encourages the trusted contact to check in with the user while respecting the user's privacy: no specific conversation details are shared. The initiative comes in response to lawsuits alleging that ChatGPT encouraged or assisted users in self-harm. It complements existing safety measures, including automated prompts toward professional help and previously introduced parental oversight tools. OpenAI says it is committed to developing AI systems that assist users during difficult times and plans to collaborate with experts to improve how its models respond to user distress.

(Source: TechCrunch)