An update on our mental health-related work

OpenAI
OpenAI is enhancing ChatGPT safety features, introducing a trusted contact option, and addressing ongoing mental health-related litigation.

Summary

OpenAI provided an update on its ongoing safety work related to mental health, noting that more than 900 million people use ChatGPT weekly. Building on the parental controls introduced in September 2025, the company plans to soon introduce a trusted contact feature that lets adult users designate someone to receive support notifications. OpenAI is also improving its models' ability to detect emotional distress, using new evaluation methods that simulate extended mental health conversations.

Separately, the company acknowledged that a court has consolidated several mental health-related cases involving ChatGPT into a single proceeding in California, with further cases expected. OpenAI committed to handling these cases with care, transparency, and respect for the individuals involved, emphasizing the need to let the facts emerge through a complex legal process. Independent of the litigation, the company remains focused on improving ChatGPT's training to recognize distress, de-escalate sensitive conversations, and guide users toward real-world support, in collaboration with mental health experts.

(Source: OpenAI)