OpenAI Enlists Global Experts to Make ChatGPT a Safer Conversationalist for Those in Distress
SAN FRANCISCO – In a significant step to address the complex role of AI in society, OpenAI has assembled a worldwide network of more than 170 mental health professionals to guide its ChatGPT model in responding more safely and empathetically to users showing signs of psychological distress.
The initiative, which includes psychologists, psychiatrists, and primary care physicians from over 60 countries, was launched to refine the AI’s handling of sensitive topics like mania, psychosis, and suicidal thoughts. According to the company, the expert guidance has already led to a dramatic 65–80% reduction in responses that violate OpenAI’s safety protocols for such conversations.
The move comes in response to internal data revealing the scale of the need. Of ChatGPT’s estimated 800 million weekly users, approximately 0.07%, representing hundreds of thousands of people, may display signs of mental health struggles. In addition, roughly 0.15% of conversations may contain indicators of suicidal intent, underscoring how important it is for the AI to respond responsibly in these moments.
“These figures aren’t just statistics; they represent individuals in moments of vulnerability,” an OpenAI spokesperson stated. “Our goal is to ensure our technology avoids causing harm and, where possible, guides people toward appropriate help.”
The collaboration has resulted in updated response guidelines for the model. ChatGPT is now specifically directed to support users’ real-world relationships with professionals and loved ones, rather than positioning itself as a substitute. It has also been trained to avoid affirming ungrounded or delusional beliefs and to pay closer attention to subtle, indirect signs of self-harm or severe mental crisis.
The global advisory team has been instrumental in refining how ChatGPT encourages users to seek professional help, while also improving the model’s internal systems for detecting signals of distress.
Looking ahead, OpenAI plans to expand its safety evaluation metrics for future models to cover new categories, such as emotional over-reliance on the AI and non-suicidal mental health emergencies, with the aim of keeping user well-being a core priority as the technology develops.

