ChatGPT's New Safety Tool Lets Users Add a 'Trusted Contact' for Crisis Support
OpenAI has introduced a new safety feature for ChatGPT users. The tool lets adults designate a 'Trusted Contact' to receive alerts if the system detects signs of self-harm. The aim is to bridge the gap between digital conversations and real-world support during moments of crisis.
The feature was shaped with input from mental health experts and OpenAI’s advisory networks. It combines automated checks with human oversight to spot concerning language, even when users don’t explicitly ask for help. Users aged 18 or older (19 in South Korea) can designate one adult as their Trusted Contact through ChatGPT’s settings. When the system flags potentially harmful content, it first notifies the user and offers conversation prompts that encourage reaching out to the chosen contact.
In more urgent cases, ChatGPT may directly alert the Trusted Contact. The notification confirms that self-harm was discussed but does not share chat details, protecting user privacy. OpenAI stresses that the goal is to prompt real-life connections rather than leave users isolated online.
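The tiered flow described above (flag concerning language, nudge the user first, escalate only in urgent cases) maps naturally onto a small decision ladder. The sketch below is purely illustrative: OpenAI has not published how the feature is implemented, and every name here (RiskLevel, TrustedContact, handle_assessment) is a hypothetical stand-in for how such an escalation path might be organized.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskLevel(Enum):
    NONE = 0        # no concerning language detected
    CONCERNING = 1  # flagged language: nudge the user first
    URGENT = 2      # acute risk: the Trusted Contact may be alerted directly

@dataclass
class TrustedContact:
    name: str
    channel: str  # e.g. a phone number or email on file

def handle_assessment(level: RiskLevel,
                      contact: Optional[TrustedContact]) -> list[str]:
    """Return the actions a tiered safety flow might take for a risk level."""
    actions: list[str] = []
    if level is RiskLevel.CONCERNING:
        # First tier: notify the user and suggest reaching out themselves.
        actions += ["notify_user", "offer_conversation_prompts"]
    elif level is RiskLevel.URGENT and contact is not None:
        # Second tier: alert the contact directly, sharing only the fact
        # that self-harm was discussed -- never the chat content itself.
        actions += [f"alert_contact:{contact.name}",
                    "send_privacy_preserving_notice"]
    return actions

# Example: an urgent assessment for a user who designated a contact.
contact = TrustedContact(name="Jamie", channel="sms")
print(handle_assessment(RiskLevel.URGENT, contact))
# ['alert_contact:Jamie', 'send_privacy_preserving_notice']
```

The design point the article emphasizes shows up in the second tier: the contact-facing notification carries only the fact that a flag occurred, never transcript details, which is how the feature reconciles escalation with privacy.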
Experts have weighed in on the feature’s potential. Dr. Arthur Evans, CEO of the American Psychological Association, highlights that social bonds act as a key defense during emotional distress. Meanwhile, Dr. Munmun De Choudhury, a professor at Georgia Tech, notes that AI can help create psychological safety by guiding users toward human support.

The Trusted Contact system adds a layer of protection for vulnerable users, relying on both technology and human intervention to detect risks and encourage offline help. OpenAI continues to refine the tool with feedback from medical professionals and well-being specialists.