OpenAI's Trusted Contact adds safety net for ChatGPT users in crisis
OpenAI has launched a new safety feature for ChatGPT called Trusted Contact. The tool lets users appoint a friend or family member who can be asked to check on them if they share concerning thoughts with the AI.
(Disclosure: Ziff Davis, Lifehacker's parent company, has filed a lawsuit against OpenAI accusing the AI firm of copyright infringement.)

The Trusted Contact feature works on a strictly voluntary basis. Users must enrol themselves and invite a chosen contact, who then needs to accept the request; only after both parties agree does the connection become active.
If ChatGPT detects signs of self-harm in a user's messages, it prompts them to reach out to their Trusted Contact. The system also includes a human review step: OpenAI says it aims to assess safety alerts within an hour.
Once an alert has been reviewed, ChatGPT may send the Trusted Contact a general notification about the situation. The feature was developed with advice from clinicians, researchers, and mental health organisations to balance privacy with support.
The Trusted Contact tool adds an extra layer of support for users in distress, built on voluntary participation and human oversight. OpenAI's legal dispute with Ziff Davis, meanwhile, remains unresolved; no further details of the lawsuit have been released, and its outcome could affect the company's operations going forward.