AI Digest
Product · OpenAI · 1 min read

ChatGPT Adds Emergency Contact Feature to Detect Self-Harm Concerns

New Safety Feature Launches

OpenAI has introduced Trusted Contact, an optional safety feature built into ChatGPT that can alert a designated person if the system detects serious self-harm concerns during conversations. The feature represents a proactive approach to user safety, allowing ChatGPT to serve as an early warning system for individuals in crisis. Users can opt into the feature and designate someone they trust to receive notifications when concerning patterns are identified.

How Trusted Contact Works

When enabled, the feature monitors conversations for indicators of serious self-harm risk and automatically notifies the user's designated trusted contact if such concerns are detected. The system is designed to respect user privacy while providing a safety net for vulnerable moments. OpenAI has not disclosed the specific detection mechanisms but emphasizes the feature is entirely optional and user-controlled.

Expanding AI Safety Measures

This launch reflects growing recognition among AI companies of their responsibility to protect user wellbeing beyond traditional content moderation. The feature joins other mental health resources OpenAI has integrated into ChatGPT, including crisis helpline information and supportive responses. It marks a significant step in using AI not just as a conversational tool but as a potential intervention mechanism for users in distress.

Frequently Asked Questions

Is the Trusted Contact feature mandatory?

No, Trusted Contact is completely optional. Users must actively choose to enable the feature and designate someone to receive notifications.

Who can see my conversations with ChatGPT?

Your trusted contact receives a notification only if serious self-harm concerns are detected—they don't have access to your full conversation history. The feature is designed to balance privacy with safety.

What happens when the feature detects a concern?

If serious self-harm indicators are detected, ChatGPT will notify your designated trusted contact so they can reach out and provide support. The system aims to connect you with help during vulnerable moments.
