OpenAI Adds Trusted Contact Feature To Detect Self-Harm Risks

OpenAI has launched a new safety feature, dubbed "Trusted Contact," designed to add a layer of human intervention when users express thoughts of self-harm while interacting with the chatbot. The tool lets users designate a specific person to be notified if the model detects language indicating a mental health crisis. The move marks a significant shift in how AI platforms handle high-risk interactions, bridging the gap between digital assistance and real-world support systems.

The feature works by sending the pre-selected contact an alert, along with resources and information, if the AI detects a pattern of concerning behavior. While OpenAI has long provided automated links to helplines and crisis centers, this update acknowledges that immediate personal intervention can often be more effective during a crisis. It aims to keep users from falling into isolation when technology becomes their primary conversational outlet.
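OpenAI has not published how the detection or notification pipeline is implemented, but a minimal sketch can illustrate the general shape of such an escalation flow as described above: score each message for risk, escalate only on a sustained pattern, and then notify the designated contact with resources. Everything in the sketch is an assumption for illustration; the `assess_risk` classifier stub, the `TrustedContact` record, the `notify_contact` delivery step, and the thresholds are hypothetical, not OpenAI's actual system.

```python
# Hypothetical sketch of a trusted-contact escalation flow.
# OpenAI has not disclosed implementation details; all names and
# thresholds here are illustrative assumptions, not its real system.

from collections import deque
from dataclasses import dataclass

RISK_THRESHOLD = 0.8   # assumed per-message risk score cutoff
WINDOW_SIZE = 5        # look for a pattern, not a single message
PATTERN_MINIMUM = 3    # flagged messages in window before escalating


@dataclass
class TrustedContact:
    name: str
    channel: str  # e.g. an email address or phone number


def assess_risk(message: str) -> float:
    """Placeholder for a self-harm risk classifier (0.0 = safe, 1.0 = crisis)."""
    crisis_terms = ("hurt myself", "end it all", "no reason to live")
    return 1.0 if any(term in message.lower() for term in crisis_terms) else 0.0


def notify_contact(contact: TrustedContact) -> None:
    """Placeholder delivery step: sends resources, not the chat transcript."""
    print(f"Alerting {contact.name} via {contact.channel} with crisis resources.")


class EscalationMonitor:
    """Tracks recent messages and escalates only on a sustained pattern."""

    def __init__(self, contact: TrustedContact):
        self.contact = contact
        self.recent_flags = deque(maxlen=WINDOW_SIZE)

    def observe(self, message: str) -> None:
        self.recent_flags.append(assess_risk(message) >= RISK_THRESHOLD)
        if sum(self.recent_flags) >= PATTERN_MINIMUM:
            notify_contact(self.contact)
            self.recent_flags.clear()  # avoid repeated alerts for one episode
```

The sliding window mirrors the article's "pattern of concerning behavior" framing: escalating on a sustained signal rather than a single ambiguous message reduces false alarms, which matters given the privacy stakes of contacting a third party.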

This development follows growing scrutiny over the psychological impact of generative AI and the responsibility companies have toward their users' well-being. By integrating a "Trusted Contact," OpenAI is navigating a complex ethical landscape regarding user privacy versus safety. Critics and advocates alike will be watching to see how the company balances the sensitivity of these alerts with the necessity of intervention.

As TechCrunch reported, the rollout of this safeguard highlights the evolving nature of AI safety beyond filtering bias or misinformation. As people form deeper habits around using chatbots for emotional processing, the industry faces growing pressure to implement guardrails that address human vulnerability.