May 7, 2026
Safety
Connecting you to someone you trust when it matters most.
People use ChatGPT to learn, explore ideas, solve problems, and reflect on personal questions. Sometimes those conversations can involve moments when someone may be struggling or looking for support. Our goal is to design systems that respond thoughtfully to sensitive conversations and encourage people to connect with real-world help when needed.
Today we are starting to roll out Trusted Contact, an optional safety feature in ChatGPT that lets adults nominate someone they trust, such as a friend, family member, or caregiver. That contact may be notified if our automated systems and trained reviewers detect that the enrolled person may have discussed harming themselves in a way that indicates a serious safety concern. Trusted Contact is designed to add another layer of support alongside the localized helplines already available in ChatGPT by helping users connect with a person they trust when they are in crisis.
Trusted Contact builds on parental controls safety notifications, which allow parents or guardians to receive alerts when there are signs of acute distress on a linked teen account. Now we are extending our safety alert options so that anyone over 18 can choose to add someone they trust as their Trusted Contact.
Expert guidance identifies social connection as one of the most important protective factors in reducing suicide risk. Trusted Contact is designed to encourage connection with someone the user already trusts. It does not replace professional care or crisis services; it is one of several layers of safeguards for supporting people in distress. ChatGPT will still encourage users to contact crisis hotlines or emergency services when appropriate.
These serious safety situations are rare, but when they do arise, our systems are designed to support timely review and response. No system is perfect, and a notification to a Trusted Contact may not always reflect exactly what someone is experiencing. Every notification therefore undergoes trained human review before it is sent, and we strive to complete these safety reviews in under one hour.
We developed Trusted Contact with guidance from clinicians, researchers, and organizations that specialize in mental health and suicide prevention. This work is informed by our Global Physicians Network, a network of more than 260 licensed physicians across 60 countries, and our Expert Council on Well-Being and AI. We also worked closely with external organizations, including the American Psychological Association.
In addition to Trusted Contact, ChatGPT has safeguards to help guide sensitive conversations at every stage, and we have continued improving how the system responds to different levels of risk expressed in a conversation.
Trusted Contact is part of OpenAI's broader effort to build AI systems that help people during difficult moments. We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress. Our goal is to ensure that AI systems do not exist in isolation; instead, they should help connect people to the real-world care, relationships, and resources that matter most.