OpenAI Inc.

05/07/2026 | News release | Distributed by Public on 05/07/2026 12:10

Introducing Trusted Contact in ChatGPT

May 7, 2026

Safety


Connecting you to someone you trust when it matters most.


People use ChatGPT to learn, explore ideas, solve problems, and reflect on personal questions. Sometimes those conversations can involve moments when someone may be struggling or looking for support. Our goal is to design systems that respond thoughtfully to sensitive conversations and encourage people to connect with real-world help when needed.

Today we are starting to roll out Trusted Contact, an optional safety feature in ChatGPT that allows adults to nominate someone they trust, such as a friend, family member, or caregiver, who may be notified if our automated systems and trained reviewers detect that the enrolled person may have discussed harming themselves in a way that indicates a serious safety concern. Trusted Contact is designed to offer another layer of support alongside the localized helplines already available in ChatGPT, helping users connect with a person they trust when they are in crisis.

Trusted Contact builds on parental controls safety notifications, which allow parents or guardians to receive alerts when there are signs of acute distress on a linked teen account. Now we are extending our safety alert options so that anyone over 18 can choose to add someone they trust as their Trusted Contact.

How Trusted Contact works

Expert guidance identifies social connection as one of the most important protective factors to reduce suicide risk. Trusted Contact is designed to encourage connection with someone the user already trusts. It does not replace professional care or crisis services, and is one of several layers of safeguards to support people in distress. ChatGPT will still encourage users to contact crisis hotlines or emergency services when appropriate.

"Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress. Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most."
-Dr. Arthur Evans, Chief Executive Officer of the American Psychological Association

Trusted Contact follows a few steps:

  • Users can add one adult (18+ globally, or 19+ in South Korea) as their Trusted Contact from their ChatGPT settings.
  • The Trusted Contact will receive an invitation explaining their role and must accept the invitation within one week in order for the feature to become active. If the Trusted Contact declines, the user can choose to add a different adult.
  • If our automated monitoring systems detect the user may be talking about self-harm in a way that indicates a serious safety concern, ChatGPT lets the user know that we may notify their Trusted Contact, and encourages the user to reach out to their Trusted Contact with suggested conversation starters.
  • From there, a small team of specially trained reviewers assesses the situation. If these reviewers determine that the conversation may indicate a serious safety concern, ChatGPT will send the Trusted Contact a brief notification by email, text message, or, if they have a ChatGPT account, in-app notification.
  • The notification is intentionally limited. It shares only the general reason, that self-harm came up in a potentially concerning way, and encourages the Trusted Contact to check in. To protect user privacy, it does not include chat details or transcripts. The notification also includes a link to expert guidance for navigating sensitive conversations.
  • Users can always remove or edit their Trusted Contact in settings, and the Trusted Contact can remove themselves at any time through our help center.

Serious safety situations like these are rare, but when they do arise, our systems are designed to support timely review and response. No system is perfect, and a notification to a Trusted Contact may not always reflect exactly what someone is experiencing, but every notification undergoes trained human review before it is sent, and we strive to review these safety notifications in under one hour.

Guided by clinicians and safety experts

We developed Trusted Contact with guidance from clinicians, researchers, and organizations that specialize in mental health and suicide prevention. This work is informed by our Global Physicians Network, a network of more than 260 licensed physicians across 60 countries, and our Expert Council on Well-Being and AI. We also worked closely with external organizations, including the American Psychological Association.

"One of AI's biggest promises is how it can foster authentic human-to-human connection and psychological safety. I am encouraged by ChatGPT's Trusted Contact feature, which offers a step forward to human empowerment, especially during moments of vulnerability."
-Dr. Munmun De Choudhury, Ph.D., J. Z. Liang Professor of Interactive Computing at Georgia Tech and member of the Expert Council on Well-Being and AI

Prioritizing safety at every stage

In addition to Trusted Contact, ChatGPT has safeguards to help guide sensitive conversations at every stage. We have continued improving how the system responds to different levels of risk expressed in a conversation:

  • Supporting real-world help: In sensitive moments, ChatGPT may encourage people to contact emergency services, crisis helplines, mental health professionals, or trusted people in their lives.
  • Responding with care: We've worked with 170+ mental health experts to improve ChatGPT's ability to detect and respond to signs of potential distress, de-escalate conversations, and guide people to real-world support when appropriate.
  • Helping people stay in control of their time: In some situations, ChatGPT may suggest taking a break or stepping away after extended use to promote healthy technology habits.
  • Refusing harmful requests: ChatGPT is trained to refuse requests for instructions related to suicide or self-harm. When users ask for this type of information, the system refuses, redirects toward safer responses, and surfaces localized crisis resources.

Continuing to evolve safety in AI

Trusted Contact is part of OpenAI's broader effort to build AI systems that help people during difficult moments. We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress. Our goal is to ensure that AI systems do not exist in isolation; instead, they should help connect people to the real-world care, relationships, and resources that matter most.

Author

OpenAI