October 14, 2025
Introducing the members of the Expert Council on Well-Being and AI and how we'll work together.
We've assembled the Expert Council on Well-Being and AI to help guide our ongoing work to build more helpful ChatGPT and Sora experiences for everyone. The eight-person council brings together leading researchers and experts with decades of experience studying how technology affects our emotions, motivation, and mental health. Their role is to advise us, pose questions, and help define what healthy interactions with AI should look like for all ages.
Earlier this year, we began consulting many of these experts informally, such as when we were developing parental controls and the notification language for parents when a teen may be in distress. As we formalized the new council, we broadened our search to include additional experts in psychology, psychiatry, and human-computer interaction, bringing in new perspectives on how people relate to and are affected by technology. Because teens use ChatGPT differently than adults, we've also included several council members who specialize in building technology that supports healthy youth development.
We remain responsible for the decisions we make, but we'll continue learning from this council, the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people's well-being.
We're grateful to this initial group for their deep expertise and shared commitment to making AI supportive and safe.
We kicked off the council last week with an in-person session to dive deep into OpenAI's current work in these areas and to introduce council members to the teams they will be working with and advising.
Our work with the council will include regular check-ins on our approach and recurring meetings to explore topics like how AI should behave in complex or sensitive situations and what kinds of guardrails can best support people using ChatGPT. As an example, for the rollout of parental controls, we consulted individual members to help us prioritize which controls to build first and how best to notify parents if their teen appears to be in distress. Their feedback shaped the tone of the messages we use, so that they feel caring and respectful to both teens and their families.
The council will also help us think about how ChatGPT can have a positive impact on people's lives and contribute to their well-being. Some of our initial discussions have focused on what constitutes well-being and the ways ChatGPT might empower people as they navigate all aspects of their lives. We'll keep listening, learning, and sharing what comes out of this work.
Alongside the Expert Council on Well-Being and AI, which advises on our broader approach to well-being, we're also working with a multidisciplinary subset of mental health clinicians and researchers within the Global Physician Network to shape our model behavior and policies, and to test how ChatGPT responds in real-world situations. This work spans psychiatry, psychology, pediatrics, and crisis intervention, helping ensure our systems are grounded in clinical understanding and best practices.
We'll have more to share soon about the improvements underway to the main ChatGPT model and what we're learning about how it can best support people.