Expert Council on Well-Being and AI

October 14, 2025

Introducing the members of the Expert Council on Well-Being and AI and how we'll work together.


We've assembled the Expert Council on Well-Being and AI to help guide our ongoing work to build more helpful ChatGPT and Sora experiences for everyone. The eight-person council brings together leading researchers and experts with decades of experience studying how technology affects our emotions, motivation, and mental health. Their role is to advise us, pose questions and help define what healthy interactions with AI should look like for all ages.

Earlier this year, we began consulting many of these experts informally, such as when we were developing parental controls and the notification language for parents when a teen may be in distress. As we formalized the new council, we broadened our search to include additional experts in psychology, psychiatry, and human-computer interaction, bringing in new perspectives on how people relate to and are affected by technology. Because teens use ChatGPT differently than adults, we also included several council members with backgrounds in building technology that supports healthy youth development.

We remain responsible for the decisions we make, but we'll continue learning from this council, the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people's well-being.

Meet the council members

We're grateful to this initial group for their deep expertise and shared commitment to making AI supportive and safe.

  • David Bickham, Ph.D. - Research Director at the Digital Wellness Lab at Boston Children's Hospital and Assistant Professor at Harvard Medical School. His work looks at how young people's social media use affects their mental health and development.
  • Mathilde Cerioli, Ph.D. - Chief Scientific Officer at everyone.AI, a nonprofit helping people understand the opportunities and risks of AI for children. With a Ph.D. in Cognitive Neuroscience and a Master's Degree in Psychology, her research focuses on how AI intersects with child cognitive and emotional development.
  • Munmun De Choudhury, Ph.D. - J. Z. Liang Professor of Interactive Computing at Georgia Tech. She harnesses computational approaches to better understand the role of online technologies in shaping and improving mental health.
  • Tracy Dennis-Tiwary, Ph.D. - Professor of Psychology at Hunter College and co-founder and CSO at Arcade Therapeutics. She creates digital games for mental health and explores interactions between technology and emotional well-being.
  • Sara Johansen, M.D. - Clinical Assistant Professor at Stanford University and founder of Stanford's Digital Mental Health Clinic. Her work explores how digital platforms can support mental health and well-being.
  • David Mohr, Ph.D. - Professor at Northwestern University and Director of the Center for Behavioral Intervention Technologies. He studies how technology can help prevent and treat common mental health conditions such as depression and anxiety.
  • Andrew K. Przybylski, Ph.D. - Professor of Human Behaviour and Technology at the University of Oxford. He studies how social media and video games shape motivation and well-being.
  • Robert K. Ross, M.D. - A national leader in health philanthropy, public health, and community-based health initiatives. He began his career as a pediatrician and is the former president and CEO of The California Endowment.

How we will work with the council members

We kicked off this council last week with an in-person session: a deep dive into OpenAI's current work in these areas and a chance for council members to meet the teams they will be working with and advising.

Our work with the council will include regular check-ins on our approach and recurring meetings to explore topics such as how AI should behave in complex or sensitive situations and what kinds of guardrails can best support people using ChatGPT. For example, during the rollout of parental controls, we consulted individual members to help us prioritize which controls to build first and how best to notify parents if their teen appears to be in distress. Their feedback shaped the tone of those messages so they feel caring and respectful to both teens and their family members.

The council will also help us think about how ChatGPT can have a positive impact on people's lives and contribute to their well-being. Some of our initial discussions have focused on what constitutes well-being and the ways ChatGPT might empower people as they navigate all aspects of their lives. We'll keep listening, learning, and sharing what comes out of this work.

Expanding our safety work

Alongside the Expert Council on Well-Being and AI, which advises on our broader approach to well-being, we're also working with a multidisciplinary subset of mental health clinicians and researchers within the Global Physician Network to shape our model behavior and policies and to test how ChatGPT responds in real-world situations. This work spans psychiatry, psychology, pediatrics, and crisis intervention, helping ensure our systems are grounded in clinical understanding and best practices.

We'll have more to share soon about the improvements underway to the main ChatGPT model and what we're learning about how it can best support people.


Author

OpenAI

