Our aim with the Sora feed is simple: help people learn what's possible, and inspire them to create. Here are some of the core starting principles to bring this vision to life:
Our recommendation algorithms are designed to surface recommendations that inspire you and others to be creative. Each individual has unique interests and tastes, so we've built a personalized system to best serve this mission.
To personalize your Sora Feed, we may consider signals like:
We may use these signals to predict whether a given piece of content is something you'd like to see and riff off of.
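To make the idea concrete, here is a minimal, purely illustrative sketch in Python of how signals like these might be combined into a single relevance score and used to rank candidate posts. The signal names, weights, and scoring function are assumptions for the sake of example, not a description of Sora's actual ranking system.

```python
from dataclasses import dataclass

@dataclass
class FeedSignals:
    """Hypothetical per-post signals; names are illustrative, not Sora's actual inputs."""
    topic_affinity: float    # how closely the post matches the viewer's interests (0-1)
    engagement_rate: float   # how often similar viewers engaged with this post (0-1)
    creator_followed: bool   # whether the viewer follows the creator
    recency_hours: float     # hours since the post was published

def relevance_score(s: FeedSignals) -> float:
    """Combine signals into one score. Weights are made up for illustration."""
    score = 0.5 * s.topic_affinity + 0.3 * s.engagement_rate
    if s.creator_followed:
        score += 0.1
    # Gently favor newer posts: full bonus at 0 hours, none after about 48.
    score += 0.1 * max(0.0, 1.0 - s.recency_hours / 48.0)
    return score

# Rank a handful of candidate posts by predicted relevance.
candidates = {
    "post_a": FeedSignals(0.9, 0.4, False, 2.0),
    "post_b": FeedSignals(0.5, 0.8, True, 30.0),
}
ranked = sorted(candidates, key=lambda pid: relevance_score(candidates[pid]), reverse=True)
print(ranked)
```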
Parents are also able to turn off feed personalization and manage continuous scroll for their teens using parental controls in ChatGPT.
Keeping the Sora Feed safe and fun for everyone means walking a careful line: protecting users from harmful content while leaving enough freedom for creativity to thrive.
We may remove content that violates our Global Usage Policies. Additionally, content deemed inappropriate for users may be removed from Feed and other sharing platforms (such as user galleries and side characters) in accordance with our Sora Distribution Guidelines. This includes:
Our first layer of defense is at the point of creation. Because every post is generated within Sora, we can build in strong guardrails that prevent unsafe or harmful content before it's made. If a generation slips past these guardrails, we may block that content from being shared.
Beyond generation, the feed is designed to be appropriate for all Sora users. Content that may be harmful, unsafe, or age-inappropriate is filtered out for teen accounts. We use automated tools to scan all feed content for compliance with our Global Usage Policies and feed eligibility. These systems are continuously updated as we learn more about new risks.
We complement this with human review. Our team monitors user reports and proactively checks feed activity to catch what automation may miss. If you see something you think does not follow our Usage Policies, you can report it.
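As a rough illustration of these layers, the sketch below chains a hypothetical generation-time guardrail, an automated feed-eligibility scan with a stricter bar for teen accounts, and escalation of borderline cases to human review. The function names, categories, and thresholds are invented for the example and do not reflect the real systems.

```python
from enum import Enum, auto

class Decision(Enum):
    ALLOW = auto()
    BLOCK = auto()
    NEEDS_HUMAN_REVIEW = auto()

def classify(post_text: str) -> dict:
    """Placeholder classifier: a real system would call trained safety models here."""
    return {"violence": 0.02, "adult": 0.01, "harassment": 0.05}

def generation_guardrail(prompt: str) -> Decision:
    """Layer 1: block clearly disallowed prompts before anything is generated."""
    blocked_terms = {"example_disallowed_term"}  # illustrative only
    return Decision.BLOCK if any(t in prompt.lower() for t in blocked_terms) else Decision.ALLOW

def feed_eligibility(post_text: str, viewer_is_teen: bool) -> Decision:
    """Layer 2: automated scan for policy compliance and age-appropriateness."""
    scores = classify(post_text)
    threshold = 0.3 if viewer_is_teen else 0.6  # stricter bar for teen accounts
    if max(scores.values()) >= 0.9:
        return Decision.BLOCK
    if max(scores.values()) >= threshold:
        return Decision.NEEDS_HUMAN_REVIEW  # Layer 3: escalate borderline cases to people
    return Decision.ALLOW

if generation_guardrail("a koala surfing at sunset") is Decision.ALLOW:
    print(feed_eligibility("a koala surfing at sunset", viewer_is_teen=True))
```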
But safety isn't only about strict filters. Too many restrictions can stifle creativity, while too much freedom can undermine trust. We aim for a balance: proactive guardrails where the risks are highest, combined with a reactive "report + takedown" system that gives users room to explore and create while ensuring we can act quickly when problems arise. This approach has served us well in ChatGPT's 4o image generation model, and we're building on that philosophy here.
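For illustration only, here is one way a reactive report-and-takedown path could be organized: user reports go into a priority queue, reports citing severe reasons jump ahead, and a removal hook acts on each one. Every name, category, and priority rule here is hypothetical, not a description of our actual tooling.

```python
import queue
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    priority: int                       # lower number = handled sooner (illustrative)
    post_id: str = field(compare=False)
    reason: str = field(compare=False)

review_queue: "queue.PriorityQueue[Report]" = queue.PriorityQueue()

def file_report(post_id: str, reason: str) -> None:
    """Reactive path: reports with severe stated reasons jump the queue."""
    severe = {"violence", "self_harm", "exploitation"}  # illustrative categories
    review_queue.put(Report(priority=0 if reason in severe else 1, post_id=post_id, reason=reason))

def process_reports(take_down) -> None:
    """Drain the queue; `take_down` stands in for whatever removal hook the platform provides."""
    while not review_queue.empty():
        report = review_queue.get()
        take_down(report.post_id, report.reason)

file_report("post_123", "harassment")
process_reports(lambda pid, why: print(f"removed {pid}: {why}"))
```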
We also know we won't get this balance perfect from day one. Recommendation systems and safety models are living, evolving systems, and your feedback will be essential in helping us refine them. We look forward to learning together and improving over time.