SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

OpenAI details Sora 2 safeguards on likeness & teens

Wed, 25th Mar 2026

OpenAI has outlined the safety measures for its Sora 2 video generation model and Sora app, adding safeguards around provenance, likeness, teen protections, harmful content and audio.

The update explains how OpenAI plans to manage risks as it expands Sora beyond text prompts into tools that can generate video from uploaded images, include synthetic speech and let users create consent-based digital characters built from their own appearance and voice.

Each video generated with Sora carries visible and invisible provenance markers. All Sora videos embed C2PA metadata, an industry standard for content authenticity, and OpenAI also uses internal reverse-image and audio search tools to trace videos back to Sora.

Many outputs also include visible moving watermarks with the creator's name. Videos created from images featuring people will always carry watermarks when shared.
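For readers curious what "embedded C2PA metadata" looks like in practice: the C2PA specification's binding for ISO BMFF containers (the MP4 family) stores the manifest in a top-level `uuid` box, so locating a candidate manifest starts with walking a file's top-level boxes. The sketch below is illustrative only, not OpenAI's implementation; `list_top_level_boxes` is a hypothetical helper name, and real-world verification should use a dedicated validator such as the Content Authenticity Initiative's open-source c2patool.

```python
import struct

def list_top_level_boxes(data: bytes):
    """Return (box_type, size) pairs for the top-level ISO BMFF boxes in `data`.

    A `uuid` box in the result is a candidate location for a C2PA manifest
    under the spec's BMFF binding (illustrative sketch, not a validator).
    """
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        # Each box starts with a 32-bit big-endian size and a 4-byte type code.
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8].decode("ascii", errors="replace")
        if size == 1:
            # size == 1 means a 64-bit "largesize" follows the type field.
            if offset + 16 > len(data):
                break
            size, = struct.unpack_from(">Q", data, offset + 8)
        elif size == 0:
            # size == 0 means the box extends to the end of the file.
            size = len(data) - offset
        if size < 8:
            break  # malformed box; stop scanning rather than loop forever
        boxes.append((box_type, size))
        offset += size
    return boxes
```

Walking boxes only finds where a manifest might live; checking its signatures and assertions requires a full C2PA implementation.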

Likeness controls

A central part of the policy covers uploaded images of real people. Users will be able to make videos from photos of family and friends if they attest that they have consent from those featured and the rights to upload the media.

These image-to-video creations face stricter moderation than the character system, previously known as the cameo feature. Images that include children or young-looking people will be subject to tighter limits on what can be generated from them.

OpenAI also described a character feature designed to let users control their own likeness, including both appearance and voice. Users can decide who may use their characters and can revoke access. Videos that include a user's character, including drafts created by others, remain visible to that user for review, deletion or reporting.

Additional restrictions apply to videos that use characters. Users can enable an even stricter setting intended to limit major changes to appearance, embarrassing scenarios and broader inconsistencies in identity.

OpenAI blocks depictions of public figures unless those figures are used through the character feature. The approach reflects continuing pressure on AI companies to prevent impersonation, misinformation and non-consensual synthetic media involving recognisable individuals.

Teen protections

OpenAI also detailed how Sora will handle younger users. Teen accounts will face stronger restrictions on mature material, while the content feed will be filtered to remove material considered harmful, unsafe or unsuitable for their age group.

Adult users will not be able to initiate direct messages with teens, and teen profiles will not be recommended to adults. Parents using ChatGPT controls will be able to manage whether teens can send and receive direct messages and can also choose a non-personalised feed in the Sora app.

Teen users will also face default limits on continuous scrolling. OpenAI did not specify the thresholds, but the measure places Sora alongside other digital platforms that have introduced design changes aimed at reducing compulsive use among younger audiences.

Content filtering

Sora uses a layered moderation system at both the generation stage and in the social feed. At creation, guardrails are intended to block unsafe material before it is produced by checking prompts, generated frames and audio transcripts.

The listed categories include sexual material, terrorist propaganda and the promotion of self-harm. OpenAI said it had tightened policies compared with image generation because video adds realism, motion and sound.

After content is created, automated systems scan material shared to the feed against the company's global usage rules and filter out content deemed unsafe or age-inappropriate. Those systems are updated as new risks emerge, while human reviewers focus on the most serious harms.

Audio checks

Audio generation receives separate treatment in the policy. Sora scans transcripts of generated speech for possible violations and blocks attempts to make music that imitates living artists or existing works.

Its systems are designed to detect prompts seeking such imitation, and OpenAI says it honours takedown requests from creators who believe an output infringes their work. The issue has become a major point of dispute across the AI sector as rights holders challenge synthetic media systems trained on or resembling copyrighted material.

User recourse

Users will choose whether to publish videos to the Sora feed and can remove published content at any time. Every video, profile, direct message, comment and character can also be reported for abuse.

Users can also block other accounts, preventing those accounts from viewing their profile or posts, using their character, or contacting them through direct messages.

"You choose when and how to share your videos, and you can remove your published content at any time," OpenAI said.