AI video tools are advancing fast, and OpenAI’s Sora 2 is one of the most powerful consumer-facing ones yet. It lets users generate photorealistic short videos from simple text prompts, including inserting people’s faces into new scenes. While many use it for creativity and fun, recent reports show it’s also being misused in dangerous ways — including the creation of videos that depict AI-generated children in unsettling and inappropriate scenarios.
These videos have appeared on social platforms like TikTok as well as in Sora's own social-style feed, prompting widespread concern: they blur the line between fantasy and real footage and can be shared across the internet with ease.
What’s Happening With Sora 2 and AI Misuse
Sora 2 launched with features that allow users to:
Generate entirely synthetic videos from text prompts
Use uploaded video or audio to create an AI “cameo” of themselves
Share generated clips to social media or in an internal feed
The app’s capabilities are powerful, but they have also opened the door to serious misuse. Within days of broader availability, videos featuring AI-generated children in inappropriate or disturbing contexts began circulating online, drawing strong backlash from users and safety advocates.
These videos often masquerade as ads or playful skits, but they include visual elements that resemble problematic content even when they contain no explicit nudity, and they spread quickly through the app's social-style feed.
Experts say this problem isn’t unique to one platform. As AI makes realistic video creation easier, it also makes it easier for harmful content to be made and shared, sometimes before moderation can intervene.
Why This Is Serious
Safety Risks
AI-generated content involving children — even when fully synthetic and not real footage — can still normalize harmful imagery, encourage exploitation, or be used in harmful online trends. The potential for misused likenesses, bullying, or harassment increases when anyone can generate highly realistic clips in seconds.
Trust and Misinformation
Highly realistic AI videos — of kids or adults — can make it harder to tell what’s real online. This fuels misinformation, harms reputations, and erodes public trust in digital media. Advocacy groups have raised alarms, saying Sora 2 and similar tools need stronger safeguards or even reconsideration before broader public release.
What OpenAI Says and Is Doing
OpenAI’s usage policies for Sora explicitly prohibit sexual content involving minors and other exploitative material, and the company uses automated and human review systems to detect and remove violations.
The platform also includes identity and content moderation tools designed to block requests that generate harmful or inappropriate output related to children.
However, experts warn that even with filters, the rapid creation and spread of content online makes moderation difficult, and further improvements — both technical and policy-based — are essential to reduce these risks.
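To make the moderation concept concrete, below is a minimal sketch of what a platform-side prompt screen could look like. It is purely illustrative and not a description of Sora 2's actual internal pipeline: it assumes OpenAI's public Moderation endpoint (omni-moderation-latest), and generate_video is a hypothetical stand-in for whatever generation step a platform would gate behind the check.

```python
# Illustrative sketch only; not OpenAI's actual Sora moderation pipeline.
# Assumes the public OpenAI Moderation endpoint; generate_video() is a
# hypothetical placeholder for a platform's own generation step.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def prompt_passes_screen(prompt: str) -> bool:
    """Run the prompt through an automated safety classifier before generation."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    ).results[0]
    # Reject anything the classifier flags; minor-related sexual content is an
    # unconditional block and, in a real system, would also go to human review.
    if result.categories.sexual_minors:
        return False
    return not result.flagged


def generate_video(prompt: str):
    """Hypothetical stand-in for a text-to-video generation call."""
    raise NotImplementedError("Video generation is out of scope for this sketch")


def generate_if_safe(prompt: str):
    """Gate generation behind the automated screen."""
    if not prompt_passes_screen(prompt):
        raise ValueError("Prompt rejected by automated safety screen")
    return generate_video(prompt)
```

Even with a check like this at the prompt stage, the generated frames still need their own review, which is part of why the experts cited above stress that filters alone are not enough.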
How This Fits Into the Broader AI Safety Landscape
Concerns about powerful AI tools and harmful deepfakes are rising across the tech world. Tools like Sora 2 make video creation easier than ever, but they also expose gaps in content moderation and safety guardrails. These challenges extend beyond any single platform and include issues such as:
Identity misuse and non-consensual likeness use
Deepfakes in misinformation and scams
Potential exploitation of minors, bullying, and harassment
Difficulties in reliably filtering problematic content before distribution
Regulators, watchdog groups, and digital safety advocates are pushing for stronger practices, including better parental controls, transparent moderation processes, and clearer legal frameworks for AI-generated media.