🤖 Just two years ago, AI-generated videos were little more than a curiosity — remember that viral clip of a grotesque Will Smith shoving clumps of spaghetti into his mouth with hands that sported one too many fingers? The technology seemed years away from producing anything convincing.

Fast forward to 30 September 2025, and OpenAI’s new Sora 2 represents a quantum leap in capability. The latest video generation model is “more physically accurate, realistic, and controllable than prior systems, featuring synchronized dialogue and sound effects,” the company boasts. But with that power has come a troubling flood of disturbing, copyright-infringing, and potentially dangerous content that raises serious concerns about whether the company’s safeguards are working at all.

An immediate descent into chaos: Within hours of Sora 2’s release, the app’s social feed filled with copyrighted characters in compromising situations and graphic scenes of violence and racism, reports the Guardian. The videos included bomb and mass-shooting scares with panicked people screaming and running across college campuses, fabricated war zone footage from Gaza and Myanmar, and CCTV footage of crimes being committed… by deepfaked OpenAI CEO Sam Altman. The app skyrocketed to the number one spot in Apple’s App Store within just three days of its limited invite-only release.

Perhaps nothing illustrates the ethical void at the heart of Sora 2 better than what users have done with deceased physicist Stephen Hawking. Videos show Hawking being knocked to the ground by wrestlers in a WWE-style ring, toppled from his wheelchair by blows to the face from a UFC fighter, trampled by a raging bull, and attacked by a crocodile, according to Futurism. Hawking was an outspoken critic of AI during his life, saying its development could be the “worst event in the history of our civilization.”

While OpenAI’s safety documentation states the company will take measures to block depictions of public figures, it allows the generation of historical figures, seemingly creating a loophole for depicting the deceased. Yesterday, late actor Robin Williams’ daughter Zelda took to Instagram to plead with users to stop sending her AI-generated videos of her father. “Stop believing I wanna see it or that I’ll understand, I don’t and I won’t,” she wrote. “If you’ve got any decency, just stop doing this to him and to me […] It’s dumb, it’s a waste of time and energy, and believe me, it’s not what he’d want.” This sentiment was seconded by Bernice King, daughter of Martin Luther King Jr.

Guardrails that don’t guard: Perhaps most troubling is how easily Sora 2’s supposed safety measures are being circumvented. Every Sora 2 video includes a visible watermark, but according to 404 Media, at least half a dozen websites will already remove it, and every service the outlet tested did so seamlessly within seconds. Tools that strip AI-provenance metadata by subtly altering a video’s hue and brightness are defeating the C2PA guardrail OpenAI claims provides protection, says cybersecurity expert Rachel Tobac. She warned that AI-generated content stripped of watermarks and metadata could be used to scam people out of their savings, disenfranchise voters during elections, manipulate stock prices, inflame tensions between groups, and incite violence or panic.
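
For a sense of why that guardrail is so fragile, here is a minimal sketch (Python with Pillow, operating on a single extracted JPEG frame; the filenames and the exact brightness tweak are hypothetical, and this illustrates the general weakness rather than any specific stripping service): C2PA Content Credentials travel as metadata segments inside the file, so any pipeline that decodes the pixels, nudges them imperceptibly, and re-encodes the result writes a fresh file with no manifest left to verify.

```python
# Hypothetical illustration only: C2PA provenance lives in the file's metadata,
# not in the pixels, so re-encoding after a tiny visual tweak discards it.
from PIL import Image, ImageEnhance  # pip install Pillow

def strip_provenance(src_path: str, dst_path: str) -> None:
    """Re-encode a frame with an imperceptible brightness change.

    Pillow writes a brand-new JPEG and does not carry over the original
    metadata segments (EXIF, XMP, the C2PA manifest) unless told to, so the
    output has no Content Credentials left for a verifier to find.
    """
    frame = Image.open(src_path).convert("RGB")
    frame = ImageEnhance.Brightness(frame).enhance(1.02)  # ~2% brighter
    frame.save(dst_path, quality=95)  # fresh encode; metadata not copied

strip_provenance("sora_frame.jpg", "sora_frame_clean.jpg")  # hypothetical paths
```

The same logic extends to whole videos: once the file is re-encoded without its provenance metadata, nothing in the pixels themselves marks the content as synthetic, which is exactly the gap Tobac is warning about.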

OpenAI is throwing users under the copyright bus: Videos featuring copyrighted material from Disney, Pokémon, and other historically litigious media companies immediately flooded the Sora 2 feed thanks to OpenAI’s “opt-out” gambit. According to NYT, shortly before releasing the model, OpenAI contacted talent agencies and studios to alert them that they would have to opt out if they didn’t want their copyrighted material replicated. After threats of litigation, OpenAI updated its terms of use to shift the burden and blame onto users, telling Sora account holders that they are responsible for ensuring that what they generate doesn’t break any laws, and that they alone are legally liable for any violations.

Want to delete your Sora account? Too bad. Unless you’re willing to go no contact with OpenAI and all of its tools, your Sora account is here to stay. The company is effectively holding users’ ChatGPT access hostage: deleting your Sora account means deleting your ChatGPT account and being barred from ever signing up again.