A 2-week deep dive into Sora’s world of AI deepfakes


Summary

AI for creativity and chaos

OpenAI’s new Sora 2 app lets users generate entire videos using artificial intelligence, blurring the line between real and fake.

Ethical concerns

Experts warn the technology could be misused for deception or explicit content, despite safety guardrails and evolving platform rules.

Accountability gap

As AI tools grow more powerful, questions remain over who’s responsible when the technology causes harm or spreads misinformation.


Full story

OpenAI’s Sora 2, an invite-only platform featuring AI-generated videos, has exploded in popularity since its launch. In its first week, the app surpassed 1 million downloads. But as the app grows, so do ethical concerns about its content, all of which is created with artificial intelligence.

To get a sense of the technology’s potential — and its pitfalls — Straight Arrow News spent two weeks immersed in the platform, then took our findings to experts.

Deepfakes go mainstream

Much of the controversy brewing around Sora 2 is rooted in its potential for creating quick, convincing “deepfakes” — a term that first appeared in 2017, when a Reddit user used AI to swap celebrity faces into videos. Deepfakes used to require advanced skills or technology, but tools like Sora make it possible for anyone to create lifelike clips.


Northern Illinois University professor David Gunkel, who studies the ethics of AI, said the danger lies in how real these creations can look. When humans see a photograph or video, their minds often process the image as real.

“We’ve taken images as always being an index of reality,” Gunkel told SAN. “Now, with these generative AI systems, we can generate all kinds of realistic-looking video that is no longer connected to a reality out there in the world.”

That disconnect, he said, opens the door to manipulation and misinformation.

Not every prompt produces convincing content, and SAN’s experiments yielded mixed results. A sunset cruise looked realistic, while a scene in a chef’s kitchen would likely set off alarm bells for most viewers.

But what users see today will likely change by tomorrow. Or at least the day — or year — after. 

“This technology moves at lightspeed. Law and policy move at pen-and-paper speed,” Gunkel told SAN. “We’re going to be working through this in the next five to 10 years, trying to figure out how to assign responsibility when it involves these generative AI tools.”

The problem with ‘AI fun’

While Sora’s creators added strict safety filters to prevent hyper-realistic or explicit content, not everyone plays by the rules. Some users have already found ways to bypass restrictions to create sexualized videos and even “AI porn.”

Unbiased. Straight Facts.™

In a 2025 survey, 79% of U.S. adults said they interact with AI almost constantly, or several times a day.

Content creator Madeline Salazar told SAN she has experienced that firsthand.

“Somebody was making all these videos of me and my clones, like hanging out, which I thought was so funny at first,” she said. “And then I see more and more videos, and he’s trying to make my clones make out. You can read their prompts trying to get around these guardrails — because you can get around anything. It’s the internet.”

Salazar, who often experiments with AI in her content, said creators should stay alert.

“Proceed with caution,” she said. “Understand that this is an app created for entertainment purposes, but it’s still super early — and one of the first of its kind. We don’t know where it will go.”

OpenAI CEO Sam Altman announced plans to introduce a less-censored version of ChatGPT that would allow erotic material for verified adult users. It’s a major policy change for the platform, which had previously banned such content.

Altman said the company will operate on a “treat adult users like adults” principle.

Online reaction has been mixed. Businessman Mark Cuban questioned why OpenAI would even take the risk.

“This is going to backfire. Hard,” he wrote on X. “No parent is going to trust that their kids can’t get through your age gating. They will just push their kids to every other [large language model].”

OpenAI pauses MLK Sora videos

Because the likeness of deceased figures isn’t necessarily protected, users have been able to generate Sora videos featuring people like Dr. Martin Luther King Jr. and Robin Williams.

In a statement issued Thursday, after SAN’s video was produced, OpenAI said it has been strengthening guardrails around Sora — particularly around depictions of historical and deceased public figures. For the time being, the company is pausing all Sora generations featuring King after some users created disrespectful depictions of the civil rights leader.

In a joint statement with the King estate, OpenAI said it is working to ensure families and representatives have more control over how their likenesses are used. The company thanked King’s daughter, Dr. Bernice King, who reached out on behalf of the estate.

An evolving art form

Despite the risks, experts agree the rise of generative AI is part of a natural progression. Gunkel compared it to earlier creative revolutions — from photography to hip-hop sampling — that transformed how people produce art.

“It’s not so much a turning point as it is an evolution,” Gunkel told SAN. “Since the invention of photography, we’ve been using representations not only to reflect reality, but to create it.”

Salazar believes the same technology that makes deepfakes possible can also democratize creativity.

“You can breathe life into entertainment and content creation,” she said. “The amazing thing about AI is now it has given content creators the ability to produce entertainment-level content. We should be thrilled.”

Balancing innovation and accountability

As Sora 2 and generative AI advance, questions persist about who should be held responsible if — and perhaps when — AI crosses the line.

“Usually, when you use a tool to do something, it’s the user of the tool, not the tool or the manufacturer, who is held accountable,” Gunkel said. “But we’re seeing this as a moving target.”

Gunkel pointed to a recent case against Air Canada, brought by a passenger who followed incorrect information given by the airline’s chatbot.

“Air Canada argued it wasn’t responsible for what the chatbot said, because it’s its own legal entity,” he said. “You can already see how this opens the door for people to dodge responsibility.”

For now, Sora 2 is still invite-only as people experiment with the tech that’s redefining creativity. For an in-depth look at what SAN experienced during our two-week test, watch our video story.

Cassandra Buchman (Weekend Digital Producer) contributed to this report.

SAN provides
Unbiased. Straight Facts.

Don’t just take our word for it.


Certified balanced reporting

According to media bias experts at AllSides

AllSides Certified Balanced May 2025

Transparent and credible

Awarded a perfect reliability rating from NewsGuard

100/100

Welcome back to trustworthy journalism.

Find out more

Why this story matters

Widespread access to AI-generated video platforms like OpenAI’s Sora 2 is raising urgent questions about digital authenticity, creative potential and the challenges of regulating emerging technologies in real time.

Deepfakes and misinformation

Experts warn that tools such as Sora 2 make it easier for anyone to create realistic deepfake videos, increasing the risk of manipulation and misinformation in media and society.

Ethical and legal responsibility

The story highlights ongoing debates about who is accountable for AI-generated content — users or platform creators — underscoring the difficulty of assigning blame and enforcing regulations as technology evolves.

Creativity and democratization

AI’s ability to generate high-quality content empowers creators and broadens access to digital art and entertainment, raising both opportunities and concerns about the future of creative industries.

