A Blog by Jonathan Low


Oct 25, 2025

OpenAI's Deepfake Legal Problem Just Got Lots More Expensive

AI companies have, for the most part, gotten away with using the copyrighted content created by others to train their models - and pad their margins.

But the latest generation of AI visual content models like OpenAI's Sora is leading to the creation of deepfakes showing the likes of OpenAI CEO Sam Altman barbecuing Pikachu and SpongeBob cooking crystal meth. The new issue is that the use, without permission, of well-known fictional characters is less likely to be accepted by courts than Big Tech's earlier arguments about training and learning were. JL

Aaron Mak reports in Politico:

AI recreations based on iconic fictional characters - like SpongeBob and Ronald McDonald - raise new questions, setting the stage for another major clash between Hollywood and Silicon Valley. Copyright has become an issue since the rise of chatbots and gen AI because of the argument over whether large language models can legally be trained on copyrighted texts. That one pits major copyright holders like publishing houses and famed authors against the cash-rich companies inhaling their work. (But) for new visual content generation systems like OpenAI’s Sora 2 and Google’s Gemini, it’s the depictions of characters they’re producing. “Courts might accept copying for learning, but less forgiving when AI models generate recognizable images where infringement risk is likely higher."

For the past month, the internet has become flooded with new AI-generated images of popular characters in highly controversial situations — like SpongeBob cooking crystal meth, Ronald McDonald fleeing the police O.J. Simpson style or Sam Altman barbecuing Pikachu.

The source is OpenAI’s latest video generation model, Sora 2, which was launched at the end of September — and quickly created a new kind of legal headache for tech companies and copyright holders.

These AI recreations based on iconic fictional characters raise a host of new and unsettled questions, setting the stage for another major clash between Hollywood and Silicon Valley.

Copyright has become an electrified issue since the rise of chatbots and generative AI, mostly because of the argument over whether large language models can legally be trained on copyrighted texts. That one pits major copyright holders like publishing houses and famed authors against the fast-moving, cash-rich companies inhaling their work.

This new argument is different. For new visual content generation systems like OpenAI’s Sora 2, Midjourney and Google’s Gemini, it’s their outputs — the depictions of characters they’re producing — that raise more potential problems.

Abdi Aidid, a University of Toronto law professor who studies AI and intellectual property, told DFD, “Courts might accept copying for transformative learning, but they may be less forgiving when AI models generate recognizable [...] images where infringement risk is likely higher[.]”

Meta and Anthropic have been largely successful at dodging their first round of copyright challenges — focusing on the use of books to train chatbots — by raising a fair use defense, which allows people to use copyrighted works for limited purposes in a transformative fashion.  

Now, though, the copyright issues at play are fundamentally different and more complex. Images and video often get broader copyright protections than text does, because they are usually seen as more expressive, which can limit fair use exemptions. That’s likely going to be a distinguishing factor for any judge.

“When you use works to train a model, you're basically using them not for the expression [...] but you're using them as data,” said Pamela Samuelson, a UC Berkeley digital copyright professor who co-directs its law and technology center. When it comes to visual outputs, she said, “There's something much more immediately expressive about graphical works, particularly characters.”

Courts may have to take up other nuanced issues, like how similar these AI-generated images look to the copyrighted originals. In a case from the 1980s, Warner Bros. sued the creators of a TV series called "The Greatest American Hero" for violating its Superman copyright. The court decided that the show's main character wasn't similar enough to Superman to be infringing, partly because he was "slight of build, nonmuscular, informally dressed, weak chinned and has long blond corkscrew curls."

That means that while users are clearly using Sora and Midjourney to generate images that look similar to protected characters, judges do have wiggle room when deciding if the images violate the law. “It's got to be really close in order to infringe,” said Samuelson.

Users could also find themselves on the hook. With such an unsettled legal issue, it’s not even clear who exactly is responsible when an AI violates copyright. Is it the developer of the tool, the user or both?

According to Samuelson, courts will likely have to figure out whether older cases concerning videotapes and music file sharing make sense in the age of AI. In a landmark 1984 Supreme Court case, Disney and Universal sued Sony over Betamax — one of the first consumer VCRs. The studios argued that people were using Betamax to record copyrighted movies that were aired on TV, and that Sony should be liable for helping them do it.

The court found in part, though, that Betamax was capable of doing a lot more than what the studios were complaining about. For instance, the devices could record religious and educational programs that weren’t protected by copyright. Similarly, AI image generators can do a lot more than just reproduce copyrighted content, though it’s unclear whether the Betamax logic still holds.

“Whether the generative AI system developer is liable for infringement [is] a kind of untested question at this point,” said Samuelson.

“We’re engaging directly with studios and rightsholders, listening to feedback, and learning from how people are using Sora 2,” OpenAI’s media partnership VP Varun Shetty said in a statement.
