
YouTube launches a tool to protect Hollywood stars from deepfakes using their image


Deepfakes are proliferating on social networks, and celebrities are struggling to protect their image. Faced with this threat, YouTube is now rolling out a detection tool for Hollywood stars. The initiative marks a turning point in the fight against AI-generated content.

YouTube opens its anti-deepfakes tool to celebrities

Last February, a video showing Brad Pitt and Tom Cruise in a fight went viral. In the fifteen-second clip, the two actors trade blows on the roof of a dilapidated building. The ultra-realistic scene was entirely fake, generated by artificial intelligence.

The video shocked Hollywood. Google’s platform is now offering a concrete response: YouTube is opening its deepfake detection system to a wide range of public figures.

The system was tested as early as late 2024 with influential creators, politicians, and journalists. It is now available to actors, musicians, and athletes, free of charge.

“I see this as a fundamental responsibility. For public figures, celebrities, image and reputation are essential for their income.”

A system inspired by Content ID

The process is simple. The personality or their team submits their likeness to the platform, and YouTube then scans published videos to identify AI-generated imitations.

Identified content can remain online or be subject to a removal request; the choice belongs to the celebrity concerned. The system is modeled on Content ID, the copyright protection system already used on the platform.

  • Submission of the image by the celebrity or their team
  • Automatic analysis of published videos
  • Detection of AI-generated imitations
  • Choice between keeping or removing the content
  • Parodies and satires allowed according to the rules

Deepfakes become a major industry concern

In a few months, manipulated content has moved beyond curiosity. The viral image of Pope Francis in a puffer jacket made an impact. But recent uses have crossed a much more worrying threshold for the film industry.

Last fall, OpenAI launched Sora, a video generation tool that went viral. Social networks were flooded with videos featuring well-known figures, including Martin Luther King Jr. The company eventually suspended the generation of videos imitating the late civil rights activist. The application has since been shut down.

A few months later, fake videos of fights between stars multiplied thanks to Seedance 2, developed by ByteDance. Charles Rivkin, chairman of the Motion Picture Association, denounced the unauthorized, large-scale use of protected works. Some in the industry now speak of a “moment of panic.”

Hollywood does not completely reject these technologies

Despite the risks, the American film industry is not closing the door on deepfakes. Agencies such as Creative Artists Agency are investing in specialized companies, focusing on creative uses of these tools.

The proliferation of fan-made trailers using deepfakes has been well received. Pam Abdy, co-CEO of Warner Bros. Pictures, sees it as a sign of public demand. According to YouTube, creators requested the removal of only a small percentage of flagged content during the pilot program.

The monetization question remains pending

Through Content ID, copyright holders can request the removal, demonetization, or monetization of infringing videos. Revenues can be shared with the uploader. However, this option does not yet exist for deepfakes of celebrities.

Mary Ellen Coe, YouTube’s Chief Business Officer, is prioritizing protection for now. The platform wants to focus on responsibility before addressing rights holders and monetization. This cautious approach responds to industry concerns.

Agents, managers, and lawyers are already anticipating future challenges. Alex Shannon, Director of Strategic Development at CAA, highlights the complexity of videos involving multiple talents. The agency even offers a service called “CAA Vault,” which stores images of its clients for potential future monetization.

Videos that mislead viewers or could substitute for an artist’s work may be removed, while parodies remain allowed under platform rules. The line stays blurry for fan creations, such as the increasingly common fictional trailers.
