
YouTube expands its AI deepfake detection tool to celebrities

YouTube is expanding its AI-powered deepfake detection feature to Hollywood celebrities, which could lead to the removal of certain videos featuring them. With this expansion, the platform is not limiting itself to content removal: it is also beginning to structure how artists' synthetic likenesses are managed.

The tool allows public figures to identify AI-generated content on YouTube that reproduces their face, and then request its removal. Not all requests will be granted: reviews remain governed by the Google-owned platform's privacy policy, which continues to protect uses such as parody and satire.

YouTube initially launched tests with creators last fall, before expanding the program to politicians and journalists in March. Opening it to celebrities marks a new milestone, extending coverage to individuals who do not have their own YouTube account.

To join the program, participants must provide identification and a video selfie. Detection focuses on the face and does not currently cover voice or other identifying elements.

However, the system does not grant an automatic right to deletion. YouTube indicates that creators already enrolled in the program have requested the removal of only a very small number of videos, suggesting the tool is used as much to monitor how an image is used as to remove content at scale.

YouTube likens this feature to Content ID, its tool for identifying copyrighted content. The comparison has a significant limitation: unlike Content ID, likeness detection does not yet allow individuals to monetize third-party videos that use their image.

Nevertheless, that is the direction the whole system points toward. The platform recently announced a feature allowing creators to digitally clone their appearance with AI for insertion into videos, while the agency CAA, which supported the program's extension, already maintains a biometric database of its clients to protect or commercially exploit their image.

This logic opens up several scenarios for Hollywood. Some celebrities may want AI deepfakes targeting them removed, while others may tolerate fan creations. Part of the industry may eventually accept these uses, as long as the framework is controlled and the revenue is captured.