
Don't believe everything you see: celebrity deepfakes in advertising create undetectable fake endorsements

In March 2026, British presenter Ashley James, known for her body-positive advocacy, discovered an advertisement showing her promoting weight-loss pills on the set of This Morning. The set, the ITV logo, her voice, her expressions: everything was perfectly imitated. Yet she never said those words. The content was entirely generated by artificial intelligence.

Advertisements using deepfakes exploit the trust we place in familiar faces

Advertorial deepfakes rely on a well-established mechanism. Scammers collect hours of public footage of a celebrity: interviews, TV shows, social media posts. They then train an artificial intelligence model on this data. The model analyzes facial features, expressions, voice, and lip movements. In a few minutes, it can generate a video in which the personality appears to say anything. These systems not only replicate a person's appearance but also imitate their mannerisms over time.

According to an article published by Metro, these AI-generated fake advertisements are now spreading across social media at a pace the platforms struggle to contain. Scammers also dress their videos in the trappings of recognized media: a television set, a channel logo, or a TV news backdrop reinforces the illusion of legitimacy. In Ashley James's case, the fake advertisement reproduced the set and graphic design of This Morning. Human trust relies heavily on visual recognition: seeing a familiar face speaking on camera is enough to trigger a sense of credibility, even in people well informed about the risks.

Advertisements using deepfakes cause massive financial losses affecting all targets

The phenomenon extends well beyond Britain. According to a study published by cybersecurity company Surfshark, deepfake-related scams caused over a billion dollars in losses worldwide in 2025, three times more than in 2024. In France, an 82-year-old retiree lost 350,000 euros after watching a fake advertisement featuring actor Jean Reno promoting a trading platform. The video seemed authentic: the scammers had reproduced the actor's voice and expressions with precision.

In response, Meta filed several lawsuits in March 2026 against advertisers based in Brazil, China, and Vietnam. The technique at issue is known as celeb-bait: exploiting the image of public figures through hyper-realistic montages to redirect victims to fraudulent pages. The group claims its protection program now covers the images of over 500,000 public figures. Detection tools nevertheless struggle to keep up, as generation technologies evolve faster than moderation systems.

Advertisements using deepfakes raise a fundamental question about our relationship to images

Beyond the financial losses, this content raises a deeper issue. Since any video can now be fabricated, visual proof loses its value, yet our brains continue to treat images as evidence of reality. According to cybersecurity experts at McAfee, only 29% of people feel capable of identifying a deepfake. Some clues can still help spot this content: misaligned lip movements, abnormal eye blinking, or an overly smooth skin texture can reveal the deception. But these flaws are disappearing as generative AI tools improve.

For Ashley James, the lesson is bitter. Her face and voice were used to spread exactly the message she has spent years fighting against: the one telling women they need to lose weight. Her case has at least alerted the public to a reality the numbers have long described. In a world where seeing is no longer believing, digital vigilance is becoming a skill as essential as reading or arithmetic.