
Misinformation and deepfakes: AI changes the game of war in Iran

The war in Iran has highlighted how the use of artificial intelligence (AI) in video production can influence public perception during peak information consumption periods, as countries involved in the conflict also seek to shape their own narratives.

But the phenomenon can also have a strong emotional impact in countries directly engaged in the war, prompting their governments to impose strict lockdown measures.

Since the beginning of the war in Iran, easy and inexpensive access to AI video tools has flooded social networks with AI-generated videos and photos showing combat, strikes on civilian areas, or official statements. The result is misinformation that can significantly distort public perception of the war and of the reality on the ground.

“Spectacular images and videos, supposed to show real-time combat scenes and missile strikes, invade social media news feeds, spread quickly, and deceive millions of people,” explains Marc Owen Jones, a media analysis professor at Northwestern University in Qatar, about how the war unfolds online.

The Digital Battlefield

Jones, a specialist in social media influence, misinformation, and online politics on public opinion, believes that social networks have become a battleground for competing narratives in this conflict, with all parties and their supporters now seeking to win “hearts and minds” online.

From the American side, Jones mentions “videos interspersed with excerpts from Hollywood films, a kind of memeification of communication designed to seduce an extreme right-wing aesthetic that rejects empathy in favor of humiliation.”

On the other hand, “Iran is gaining ground, often mocking the United States with its memes, but many AI-generated images seem to exaggerate Iranian military successes, possibly to increase pressure on Gulf countries to push for de-escalation,” he adds.

Deepfakes Generated by AI

Advancements in artificial intelligence make misinformation easier to produce and more convincing. AI tools can be used by anyone to create high-quality videos, images, and audio recordings in seconds.

Examples include videos purporting to show the American aircraft carrier USS Abraham Lincoln burning at sea. The videos were so convincing that President Donald Trump claimed to have called his generals to verify their authenticity.

Trump later spoke on his Truth Social platform, stating: “Not only was it not burning, but it was not even targeted, Iran knows better than to do that!”

Other debunked videos purported to show American soldiers in tears or destroyed buildings in Gulf cities.

“The use of AI is widespread and becoming increasingly difficult to detect,” observes Jones.

Speed and Verification

The speed at which content spreads online complicates verification for the general public.

“In a rapidly evolving conflict, verified information often arrives late, creating a void immediately filled by misinformation,” explains Jones. “When people are worried, they crave information, but this information is often false.”

Unverified content can reach millions of people within minutes, leaving the public with the difficult task of assessing material that is often highly realistic or spread across multiple platforms.

Viral Rumors

In addition to AI-generated combat images, rumors spread widely last week claiming that Israeli Prime Minister Benjamin Netanyahu was dead.

Some users pointed to visual discrepancies in a low-quality video posted on March 13 by Netanyahu’s office, claiming he appeared to have six fingers on one hand, a telltale sign of AI generation.

“Rumors of Netanyahu’s death were accompanied by accusations that his speech was actually an AI-generated video,” says Jones.

Netanyahu then released several “proof of life” videos to dispel the rumors. Even so, speculation about his death persists online.

Bots and Coordinated Campaigns

Some content circulating online could be part of coordinated campaigns aimed at diverting attention, convincing, or influencing public opinion.

“We see shady and anonymous accounts, with histories of multiple name changes and no discernible identity, relaying false information and AI-generated videos,” explains Jones.

These accounts may appear credible but are often linked to state-supported actors or individuals seeking to profit from sensationalized content.

In some cases, automated accounts, or bots, amplify certain narratives by sharing and commenting on posts, giving the impression that they are more widely shared than they actually are.

Parody and Satire

Not all AI-generated videos are intended to deceive. Some are intentionally designed as parodies or satires.

These clips often ridicule or imitate world leaders like Trump and Netanyahu but can still be mistaken for real videos.

According to Jones, “AI-generated deepfakes have reached a critical threshold, with the subtle flaws that allowed them to be detected disappearing, and this technology is now accessible to anyone with a smartphone.”

Examples circulating online include a video depicting Trump as the new supreme leader of Iran and clips showing Netanyahu as a malfunctioning robot or with multiple fingers.

As conflicts evolve rapidly, these videos can become detached from their original context and spread very quickly.

Erosion of Trust

The proliferation of misleading information online makes it increasingly difficult for the public to distinguish between truth and falsehood.

“False information can spread up to ten times faster than accurate information on social media, and corrections are rarely seen or believed as much as the initial false claim,” observes Jones.

“Outrage leads to sharing before fact-checking has time to take place, and this is exactly what malicious actors are seeking,” he continues.

Jones believes that spectacular images should be approached with the same caution as any unverified information.

“The fact that they look real is no longer sufficient proof of their accuracy,” he adds.

As the conflict continues, the battle also rages on social media, leaving ordinary citizens to navigate a complex mix of misinformation, satire, and manipulated content on their own.