
When artificial intelligence enters the battlefield

A Rationalization of War

“Human beings will always make the final decision on which targets to strike and when to do so. But advanced AI tools can reduce the duration of processes that used to take hours or even days to just a few seconds,” said Admiral Brad Cooper, the commander overseeing the American war effort in Iran, on March 11. He praised the benefits of using artificial intelligence (AI) in Operation Epic Fury, conducted jointly by the United States and Israel. While he did not specify which “intelligent” systems the Pentagon was using, the Washington Post revealed that the Americans were relying on a mission control system named Maven.

Developed by Palantir, the company co-founded by Peter Thiel, Maven acts as the brain behind American strikes. Its strength lies in handling the entire “kill chain”, from target identification through legal approval to strike initiation. The platform sifts through data from satellites, drones, human intelligence, and intercepted communications to carry out its tasks. According to US Central Command (CENTCOM), this allowed the United States to strike 1,000 targets within the first 24 hours of the conflict. It marks the first large-scale use of the technology against Iran, though the US had already relied on AI in 2024 during bombings in Iraq, Syria, and Yemen.

Gaza as a Laboratory

The use of AI tools in a military context has raised ethical questions in the past. According to reports from various international media outlets, corroborated by a United Nations report, Israel has been employing AI tools since October 7, 2023, to carry out strikes in Gaza.

The Israeli military is reportedly using tools like Gospel to identify buildings and structures to hit, Where’s Daddy? to track targets to their homes, and Fire Factory to assess its ammunition capabilities in real time and assign targets to combat platforms. Additionally, Israel has reportedly identified up to 37,000 targets associated with Hamas and the Palestinian Islamic Jihad using an AI tool named Lavender.

An investigation by The Guardian, based on testimonies from Israeli soldiers, suggests that the decision-making logic of these tools remains opaque, with human verification of targets reduced to as little as twenty seconds during operations tracking Hamas leaders. Under such time pressure, operators may mechanically accept the AI’s proposals without being able to question the machine’s recommendations. Furthermore, these tools operate with a statistical mechanism that tolerates a certain level of civilian casualties depending on the military objective pursued, which may help explain the high number of civilian victims in Gaza.

Ethical and Legal Questions

To highlight the moral limits of using AI in warfare, Professor Olivier Sibony cites the bombing of a school in Iran on February 28, which killed 150 people. Initial findings from a military inquiry, reported by The New York Times, point to a database update issue as the cause.

“We do not yet know if it was the AI that made this mistake or if it was human error. If it was the AI, it confirms that these tools are not perfect. Assuming it was a human error, would that be enough to say ‘let’s entrust the control of our missiles and drones to AI’? My conviction is no. Because there is an ethical reason here, but also a legal one,” warned Professor Sibony.

For his colleague Éric Hazan, it is crucial to keep humans in the decision-making loop, which requires updating military doctrines. He asks: “Have we thought about it? Has a military doctrine been established as it has been with other weapons in the past? Today, we have the weapons, but the military doctrine has not been settled, and this may be where the problem lies.”