The armed conflict against Iran launched on February 28 by Washington and Tel Aviv was quickly labeled the "first AI war." This claim is misleading in several ways. Not only has AI been used heavily in recent conflicts, including by Israel in Gaza, but AI as a digital tool for data processing and analysis has a long history in armed conflict, with technical foundations dating back to World War II.
The situation in Iran is nonetheless unique, given the unprecedented sophistication of these tools and the unprecedented reliance of armies on them. It also differs from the conflict in Gaza in that AI was this time deployed against a state adversary in a high-intensity war. Moreover, states have never communicated so openly about their use of these systems. This open communication, together with the dramatic consequences of some strikes, raises questions about the compatibility of these practices with international law.
Fact Check:
This article examines the use of AI in the recent conflict with Iran and the ethical and legal questions its deployment in warfare raises.
Context:
The use of AI in military operations, and the consequences and responsibilities that come with it, are explored here in the context of the conflict with Iran.
The Facts: AI Utilization in the Iran War
The use of AI by Israel in its war against Hamas had previously been revealed by the magazine +972. In the conflict with Iran, however, it was the American authorities themselves who announced their use of AI.
American military forces acknowledged using AI systems to compile and rank a target list at lightning speed, a process that reportedly led to more than 1,000 highly precise strikes in the first twenty-four hours of the conflict. One of those strikes hit a school in Minab, killing around 170 civilians, mostly children. The United States acknowledged responsibility for the strike, attributing it to erroneous information that led to the attack being authorized.
Such errors are significant: several media outlets and NGOs have questioned the supposed link between the school and a naval base. The incident underscores concerns about the role of AI in warfare and the need for human oversight and ethical safeguards in military decision-making.
The Legality of AI Usage
The use of AI in military operations raises questions under the international law of armed conflict. The principles of distinction and precaution, which require the protection of civilians and civilian objects during hostilities, are directly at issue in the AI-guided strikes in Iran.
Legal, Political, and Moral Responsibilities
The article then turns to the individual responsibility of those carrying out attacks and the accountability of private AI companies, shedding light on the potential legal, political, and moral repercussions of these actions in conflict zones.
The role of AI firms in developing faulty systems, and the ethics of selling AI technology for military purposes, are also scrutinized, underscoring the need for regulatory frameworks and ethical safeguards in the military AI sector.
The article concludes by asking whether a legal framework governing the use of AI in warfare is feasible, or even desirable, given the complex interplay of technological advances, ethical dilemmas, and political motivations in contemporary conflicts.
