By Dr. Tawhid CHTIOUI, Founding President of aivancity School of AI & Data for Business & Society; selected by Keyrus as one of the 25 most influential global figures in the field of AI and data (January 2025).
War has always hinged on a profoundly human decision: identifying the enemy, assessing the threat, and deciding to strike. Even as weapons have become more sophisticated, and even as technology has transformed the battlefield, that decisive moment—when reality is interpreted and the choice to use violence is made—has remained a “human” act.
Today, a new question arises: What does war look like when the analysis leading up to a decision is generated by algorithms? What does a military strike mean when it is based on a statistical model trained on massive amounts of data? How do we understand an error when the identification of a target results from a probabilistic calculation? And more broadly, what becomes of human responsibility when artificial intelligence systems begin to shape the way armies perceive, interpret, and prioritize the battlefield?
These issues are no longer the stuff of science fiction. They are now at the heart of contemporary conflicts.
On February 28, 2026, as U.S. and Israeli airstrikes against Iran began, a missile struck a complex in the southern city of Minab. Among the buildings destroyed was the Shajareh Tayyebeh Elementary School, which hundreds of children had attended that morning. The attack sent shockwaves around the world. Very quickly, beyond the human tragedy, a broader question emerged in public debate: several analyses pointed to the use of artificial intelligence systems in the planning and selection of targets for this military campaign.
In the days that followed, two conflicting narratives emerged. For some, this event signaled the dawn of a new era in which machines would decide the fate of humans. For others, however, artificial intelligence would improve the precision of military operations and reduce human error.
Between these two often exaggerated interpretations, however, a fundamental question remains largely obscured by technological confusion and the emotion of the moment: what role does artificial intelligence actually play in modern warfare?
For the very term “artificial intelligence” now encompasses an extremely diverse range of technical realities. In public discourse, it oscillates between two simplified portrayals: that of an automated war dominated by autonomous machines, and that of a technology capable of delivering strikes with almost surgical precision. The reality is more complex.
The aim of this op-ed is therefore not to comment on a specific episode of the conflict or to contribute to the political polarization it has sparked. Rather, it is simply to provide an analytical perspective on the actual role of artificial intelligence in contemporary military systems, in order to understand what these technologies are transforming—and what they are not transforming—in the very nature of war.
For a profound transformation does indeed appear to be taking shape. War is no longer waged solely by humans aided by machines. It is beginning to be shaped by algorithmic architectures capable of analyzing vast amounts of data, identifying patterns, prioritizing targets, and recommending operational actions.
In other words, machines are not yet waging war in place of humans. But they are now playing a role in the cognitive organization of the battlefield.
To understand this transformation, we must first dispel a key misconception: contrary to what is often suggested in public discourse, the current war is not a fully automated war.
This is precisely the point that needs to be addressed first.
I. The Illusion of an Automated War
The first challenge when discussing artificial intelligence in modern warfare stems from a profound misunderstanding: the widely held belief that today’s conflicts are already being waged by autonomous machines capable of deciding on their own to strike.
This portrayal owes as much to science fiction as it does to the contemporary fascination with so-called “smart” technologies. It fuels the image of a battlefield dominated by military robots, where algorithmic systems would decide for themselves when to use lethal force.
The reality is quite different.
In the vast majority of current military operations, the decision to strike is still formally made by humans. Artificial intelligence systems do not function as autonomous agents capable of deciding to kill. Their role lies upstream of the decision-making process, in the phases of analyzing, interpreting, and prioritizing information.
Yet it is precisely at this level that the transformation is most profound.
Modern warfare has become an information-driven phenomenon on an unprecedented scale. Observation satellites, drones, electronic sensors, communications intercepts, and surveillance systems now generate enormous volumes of data. In some theaters of operation, several million images and signals can be collected every day.
The strategic challenge, therefore, is no longer simply to obtain information. It is to quickly understand what that information means.
It is against this backdrop that artificial intelligence has gradually become a key tool for the military. Machine learning systems make it possible to analyze satellite imagery, detect suspicious structures, identify unusual patterns in data streams, and cross-reference intelligence databases from multiple sources.
In other words, artificial intelligence now acts as a cognitive accelerator for military intelligence. It does not replace human decision-making, but it profoundly transforms the way in which those decisions are prepared.
In operational practice, these systems do not take the form of a “central AI” that commands the battlefield. They are part of complex information architectures that interconnect sensors, databases, analytical models, and decision-making interfaces. Military doctrines now speak less of a “kill chain”—a linear chain of detection and attack—and more of a “kill web,” a distributed architecture in which information flows continuously between different systems and actors.
The algorithm does not launch the attack. It helps to map out the battlefield, prioritize threats, and speed up operational planning. This development explains why some militaries now claim they can conduct operations with far fewer personnel than before. In some cases, a few dozen analysts supported by algorithmic systems can accomplish work that once required hundreds, or even thousands, of intelligence specialists.
But this transformation also introduces a new ambiguity.
As artificial intelligence systems organize the flow of information and identify potential targets, military leaders no longer have a direct view of the battlefield. Instead, they see a representation of it that has already been filtered, analyzed, and structured by statistical models.
War does not, therefore, become automated.
It becomes algorithmic in the way it is perceived and interpreted. And it is precisely in this shift that the most profound transformation lies. For as the interpretation of reality increasingly relies on algorithmic systems, another change begins to emerge: a change in the way decisions are made.
II. The Real Shift and the Risk of Cognitive Delegation
Perhaps the most profound transformation brought about by artificial intelligence in military systems lies neither in weapons nor in the automation of operations. It lies elsewhere, in a more subtle but potentially far more fundamental shift: the way humans make decisions.
Traditionally, military decision-making has been based on a relatively clear process. Decision-makers receive intelligence information, analyze it, weigh it against their experience and judgment, and then make a decision. Artificial intelligence does not eliminate this process. But it is gradually shifting the balance.
As the volume of available data grows (satellite imagery, drone footage, electronic signals, surveillance data, etc.), it becomes increasingly difficult for human analysts to interpret this information directly. Algorithmic systems are therefore emerging as indispensable tools for detecting correlations, identifying anomalies, and prioritizing relevant information. This development marks a subtle yet fundamental shift.
Initially, the algorithm functions as a support tool. Analysts review the data, use the available tools, and retain control over the interpretation. But when the volume of information becomes too great, the dynamic can gradually shift.
Decision-makers no longer start directly with the data. They start with the interpretation already generated by algorithmic systems.
The decision-making process can then imperceptibly shift from a situation where decisions are based on data to one where the interpretation proposed by the algorithm is simply validated.
This shift may seem minor. Yet it profoundly transforms the nature of decision-making. This is because military leaders no longer have a direct view of the complexity of the battlefield. Instead, they perceive a representation of it that has already been filtered, structured, and prioritized by statistical models. The algorithm does not make the final decision. But it helps define the range of possible decisions.
In this context, the major risk is not that of machines becoming fully autonomous—a scenario that remains largely hypothetical—but rather the emergence of a gradual cognitive dependence on algorithmic systems.
The more capable models become, the more reliable they are perceived to be. And the more reliable they are perceived to be, the more humans may be tempted to rely on their recommendations. This dynamic is well documented in the literature on automated systems. Researchers refer to this as “automation bias”—that is, the tendency of human operators to place excessive trust in the recommendations generated by algorithmic systems.
In a military context, this development raises a fundamental question: when our understanding of reality increasingly relies on algorithmic systems, who is actually thinking about the battlefield?
Artificial intelligence does not yet replace human decision-making. But it is already helping to shape the way those decisions are made. And this is perhaps where the most profound transformation of modern warfare is taking place.
III. Probabilistic Warfare
One of the most significant—and least understood—effects of artificial intelligence in military systems stems from the very nature of these technologies. Contrary to a widely held belief in public discourse, AI is not an infallible machine capable of producing certainties. Rather, it operates on a fundamentally different principle: probabilistic estimation.
Machine learning models do not determine with certainty that an object is a military target. They assess the probability that it is one based on correlations detected in the data used to train them. A system may thus estimate that a building, vehicle, or individual matches a military profile with an 85% probability. In many civilian fields, such a probability level may be considered sufficient to guide a decision. But in war, the remaining 15% is not merely a statistical margin. It can mean human lives…
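To make the point concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it reflects any real military system: the feature names, weights, and the 0.80 threshold are all invented for the example. It simply shows how a model's output is a probability that a human-chosen threshold turns into a yes-or-no flag, and why the residual uncertainty never disappears.

```python
# A toy, purely illustrative sketch: the feature names, weights, and
# threshold are invented and stand in for a trained model's output.

def classify(features: dict) -> float:
    """Return a probability-like score for a single observation."""
    weights = {"vehicle_density": 0.5, "night_activity": 0.3, "signal_traffic": 0.2}
    score = sum(weights[k] * features.get(k, 0.0) for k in weights)
    return min(max(score, 0.0), 1.0)  # clamp to [0, 1]

THRESHOLD = 0.80  # an arbitrary policy choice, not a technical fact

observation = {"vehicle_density": 0.9, "night_activity": 0.9, "signal_traffic": 0.65}
p = classify(observation)
print(f"estimated probability: {p:.2f}, flagged: {p >= THRESHOLD}")
# Prints an estimate of 0.85 and flags the observation; the remaining
# 0.15 is exactly the uncertainty the argument above is concerned with.
```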
This feature brings about a subtle yet profound transformation: as artificial intelligence systems become more widespread in military infrastructure, battlefield analysis is increasingly expressed in terms of probability. Threats are no longer merely identified; they are assessed, calculated, and weighted.
In other words, modern warfare is sometimes beginning to unfold in a new realm: that of probabilistic warfare.
This probabilistic approach is based on a core mechanism of machine learning systems: the detection of patterns in data. Algorithms learn to recognize correlations between certain behaviors, infrastructure, or information configurations and activities deemed suspicious or hostile. However, these correlations do not always correspond to causal relationships.
A model may link the presence of certain types of vehicles, nighttime activity in a building, or the location of infrastructure to military activities simply because these characteristics appeared frequently in the training data. In reality, however, these same indicators may have entirely civilian explanations.
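As a purely hypothetical illustration of that limitation, the following sketch "trains" a toy model by counting co-occurrences in invented data. The correlation it learns between nighttime activity and a hostile label carries over unchanged to a new observation whose explanation is entirely civilian, because the model only ever saw the correlation, never the cause.

```python
from collections import Counter

# Invented training examples: (night_activity, label). In this toy data,
# nighttime activity happens to co-occur with the "military" label.
training = [
    (1, "military"), (1, "military"), (1, "military"), (1, "civilian"),
    (0, "civilian"), (0, "civilian"), (0, "civilian"), (0, "military"),
]

# "Learning" here is just estimating P(label | night_activity) by counting.
counts = Counter(training)
night_total = sum(c for (night, _), c in counts.items() if night == 1)
p_military_given_night = counts[(1, "military")] / night_total
print(f"P(military | night activity) learned from toy data: {p_military_given_night:.2f}")

# A new building shows nighttime activity because it hosts a bakery, a
# civilian explanation absent from the training data. The toy model still
# assigns it the same elevated probability: it learned a correlation,
# not a cause.
```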
In a military context, this limitation is no small matter. For a misinterpreted correlation does not merely result in a technical error; it can lead to the misidentification of a target.
In addition to this vulnerability, algorithmic systems have another characteristic: their ability to significantly speed up decision-making.
Artificial intelligence technologies are designed to rapidly analyze vast amounts of data and generate operational recommendations in a matter of seconds. In certain strategic contexts, this speed is seen as a decisive advantage. The ability to identify a threat and respond more quickly than the adversary can indeed be a determining factor in a conflict. But this acceleration also carries a risk.
The faster a decision needs to be made, the harder it becomes to verify the interpretations generated by algorithmic systems. Human analysts have less time to examine the data, question the model’s assumptions, or cross-check the results against other sources of information.
Under these circumstances, an algorithmic error can spread more quickly through the decision-making chain. Speed, which is one of the main advantages of artificial intelligence, can then become a factor that amplifies risk.
This is where the central paradox of warfare in the age of artificial intelligence comes into play. On the one hand, algorithmic systems can help improve the precision of certain military operations. By analyzing vast amounts of data and detecting signals that are invisible to human analysts, they can help reduce certain misinterpretations and refine target selection.
On the other hand, these same systems can produce errors on a large scale when their models are poorly calibrated, when their training data is biased, or when their statistical correlations are misinterpreted. In other words, artificial intelligence can both reduce certain human errors and introduce new forms of algorithmic errors.
The question that arises is no longer merely a technological one. It becomes a profoundly ethical one.
For when the identification of a target is based on a probabilistic calculation—and when that calculation may be flawed—the responsibility for the decision does not disappear. It simply becomes more difficult to pinpoint within the chain of human and technical decisions that lead to the use of force.
And it is precisely this difficulty that poses one of the major challenges of warfare in the age of artificial intelligence.
IV. Algorithmic Error and the Diffusion of Liability
At the heart of the debate over artificial intelligence in warfare often lies a technical question: Are algorithms reliable enough to be used in military operations? But this question, in reality, is not the most important one. The real issue lies elsewhere. It concerns the very nature of the decision to kill.
Historically, even as military technologies became more sophisticated, this decision always involved direct human responsibility. An individual (an officer, a pilot, a commander) had to assess a situation and assume moral responsibility for the use of force.
Artificial intelligence is bringing about a new transformation: algorithmic mediation between our perception of the world and human decision-making.
The final decision remains strictly human. But the interpretation of reality that precedes it is increasingly generated by algorithmic systems. It is in this mediation that one of the major ethical challenges of warfare in the age of AI lies today.
In contemporary military architectures, the decision to strike may result from a complex chain of interactions among multiple actors and technical systems. Human analysts interpret intelligence data. Targeting algorithms identify correlations and propose potential targets. Decision-support systems prioritize operational objectives. Finally, military operators carry out the action. In this type of socio-technical environment, accountability does not disappear. But it can gradually become more diffuse.
The analyst may consider that they are relying on results generated by an algorithmic model. The decision-maker may believe that they are validating a recommendation from a technically reliable system. The algorithm’s designers may point out that their tool is merely an analytical instrument and not a decision-making system. Yet the final decision has indeed been made…
This phenomenon is often described in the literature on the ethics of autonomous systems as a fragmentation of the chain of responsibility, where an action results from the interaction of multiple human and technical actors rather than from a clearly identifiable decision.
However, a conceptual misunderstanding often arises in public debate. When an error occurs in a system powered by artificial intelligence, we sometimes hear people say that “AI killed someone.” This phrasing is, however, misleading.
Attributing moral responsibility for death to a machine is, in fact, a case of miscategorization. Artificial intelligence is not a moral agent capable of making decisions. It is a technical infrastructure that organizes, filters, and prioritizes the information on which humans base their decisions.
The relevant question, then, is not whether AI is responsible, but rather how responsibility is distributed within the system that uses it. Who designed the models? Who configured them? Who approved their operational use? Who interpreted their results? And, ultimately, who gave the order to strike, and with what level of acceptable uncertainty?
Moral philosophy describes this type of situation as the “many hands problem”: when an action results from a complex chain of actors and systems, it becomes difficult to clearly identify who should be held accountable for its consequences.
Yet this challenge can have significant political implications. For artificial intelligence creates a constant temptation: that of shifting responsibility by citing the technical complexity of the system—“it’s the algorithm”—while simultaneously claiming the strategic power that these technologies provide.
It is precisely this paradox that societies will have to confront as artificial intelligence becomes increasingly integrated into the structures of modern warfare.
As Peter Singer demonstrated in *Wired for War* (2009), modern military technologies tend to gradually distance human actors from the direct consequences of the violence they inflict. Artificial intelligence could further this trend by introducing a new form of distance: a cognitive gap between the human reality of conflict and the statistical representations that guide decision-making.
The question raised by war in the age of AI is therefore not merely whether machines can make military decisions. It is about what becomes of human responsibility when the interpretation of the world that precedes the decision to kill is generated by algorithmic systems.
This is likely where the most profound ethical issue of algorithmic warfare lies.
Conclusion: The Line Algorithms Should Never Cross
The fundamental issue with artificial intelligence in warfare may not be the power of the algorithms. Rather, it lies in our collective ability to manage their consequences.
Because artificial intelligence does not eliminate war. It transforms it.
It speeds up the decision-making process. It makes conflict more data-driven. And to interpret the battlefield, it relies on systems capable of analyzing vast amounts of data and detecting correlations invisible to the human eye.
In other words, warfare is becoming faster, more complex, and increasingly reliant on the algorithmic frameworks that now shape intelligence gathering and strategic analysis. But at the heart of this transformation lies a fundamental question.
Can the decision to kill become merely the result of a statistical calculation?
War has always been a profoundly human endeavor. It involves irreversible choices, moral responsibilities, and consequences that cannot be reduced to mere probabilities. As machines play an increasingly significant role in analyzing and organizing the battlefield, the real challenge is therefore not merely technological.
It is ethical.
The current debate often focuses on the legal frameworks that should govern the use of artificial intelligence in military systems. These questions are obviously necessary. But they may not be the most fundamental ones. The law can define rules, set limits, and assign responsibilities. It cannot, on its own, address the moral questions raised by the increasing delegation of military analysis to machines.
The question raised by war in the age of artificial intelligence goes beyond the mere legal compliance of systems. It concerns whether human societies wish to continue exercising, or instead to delegate, one of the gravest prerogatives there is: the power to decide on the use of violence.
In the 21st century, humanity will likely have to answer a question that goes far beyond the military sphere: to what extent are we willing to delegate our ability to interpret the world to the machines we have created, when what is at stake is the decision to destroy?
Because governing artificial intelligence in warfare does not simply mean mastering a technology. It means preserving, at the very heart of even the most advanced systems, human accountability for the consequences of violence.
References
- Marcus, G., & Davis, E. (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon.
- Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W.W. Norton.
- Singer, P. W. (2009). Wired for War: The Robotics Revolution and Conflict in the 21st Century. Penguin Press.
- Cummings, M. L. (2004). Automation bias in intelligent decision support systems. AIAA Conference.
- International Committee of the Red Cross. (2021). Autonomous Weapon Systems: Technical, Military, Legal, and Humanitarian Aspects.

