In 1863, Francis Lieber, the Prussian-American jurist commissioned by Abraham Lincoln to codify the laws of land warfare, wrote that no soldier may kill an enemy "who has laid down his arms." War, however brutal, must remain an act performed by a morally responsible agent who can account for what he has done and to whom it has been done. The Lieber Code was imperfect: its application was racially selective, and its humanitarian ambitions were frequently betrayed in practice. But its foundational premise survived two world wars, the drafting of the Geneva Conventions, and the development of every weapons system from the machine gun to the precision-guided munition: lethal force requires a human being who can be identified, interrogated, and held to account.
Today, AI-powered targeting systems fundamentally break this premise. For the first time in the history of codified warfare, the entity that determines who dies can neither be summoned before a court nor articulate its reasoning. What Lieber assumed to be the permanent condition of armed conflict – a moral agent at the end of the kill chain – has been quietly and deliberately engineered out of existence.
U.S. Central Command (CENTCOM) claimed that Operation Epic Fury, the official name of America’s war with Iran, struck more than a thousand targets within a single 24-hour window, and it credited AI-assisted systems for that operational tempo.
This new kind of warfare consolidates a dangerous shift in how decisions are made. The most critical choices about who dies, why they die, and what evidence justifies their death are now delegated to AI systems. These systems reason in ways that are structurally opaque, meaning the humans who are supposedly in charge cannot truly understand or explain those decisions.
But a more foundational problem exists. Algorithmic targeting systems do not merely create new risks of error; they produce a new legal environment — a battlefield in which the basis for any targeting decision is by design unknowable to the commanders who authorize it, the lawyers who assess it and the courts that might eventually adjudicate it. The algorithm becomes a generator of epistemic darkness that cannot be cross-examined.
The laws of armed conflict rest on a foundational assumption: a human decision-maker who attacks a target can articulate the factual basis for the belief that it was a legitimate military objective. This is the basis of proportionality analysis and of individual criminal responsibility. Remove the articulable basis and what remains is a structure of legal norms with no enforcement mechanism. Modern AI targeting systems fracture this foundation in a specific way. Deep learning architectures do not produce decisions accompanied by legible chains of inference. They produce outputs, such as probability scores and threat classifications, derived from the weighted interaction of hundreds of millions of parameters trained on opaque datasets. A 2023 study in Nature Machine Intelligence found that even model developers, given full access to architecture and weights, cannot reliably reconstruct the specific feature combinations that drove any individual classification decision. The officer who approves an AI-generated target is not verifying anything; they are ratifying a recommendation whose premises they cannot inspect.
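The structural point can be made concrete with a minimal sketch, written in Python with PyTorch and not drawn from any fielded system; the class name, layer sizes, and random "sensor features" below are invented for illustration. Everything such a model returns is a single score: the reasoning that produced it exists only as weighted arithmetic, with nothing a commander, lawyer, or court could inspect.

```python
# Hypothetical illustration only: a small neural "threat classifier" whose sole
# output is a probability. Real systems have vastly more parameters, but the
# structural property is the same: the output carries no chain of inference.
import torch
import torch.nn as nn

class ThreatClassifier(nn.Module):
    def __init__(self, n_features: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The entire "decision" is this weighted arithmetic; no premises,
        # no cited evidence, no articulable basis comes back with the score.
        return torch.sigmoid(self.net(x))

model = ThreatClassifier()
signals = torch.randn(1, 512)        # stand-in for fused sensor and intelligence features
score = model(signals).item()
print(f"threat score: {score:.2f}")  # a number, not a reason
```

The reviewing officer sees only that final number; asking the system "why" is not an operation it supports.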
This is what makes the procurement-to-deployment ethics gap so consequential. The U.S. Department of Defense’s 2020 AI Ethics Principles promise traceability and human judgment. What they do not explain is what happens when those requirements collide with operational tempo. A thousand strikes in 24 hours works out to one approval roughly every 86 seconds, around the clock; no human review process can meaningfully verify targets at that rate.
The Israeli case offers the most forensically documented example. Lavender, which generated lists of targets for Israeli strikes in Gaza, was not a rogue program. It was officially approved, and officers were authorized to accept up to twenty civilian deaths for each junior Hamas operative the system flagged. Approval processes were cut down to a few seconds of rubber-stamping, according to +972 Magazine. The humans “in the loop” were not exercising judgment; they were lending algorithmic decisions a veneer of authorization.
Also notable is the diffusion of this paradigm to client states. The literature on AI warfare focuses disproportionately on competition between the U.S. and China. What that focus obscures is how U.S. and NATO AI doctrine, with its underlying assumptions about acceptable civilian casualties, is exported to allied militaries under Foreign Military Sales agreements without the ethical oversight framework that purportedly constrains it at home. In the 2020 Nagorno-Karabakh conflict, Azerbaijan expanded its use of drones, deploying AI-enabled Turkish weapons. In Sudan’s civil war, a United Nations Panel of Experts has reported that some parties have used AI to assist targeting, which has at times contributed to attacks on civilians. Sudan has also told the U.N. Security Council that the Rapid Support Forces (RSF) obtained British-origin weapons through the UAE. Some of these attacks involved “signature strikes,” which select targets from behavioral patterns rather than confirmed individual identification.
All of this is aggravated by the training data problem. Claims that AI targeting is precise rest on the assumption that training data faithfully reflects the categories it purports to represent. In practice, these systems learn from records of previous operations, codifying the mistakes and strategic assumptions of earlier conflicts. In America’s post-9/11 campaigns in Pakistan and Yemen, signature strikes sometimes targeted people on the grounds that they were military-aged males who lived in a particular town. AI trained on those campaigns could repeat these excesses without a human pulling the trigger.
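A deliberately oversimplified sketch, with synthetic data and invented feature names rather than anything from a real targeting pipeline, shows the mechanism: if historical "combatant" labels were generated by a crude signature-strike heuristic, a model fitted to those labels learns the heuristic itself and then reissues it as a prediction.

```python
# Hypothetical illustration only: labels produced by an old heuristic
# ("military-aged male in a flagged district") are treated as ground truth.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000
age = rng.integers(10, 70, n)                # synthetic ages
in_flagged_district = rng.integers(0, 2, n)  # synthetic location flag

# The historical "label" encodes the heuristic, not anyone's actual status.
label = ((age >= 18) & (age <= 40) & (in_flagged_district == 1)).astype(int)

X = np.column_stack([age, in_flagged_district])
model = DecisionTreeClassifier().fit(X, label)

# A 25-year-old civilian who happens to live in the flagged district:
print(model.predict([[25, 1]]))  # [1] -- the encoded assumption fires again
```

The model has not learned who is a combatant; it has learned how earlier targeters labeled people. The error is made at training time, long before any particular strike.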
In 2015, the Stimson Center found that "signature strikes" conflated civilian and combatant conduct in certain cultural contexts. That conflation has now been encoded into targeting AI. Laurie Blank of Emory University calls this "temporal accountability collapse": classification errors happen during model training, years before a strike. By the time someone dies, the causal link between the original human mistake and the civilian death is already obscured.
The international community’s response has largely come from the group of governmental experts on lethal autonomous weapons convened under the Convention on Certain Conventional Weapons (CCW), which has met since 2014. Twelve years later, that group has still not produced a binding instrument, an agreed definition, or a compliance mechanism for AI-powered weapons. Many countries in the Global South want legal restrictions; Western powers prefer voluntary principles. The process consumes civil society’s energy while delivering no real governance.
Meaningful accountability would require four things: mandatory algorithmic disclosure for post-strike reviews; independent incident investigations with binding access, modeled on the ICAO’s aviation accident-investigation rules; treaty-level liability for AI-caused civilian harm; and export controls on AI targeting systems backed by genuine end-use monitoring. If these systems were as precise as their creators claim, deploying states would welcome such monitoring.

The deepest problem with algorithmic warfare is not that it makes killing more efficient. It is that it profoundly undermines the laws of war, making conflict opaque in its reasoning, devastating in its outcomes, and structurally unchallengeable. The algorithm does not become an alibi. It becomes the architecture within which the concept of an alibi itself becomes obsolete.