Israel’s recent campaign in Gaza marks a turning point in modern warfare: the fusion of counter-insurgency and artificial intelligence. Will Western states, with different traditions of counter-insurgency that emphasize legitimacy and population control, be influenced by Israel’s algorithmic model? This question carries high stakes. If Israel’s approach, which is characterized by automation, scale, and attrition, becomes a template for liberal democracies, it could normalize a form of warfare that values computational efficiency over human judgment.
To answer this question, it is important to understand how Israel has approached countering militant groups and how the war that followed the Oct. 7, 2023 attacks led to the application of more advanced technologies. For decades, Israel has managed Gaza and the West Bank through coercive stability: blockades, surveillance, and calibrated violence designed to deter rather than reconcile. Its security doctrine has prioritized physical and economic control and containment over resolution. The current war amplifies this philosophy through algorithmic systems that accelerate every phase of targeting and reduce deliberation to data.
To prevent automation from eroding the legal and ethical limits that human judgment imposes on warfare, I offer five concrete policy measures designed to discipline the technology: tightening export controls, enforcing auditability, re-anchoring proportionality, institutionalizing civilian harm assessments, and codifying international rules for AI in targeting.
Counter-Insurgency and Algorithmic Warfare
Israel’s counter-insurgency practice has long rested on the logic of domination. In the West Bank and Gaza, it fused military pressure, intelligence collection, and economic dependency to suppress threats while avoiding prolonged occupation. Past operations such as Cast Lead in 2008 and Protective Edge in 2014 relied on vast surveillance networks and heavy firepower to achieve deterrence at a high human cost. The pattern was clear: intelligence saturation and kinetic response substituted for political solutions.
The current conflict represents a qualitative leap. AI systems such as Lavender and Gospel, reportedly developed within Israeli military intelligence, including Unit 8200, now automate the identification, nomination, and timing of targets. These programs merge data from phone records, social networks, and intercepted communications to generate kill lists, sometimes with minimal human review. Algorithmic counter-insurgency redefines suspicion as a quantifiable risk score, prioritizing comprehensive surveillance coverage over precise analysis of intent. This philosophy erodes core principles of the law of armed conflict, undermining distinction by expanding the definition of a combatant through association and bending proportionality into a question of operational throughput.
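To make the mechanism concrete, the sketch below is a deliberately simplified, hypothetical illustration of how bulk metadata can be collapsed into a single suspicion score. The feature names, weights, and threshold are invented for illustration and do not describe Lavender’s or Gospel’s actual design.

```python
# Hypothetical sketch only: features, weights, and threshold are invented to
# illustrate association-based risk scoring, not to describe any fielded system.

FEATURE_WEIGHTS = {
    "calls_to_flagged_numbers": 0.4,   # communications metadata
    "shared_group_memberships": 0.3,   # social-network associations
    "device_colocation_events": 0.2,   # movement and SIM co-location
    "family_ties_to_members": 0.1,     # kinship, i.e., guilt by association
}

def suspicion_score(features: dict[str, float]) -> float:
    """Weighted sum of feature values, each normalized to the 0-1 range."""
    return sum(FEATURE_WEIGHTS[name] * features.get(name, 0.0)
               for name in FEATURE_WEIGHTS)

def nominate_for_review(features: dict[str, float], threshold: float = 0.6) -> bool:
    """A person is nominated once the score crosses an arbitrary threshold."""
    return suspicion_score(features) >= threshold

# Example: association and proximity alone push this profile over the threshold.
profile = {
    "calls_to_flagged_numbers": 0.8,
    "shared_group_memberships": 0.5,
    "device_colocation_events": 0.9,
    "family_ties_to_members": 1.0,
}
print(suspicion_score(profile), nominate_for_review(profile))  # 0.75 True
```

The point of the sketch is that nothing in such a pipeline measures intent: association and proximity do all the work, which is precisely the distinction problem described above.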
This shift enabled an unprecedented tempo of strikes, roughly 15,000 in the first 35 days, in a campaign that by October 2025 had killed more than 67,000 Palestinians, a significant proportion of them women and children, amid loosened rules of engagement and contested adherence to international humanitarian law. The exact figures will remain disputed; the scale of harm will not. While Israeli officials insist on human-in-the-loop protocols, reports from whistleblowers and investigations suggest that reviews frequently devolve into rubber-stamping, shifting moral responsibility to algorithms and raising concerns over errors of distinction and proportionality.
In this sense, algorithmic counter-insurgency does not replace Israel’s historical doctrine. It perfects it. The tools of occupation and deterrence have evolved into a self-learning infrastructure of perpetual surveillance and precision killing. Whether Western militaries, rooted in different ethical and strategic traditions, adopt this model will determine if Gaza is an exception or a prototype for the wars of the democratic world.
The Case for AI in Targeting
While algorithmic counter-insurgency introduces distinct risks, it is instructive to compare it with human-led targeting in historical operations, which frequently resulted in disproportionately high civilian casualties. For example, during Operation Cast Lead (2008 to 2009), roughly 1,400 Palestinians, of whom approximately 773 were civilians, were killed, compared with 13 Israelis. In Operation Protective Edge (2014), about 2,251 Palestinians, including 1,462 civilians, were killed alongside 72 Israeli fatalities. These figures suggest that, in these instances, human-led operations were accompanied by high civilian tolls, influenced by factors such as operational stress, retaliation motives, and expansive rules of engagement.
AI, in contrast, is not subject to human frailties such as emotion or exhaustion, and it can process vast amounts of data to make more precise identifications, potentially reducing errors if it is trained on accurate data and sound assumptions. Militarily, AI offers utility by enabling efficient targeting at scale, hitting more threats faster with fewer resources. Though absolute casualties are higher in the current conflict because of its intensity, the relative trade-offs suggest AI could yield better outcomes than purely manual approaches, provided rules of engagement are strict and assumptions sound.
Israel is not unique in pursuing this trajectory. The U.S. Project Maven (formally the Algorithmic Warfare Cross‑Functional Team), established in 2017, institutionalized the automation of object detection to accelerate targeting decisions, and recent reporting shows its expanding use across theatres. In China, the People’s Liberation Army is migrating from “informatization” to explicitly seeking algorithmic acceleration of the Observe-Orient-Decide-Act loop and AI‑enabled “kill webs.” In Russia’s war in Ukraine, analysts document a rapid evolution of sensor‑to‑shooter processes and growing use of drones and automation, even as integration and doctrinal challenges persist. The underlying theory across great power militaries is that faster fusion of more data yields better decisions. Gaza shows the missing piece: Speed without understanding is not judgment.
Proponents reply that these are decision-support tools, that commanders remain accountable, and that adversaries’ embedding in civilian areas leaves only tragic but lawful choices. Even if those claims are taken at face value, Gaza reveals three structural fragilities inherent to AI’s integration into targeting, though some overlap with historical problems now amplified by technology.
Compression Risk
As the interval between detection and strike collapses, human review devolves into checkbox compliance: confirmation rather than deliberation. This rapid approval process is a key concern; whistleblower accounts reported by +972 Magazine indicate approvals in as little as 20 seconds for AI-nominated targets, often without deep scrutiny of the underlying data or alternatives. Such speed can entrench errors, as humans under pressure default to trusting the machine, especially in high-tempo operations where delays risk missing opportunities. Expanded oversight, such as secondary reviews for high-risk targets or mandatory pauses before strikes on residential structures, could mitigate this, but current practice suggests automation is outpacing human capacity for meaningful intervention.
Scale Risk
Mass nomination normalizes lower evidentiary thresholds, especially when operational tempo is measured in effects delivered per day. However, mass nomination predates AI: Historical counter-insurgency often used broad lists based on human intelligence. AI does not inherently lower thresholds but can magnify poor inputs, such as permissive rules of engagement allowing 15-20 civilian deaths per junior militant or assumptions like “military-age male = militant,” which echo U.S. drone practices in Afghanistan (where civilian deaths were initially underreported) and broader counter-insurgency history. If bad assumptions are fed in, AI scales them efficiently — but with better data and models, it could raise standards by demanding more evidence before nomination.
Error Externalization
Model biases, skewed training data, and crude proxies such as “male=militant” generate false positives that are hard to contest inside wartime bureaucracies. The proxy problem is not unique to AI: It has been a human heuristic in conflicts from Vietnam to Afghanistan. Permissive rules of engagement, rather than AI itself, drive much of the harm — though AI’s scale enables more strikes under those rules, potentially increasing total casualties.
Each fragility magnifies civilian harm in dense, data‑saturated cities where modern counter-insurgency unfolds. These are well‑known tendencies in human‑automation interaction, and Gaza is their wartime instantiation.
Gaza also illustrates how this model, once perfected for war, migrates inward. The same architectures that link sensors to shooters in wartime can link cameras to detention squads in peacetime. States with dense closed-circuit television systems, International Mobile Subscriber Identity catchers, biometrics, and social media monitoring can repurpose machine-assisted analytics for domestic security through bulk suspicion scoring, network-based arrests, and predictive policing. Vendors already market such counter-insurgency AI platforms for internal use. Without governance and oversight, algorithmic counter-insurgency abroad risks mutating into algorithmic authoritarianism at home.
Will the West Adopt Israel’s Methods?
The question of proliferation to Western nations hinges on existing technological, military, and economic ties. Israel has long positioned itself as a global exporter of defense technologies, with systems like Rafael’s Fire Weaver, a sensor-to-shooter AI platform, already deployed in North America, Europe, and Asia. Close U.S.-Israel collaboration amplifies this potential: Israeli innovations, including AI-powered warfare solutions, have directly enhanced U.S. military capabilities from tank protection to intelligence analysis. For instance, U.S. tech giants such as Palantir, Amazon, Google, and Microsoft provide cloud and AI services that fuel Israel’s operations in Gaza, creating a feedback loop where battlefield-tested algorithms could refine Western systems. Palantir’s reported involvement in Lavender’s data mining for target selection is particularly notable, as the company also contracts with the U.S. and U.K. militaries for similar predictive analytics.
Evidence of early adoption is already visible. In the United States, elements of Israeli-developed AI and targeting software have been integrated into counter-terrorism systems, with analysts describing Gaza as a “laboratory” for algorithmic warfare whose lessons are shaping military practice in theatres such as Ukraine. Across the United Kingdom and Europe, Israeli surveillance technologies, most notably facial recognition systems first deployed in the West Bank, have influenced domestic policing and border security architectures. Human Rights Watch and U.N. agencies caution that the opacity of such systems risks unlawful civilian harm and could normalize mass surveillance in liberal democracies, gradually eroding civil liberties under the banner of security. Yet within defense and industry circles, advocates argue that these tools enhance precision, tempo, and decision speed, framing algorithmic targeting as a necessary adaptation to the data-saturated battlefields of modern warfare.
Proliferation appears likely, driven by geopolitical alignments and market incentives. Western militaries face similar pressures to achieve network-centric dominance against asymmetric threats, while Israel’s growing export success, even amid embargoes from states such as Spain, illustrates persistent demand for AI-enabled targeting and surveillance systems. Although ethical and legal debates may slow adoption, and the United Nations has called for new governance frameworks for military AI, shared intelligence ecosystems such as the Five Eyes facilitate technology transfer and doctrinal convergence. If left unchecked, this trajectory could entrench algorithmic decision-making in Western operations, amplifying risks of bias and civilian harm while diminishing human accountability. Public discussions, including those on platforms such as X, increasingly highlight the role of U.S. technology firms — specifically Amazon, Google, Microsoft, and Palantir — in enabling Israel’s AI systems and the potential feedback loop into domestic policing and defense applications. Ultimately, while not yet ubiquitous, the trend points toward diffusion through alliance networks and the perceived efficiency of AI in managing persistent low-intensity conflicts.
What Should Be Done?
The answer is not to ban AI from military use, nor to accept a glide path to automated war. The task is to discipline the technology with law, policy, and verifiable practice, starting with five concrete moves.
First, tighten export controls and end‑use conditions on targeting‑relevant AI. The United States, European Union, and their partners should update Wassenaar‑style controls and Missile Technology Control Regime‑like lists to capture target development software and sensor‑to‑shooter orchestration modules whose foreseeable uses include mass nomination of human targets from bulk personal data. Licenses should require auditable human‑in‑the‑loop thresholds, model‑risk documentation, and civilian harm mitigation plans. Credible evidence of violations, such as casualty audits or rule changes that permit high collateral thresholds, should trigger suspension. The early Gaza experience shows how quickly permissive configurations scale harm.
Second, mandate model and data pipeline auditability wherever AI influences lethal decisions. If an AI score nudges a strike, post hoc reconstructability must be guaranteed via structured logs (features used, confidence intervals, human overrides), bias testing (e.g., “male = militant” proxies), and independent red teaming under realistic adversarial conditions. International humanitarian law cannot be meaningfully applied to black box recommendations. Human Rights Watch’s 2024 Questions and Answers on Israel’s digital tools offers a baseline set of concerns, and policy should now operationalize those concerns into audit duties.
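To show what such structured logs could contain in practice, the following is a minimal sketch of an audit record, with hypothetical field names rather than any ministry’s actual schema, capturing the minimum that post hoc reconstruction would require: the model version, the inputs and confidence behind a nomination, and whether and how quickly a human reviewed it.

```python
# Hypothetical audit-record sketch: field names and structure are illustrative,
# not drawn from any fielded military system.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class TargetNominationRecord:
    nomination_id: str
    model_name: str
    model_version: str                 # needed to reproduce the recommendation later
    features_used: dict[str, float]    # the inputs the model actually saw
    model_confidence: float            # confidence at the time of nomination
    human_reviewer: str | None         # who reviewed it, if anyone
    human_decision: str                # "approved", "rejected", or "escalated"
    review_seconds: float              # how long the review actually took
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_log_line(self) -> str:
        """Serialize to a line suitable for an append-only, machine-auditable log."""
        return json.dumps(asdict(self), sort_keys=True)
```

An append-only log of records like this is what would allow an independent auditor to test for proxy bias and to measure whether review times are compatible with meaningful human judgment rather than 20-second rubber-stamping.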
Third, re‑anchor proportionality to validated military advantage, not throughput. Early phase rules accepting double‑digit civilian casualties to neutralize junior operatives invert proportionality’s burden. Defense ministries should codify default civilian harm ceilings that tighten when model confidence is low or data is stale, require independent legal sign‑off for strikes in residential structures, and bar model‑nominated strikes in family homes absent direct, real‑time hostile activity. Investigations by +972 and later wire services indicate a loosening of rules at the outset. The corrective is to make exceptions costly and traceable.
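As an illustration of what rules that tighten with model confidence and data freshness could look like, the sketch below encodes the logic in Python; every number in it is a placeholder for illustration, not a recommended threshold.

```python
# Hypothetical policy-logic sketch: all numbers are placeholders, not recommendations.

def civilian_harm_ceiling(model_confidence: float, intel_age_hours: float) -> int:
    """Maximum tolerable civilian-harm estimate before escalated legal review is
    required. The ceiling tightens as model confidence drops or intelligence ages."""
    ceiling = 3      # illustrative default for a validated, high-value target
    if model_confidence < 0.9:
        ceiling = 1
    if model_confidence < 0.7 or intel_age_hours > 24:
        ceiling = 0  # stale or low-confidence intelligence: no anticipated civilian harm
    return ceiling

def strike_permitted(model_confidence: float,
                     intel_age_hours: float,
                     expected_civilian_harm: int,
                     residential_structure: bool,
                     realtime_hostile_activity: bool) -> bool:
    """Bar model-nominated strikes on residential structures absent direct,
    real-time hostile activity; otherwise compare the estimate to the ceiling."""
    if residential_structure and not realtime_hostile_activity:
        return False
    return expected_civilian_harm <= civilian_harm_ceiling(
        model_confidence, intel_age_hours)
```

The values are arbitrary; the point is structural: the burden sits with the attacker by default, and exceptions must be escalated and logged rather than absorbed into a permissive baseline.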
Fourth, create a standing, independent civilian harm assessment mechanism for AI‑enabled operations. Borrow from U.N. methodologies and high‑quality civil society practice (e.g., Airwars) to field joint fact‑finding cells combining forensic imagery, blast analysis, ground casualty verification, and access to strike logs. Findings should drive immediate tactical adjustments (weapon selection, aimpoint policy) and strategic accountability (compensation, command review). With Gaza’s death toll reported in the tens of thousands, permanent assessment is not a luxury, but a condition for legitimacy.
Fifth, negotiate an AI in targeting addendum in existing humanitarian law forums. Rather than wait for a grand bargain on autonomy, states should use ongoing processes (e.g., Convention on Certain Conventional Weapons meetings, International Committee of the Red Cross expert meetings) to codify minimum obligations when AI contributes to lethal decision‑making. These should include preserved human judgment with meaningful review time, provenance constraints on bulk personal data, independent auditability, and public reporting on civilian harm metrics and remedial actions. The International Committee of the Red Cross’s position on autonomy provides a strong foundation for such an instrument.
Critics will object that none of this binds action in real wars. But policy changes default incentives. Defense bureaucracies respond to what is measured and mandated. If licensing requires audit trails, audit trails will exist. If legal sign‑off cannot be waived for residential strikes, commanders will plan around it. If civilian harm reviews trigger material consequences, targeting cells will adjust behaviors that generate excessive harm. Conversely, if battle‑tested AI is rewarded in the market without conditions, Gaza’s model will spread: first to other battlefields, then into domestic security.
There is a final strategic reason for discipline: Algorithmic counter-insurgency does not solve the problem it claims to solve. The premise is that more data and faster fusion will close the gap that enabled the Oct. 7 attacks. But surprise is political before it is computational: It emerges from organizational blind spots, adversary deception, and the false comfort of metrics that stand in for judgment. Project Maven’s promise of accelerated triage is real, and so too are Israel’s networked command and control gains. Yet Gaza suggests that acceleration can yield not foresight, but faster error. The tragedy is not only civilian death at staggering scale — it is that tools meant to restore intelligence superiority can corrode the moral and political foundations of security itself.
Israel’s campaign will shape procurement and doctrine for a decade. Allies and adversaries are watching. The question is no longer whether AI will infuse targeting — it already has — but whether democracies can harness it without hollowing out the ethical core of the laws of war or the strategic insight that comes from real understanding. The answer lies in policy choices available now: export controls that bite, audits that reveal, rules that restrain, investigations that correct, and international commitments that set a floor under human judgment. Gaza should not be a template. It should be the warning that finally forces governments to bring machine power under disciplined human control.
Whether AI becomes a force for restraint or escalation will depend less on code than on the courage of governments to keep moral judgment in the loop.
Muhanad Seloom, Ph.D., is an assistant professor of critical security studies at the Doha Institute for Graduate Studies and an honorary research fellow at the University of Exeter. He is the author of the forthcoming book Labelling Ethno-Political Groups as Terrorists: The Case of the PKK in Türkiye (Routledge, 2025) and “Veiled Intentions: Hamas’s Strategic Deception and Intelligence Success on 7 October 2023” in Intelligence and National Security (in-press, 2025).
Image: Midjourney