Keywords
Artificial Intelligence, Military Targeting, Autonomous Weapons, Israeli Defense Forces, Iranian Military Technology, International Humanitarian Law
The integration of artificial intelligence (AI) algorithms in military targeting systems represents one of the most significant technological shifts in modern warfare, fundamentally altering the speed, precision, and scale of military operations. In the context of the Iranian-Israeli conflict, both nations have emerged as early adopters of AI-enhanced military technologies, albeit through markedly different strategic approaches and technological implementations.
This study employs a mixed-method analytical approach that combines technical system assessment, comparative case study analysis, and evaluation of the international legal framework. Data collection utilized cross-checking methodology across multiple source types, including investigative journalism, open-source intelligence (OSINT) platforms, military data, and peer-reviewed studies to address three core research questions regarding technical architecture differences, operational effectiveness metrics, and compliance with International Humanitarian Law.
The analysis reveals substantial differences in both approach and capability. Israeli systems exhibit advanced data integration and surveillance architectures, emphasizing precision and human-machine collaboration through platforms such as the Iron Dome, Lavender, and various targeting support technologies. In contrast, Iranian systems prioritize cost-effectiveness and asymmetric capabilities, focusing on autonomous drone operations and cyber warfare integration, albeit with limited independently verified performance metrics. Notable divergences exist in operational doctrine, human oversight mechanisms, and legal compliance frameworks. Field results further indicate that the use of artificial intelligence technologies in warfare must be subject to international law if significant civilian casualties are to be avoided.
While both countries have incorporated AI technologies into military operations, critical gaps in international governance underscore the urgent need for robust regulatory mechanisms to oversee autonomous weapon systems. These findings contribute to academic understanding of military AI implementation patterns and inform policy debates on the ethical and legal dimensions of algorithmic warfare.
The integration of artificial intelligence in military targeting systems represents one of the most significant technological shifts in modern warfare, fundamentally altering the speed, precision, and scale of military operations (Gul et al., 2020). In the context of the Iranian–Israeli conflict, both nations have emerged as early adopters of AI-enhanced military technologies, albeit through markedly different strategic approaches and technological implementations. The Israeli Defense Forces (IDF) have developed sophisticated AI platforms capable of processing vast datasets to identify, track, and engage targets with unprecedented accuracy. Systems such as Lavender, which reportedly analyzed data on 2.3 million Gaza residents and generated up to 37,000 potential targets, exemplify a paradigm shift toward algorithmic warfare (Guardian, 2024). In contrast, Iran has pursued an asymmetric strategy, leveraging AI for drone swarm coordination, cyber warfare enhancement, and autonomous weapon systems designed to counter conventional military superiority (Gilli & Gilli, 2019). The expansion of AI-enabled military systems raises critical issues related to operational effectiveness, strategic stability, and compliance with international humanitarian law. Recent developments in the military capabilities of both Israel and Iran highlight a fundamental difference in approaches to integrating AI, the implications of which extend beyond regional security to global governance frameworks for autonomous weapons systems.
This technological arms race unfolds in a complex geopolitical landscape where both countries face existential security concerns, resource limitations, and international scrutiny. The integration of AI into targeting systems raises fundamental issues related to accuracy, responsibility, and compliance with international humanitarian law, especially concerning civilian safety and the need for human oversight in lethal decision-making processes. This paper aims to provide a detailed overview of how AI algorithms can be used in targeting systems within the Iranian-Israeli conflict, examining technical details, operational effectiveness, ethical and human implications, and strategic impacts of such applications. By analyzing existing data on system deployment and operational patterns, the study aims to address critical gaps in our understanding of AI's role in contemporary warfare and its broader impact on regional and global security.
RQ1: What are the main differences in the technical architecture and operational mechanisms of AI-enhanced targeting systems used in Israeli and Iranian military applications?
RQ2: How do the operational efficiency indicators and strategic impacts of these systems differ, and what are their implications for regional security and stability?
RQ3: To what extent do current military AI implementations adhere to international legal standards, particularly international humanitarian law and established governance frameworks?
The primary objectives of this research are to: (1) provide a systematic comparative analysis of documented AI military systems in Israeli and Iranian defense applications; (2) evaluate the technical, operational, and strategic characteristics of these systems within their respective doctrinal contexts; (3) assess compliance with international legal frameworks governing autonomous weapons; and (4) identify implications for regional security stability and global governance mechanisms.
This study employed a hybrid qualitative–quantitative analytical framework to investigate the role of artificial intelligence algorithms in military targeting systems within the Iranian–Israeli conflict. The methodology integrates technical system analysis, comparative architecture evaluation, and ethical–legal framework synthesis.

Academic research on military artificial intelligence has undergone a remarkable shift, moving from initial theoretical discussions about autonomous weapons to empirical studies of systems used in practice and their operational implications. In this context, King (2024) proposes a conceptual framework that distinguishes between the uses of AI in data analysis, decision support, and autonomous execution, noting that the predominant use of AI in modern military contexts is to enhance intelligence analysis capabilities rather than to automate combat decisions. This view challenges prevailing narratives centered on fully autonomous killing machines, instead highlighting AI's principal role in augmenting intelligence and accelerating targeting. The theoretical framework for understanding AI military applications encompasses three primary domains: data processing and analysis, decision support systems, and autonomous execution capabilities. Research indicates that militaries worldwide have prioritized the first two domains, utilizing AI to accelerate and enhance military targeting through improved data fusion and pattern recognition rather than complete automation of lethal decision-making.

The theoretical understanding of AI in the military field also includes three analytical axes, as pointed out by Joshi (2018): technical efficiency, which covers algorithm development and system integration; operational doctrine, which addresses human-machine interaction mechanisms and usage strategies; and strategic impact, which focuses on the role of AI in enhancing deterrence and controlling escalation dynamics. This model provides a basic analytical framework for comparing different systems (King, 2024). AI platforms were evaluated across six dimensions—data integration, algorithmic architecture, human oversight, deployment scale, operational accuracy, and ethical compliance. For this evaluation, technical architecture mappings, such as neural network structures, supervised learning models, and fuzzy logic controllers, were analyzed based on documented system schematics and open-source intelligence.
Empirical analysis of AI deployment in military contexts reveals diverse implementation strategies across different operational environments. King's research examined two significant cases: the British Army's COVID-19 testing operation in Liverpool (2020) and the U.S. Security Assistance Group-Ukraine operations (2022), demonstrating how AI systems process multi-source data to optimize targeting and resource allocation (Emegha & Okoli, 2025).
These case studies identified key principles for AI usage in the military sphere: the vital role of data quality and integration, the need to keep human agents involved in strategic decision-making, and the capacity of AI systems to greatly increase operational tempo and precision when properly employed within defined limits.
Recent scholarly analysis of Middle Eastern military AI development reveals distinct national approaches shaped by strategic priorities, technological capabilities, and resource constraints. Research on Israel’s AI military systems indicates a focus on defensive applications and precision targeting, driven by security imperatives and technological advantages (Journal, 2024). Conversely, Iranian military AI development follows an asymmetric strategy emphasizing cost-effective force multipliers and unconventional capabilities. Studies indicate Iran’s approach prioritizes “rapid deployment of novel technologies, often through unpredictable proxies” as a means of countering conventional military disadvantages (Rocks, 2021).
The debate about autonomous weapons systems in law and ethics has intensified with the advancement of artificial intelligence. The International Committee of the Red Cross has articulated specific concerns regarding human control requirements and accountability mechanisms in AI-enabled military systems (ICRC, 2019). It notes that legal obligations cannot be delegated to a machine, computer program, or weapons system, as none of these can bear a legal obligation to comply with international humanitarian law. Current research emphasizes the tension between the operational utility offered by AI systems and the requirement for a significant level of human control over lethal decision-making. This tension is especially sharp in high-tempo warfare, where the pace of AI-supported targeting may outstrip human capacity to monitor and intervene. Recent legal analyses highlight the necessity of effective human control as a fundamental regulatory principle; however, the lack of consensus on practical definitions and implementation standards hinders its realization (Roff, 2015). This regulatory ambiguity complicates the assessment of system compliance and obstructs efforts to develop appropriate governance frameworks.
This study adopts a comparative case study methodology that combines qualitative system analysis with quantitative performance assessment where verifiable data is available. The research design integrates three analytical approaches: (1) technical evaluation of the system based on documented capabilities and architectural specifications; (2) assessment of operational effectiveness using available performance metrics and deployment patterns; and (3) legal compliance analysis with international humanitarian law standards.
Data from conflict zones is highly biased and uncertain because of limited field access, the use of information for propaganda purposes, and the difficulty of establishing ground truth. Bias may appear in the selection of sources when reports adopt the narrative of one party over another, or when undocumented incidents are excluded from victim statistics. Governmental and non-governmental entities may also deliberately spread misleading information, leading to exaggeration or underestimation of the accuracy of AI systems and their impact. To mitigate these issues, this study relies on a cross-checking methodology across multiple types of sources, including investigative journalism, open-source intelligence (OSINT) platforms, military data, and peer-reviewed studies. Priority is given to reports supported by verifiable evidence such as photographs, leaked documents, and confirmed eyewitness accounts. Where sources contradicted one another, the contested nature of the information was noted, and no definitive judgments were made on the basis of a single source. Data collection drew on several source categories to make coverage as comprehensive as possible. The principal categories were as follows:
- Literature reviews and journal articles
- Official government and military documents
- Technically verifiable journalistic accounts
- Open-source intelligence analyses produced by reputable research institutions

The reliability of sources was assessed against the following main criteria:

- Trustworthiness of the publishing body and expertise of the author
- Corroboration across multiple independent sources
- Technical consistency with known capabilities
- Transparency concerning data limitations and uncertainties
Systems were evaluated across six analytical dimensions to enable systematic comparison:
1. Data integration capabilities: Assessment of multi-source data fusion, sensor integration, and real-time processing capacity.
2. Algorithmic architecture: Analysis of machine learning approaches, neural network structures, and decision-making frameworks.
3. Human oversight mechanisms: Evaluation of human-machine interaction protocols, authorization requirements, and override capabilities.
4. Deployment scale: Documentation of operational deployment extent, target generation capacity, and engagement frequency.
5. Operational accuracy: Analysis of reported performance metrics, error rates, and validation methodologies.
6. Ethical and legal compliance: Assessment against International Humanitarian Law principles including distinction, proportionality, and precaution.
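To make this rubric concrete, the sketch below shows one minimal way such an evaluation matrix could be encoded and aggregated. The ordinal 1-5 scale, the unweighted mean, and the example scores are illustrative assumptions, not values from the study's codebook.

```python
from dataclasses import dataclass

# The six analytical dimensions used in this study's comparison.
DIMENSIONS = [
    "data_integration", "algorithmic_architecture", "human_oversight",
    "deployment_scale", "operational_accuracy", "ethical_compliance",
]

@dataclass
class SystemAssessment:
    name: str
    scores: dict  # dimension -> assumed ordinal score, 1 (minimal) to 5 (advanced)

    def composite(self) -> float:
        """Unweighted mean across the six dimensions (an assumed aggregation)."""
        return sum(self.scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Hypothetical scores for illustration only; not the study's actual codings.
example = SystemAssessment("Example system", {
    "data_integration": 5, "algorithmic_architecture": 5,
    "human_oversight": 2, "deployment_scale": 5,
    "operational_accuracy": 3, "ethical_compliance": 2,
})
print(f"{example.name}: composite {example.composite():.2f}")
```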
The Lavender system represents one of the most advanced applications of AI-driven target identification in modern warfare. Developed by the IDF’s elite Unit 8200, Lavender employs machine learning algorithms to analyze mass surveillance data of Gaza’s approximately 2.3 million residents, generating individual threat scores ranging from 1 to 100 based on behavioral patterns, communication networks, and movement patterns (Shabbir, 2024).
Technical architecture: Lavender is built on a multi-layered data fusion system that integrates visual surveillance, cellular communications, social media activity, and battlefield intelligence. The system uses supervised learning to identify patterns associated with militant profiles, analyzing features indicative of combatant status such as WhatsApp group memberships, frequency of address changes, and communications with known targets (Shabbir, 2024). The machine learning model was trained on historical data from confirmed combatants and refined through iterative validation processes.
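As a purely illustrative sketch of this supervised-scoring paradigm, the following code trains a logistic-regression classifier on invented binary behavioral features and rescales its output to a 1-100 threat score. The features, data, and model choice are assumptions for exposition; nothing here reflects the actual Lavender implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented binary features standing in for the kinds of signals described
# in public reporting (group memberships, address changes, contacts with
# known targets). The data below is random, for illustration only.
rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(500, 3))        # 500 profiles, 3 features
y_train = (X_train.sum(axis=1) >= 2).astype(int)   # toy "combatant" label

model = LogisticRegression().fit(X_train, y_train)

def threat_score(features: np.ndarray) -> int:
    """Rescale the classifier's probability to a 1-100 score."""
    p = model.predict_proba(features.reshape(1, -1))[0, 1]
    return int(round(1 + 99 * p))

print(threat_score(np.array([1, 1, 0])))  # a profile with two active features
```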
Operational capabilities: During peak operational periods, Lavender generated approximately 37,000 potential human targets. According to IDF sources, the system achieved approximately 90% accuracy in target identification. System outputs feed directly into operational planning, with human operators spending an average of 20 seconds per target verification, primarily to confirm gender rather than to scrutinize targeting rationale or evaluate civilian risk factors.
Performance metrics: Unit 8200 internal validation studies suggested that Lavender could reach approximately 90% accuracy in detecting valid military targets, but this figure has been challenged by external analysts, who point to the system's reliance on statistical correlation rather than verified intelligence. Applied to 37,000 targets, a 10% error rate implies the possible misidentification of thousands of people. One clear indication of the inaccuracy of AI targeting is the high number of civilian casualties in Gaza. Moreover, the system does not account for whether targets are among civilians or near children at the time of engagement, which presents serious ethical and legal concerns regarding compliance with the distinction and proportionality principles of International Humanitarian Law. Details regarding internal validation methodology remain limited, and whether actual operational accuracy matched reported figures remains unclear. Investigative reporting indicates that many targets were struck while at home (Patel, 2024). An automated tool called "Where's Daddy?" was used in conjunction with Lavender to locate targets in family residences. According to anonymous Israeli intelligence officers, the IDF frequently chose to strike targets in their homes due to operational ease, as systems were optimized for such engagements.
4.1.2 Besorah: Accelerated target production
The Besorah system serves as a complementary platform to Lavender, focusing on infrastructure and high-value target identification through automated analysis of drone imagery, intercepted communications, and behavioral pattern recognition. According to technical documentation, Besorah can generate up to 100 prioritized targets daily, utilizing advanced computing and machine learning algorithms to evaluate target significance and strike feasibility (Gul et al., 2020).
Algorithmic approach: Besorah employs a traffic-light classification system (red/yellow/green) to prioritize targets based on military value, civilian risk assessment, and operational feasibility. The system integrates real-time intelligence feeds with historical pattern analysis to predict target behavior and optimal engagement windows. This predictive capability enables proactive targeting while theoretically maintaining consideration of civilian protection requirements.
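Read minimally, the traffic-light logic could amount to threshold rules over the three reported inputs. The sketch below is a hypothetical rendering; the thresholds and the semantics assigned to each color are assumptions, not documented Besorah parameters.

```python
def classify_target(military_value: float,
                    civilian_risk: float,
                    feasibility: float) -> str:
    """Toy red/yellow/green prioritization over three 0-1 scaled inputs.

    Color semantics and thresholds are assumptions for illustration:
    green = cleared for prioritization, yellow = defer for human review,
    red = do not engage under current conditions.
    """
    if civilian_risk > 0.6 or feasibility < 0.3:
        return "red"
    if military_value > 0.7 and civilian_risk < 0.3:
        return "green"
    return "yellow"

print(classify_target(military_value=0.9, civilian_risk=0.2, feasibility=0.8))
# -> "green"
```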
Integration with human oversight: Unlike fully autonomous systems, Besorah maintains human command authority for final targeting decisions. Military commanders review system recommendations and must explicitly authorize strikes. However, compressed decision timelines—often minutes rather than hours—limit the depth of human analysis possible. This temporal constraint raises questions about the meaningfulness of human oversight when system-generated recommendations arrive at rates exceeding thorough human evaluation capacity.
4.1.3 SMASH family: Precision engagement systems
The SMASH (Smart Shooter) family represents AI-enhanced fire control systems designed for precision engagement across multiple platforms. These systems combine computer vision, predictive algorithms, and automated fire control to enhance accuracy and reduce collateral damage in complex operational environments (Soldier Systems Daily, n.d.).
Technical components: SMASH systems integrate several AI-driven capabilities:
• Target acquisition algorithm: Employs computer vision to identify and classify potential targets in real-time, distinguishing between combatants, civilians, and objects.
• Tracking algorithm: Continuously computes target trajectory while accounting for environmental variables including wind, humidity, and target velocity.
• Lock-and-fire control: Enables autonomous firing when target engagement probability exceeds predetermined thresholds.
Table 1 presents the technical specifications and key features of the SMASH family components.
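At its core, the lock-and-fire control described above is a gating rule: release is permitted only when a computed engagement probability clears a preset threshold. The sketch below, including the first-order lead calculation and the 0.95 threshold, is an assumption-laden illustration rather than documented SMASH logic.

```python
import math

def predicted_intercept(target_pos, target_vel, round_speed):
    """First-order lead: aim where the target will be when the round arrives.

    Ignores wind, drag, and bullet drop, which real fire control corrects;
    this is a deliberately simplified illustration.
    """
    time_of_flight = math.hypot(*target_pos) / round_speed
    return (target_pos[0] + target_vel[0] * time_of_flight,
            target_pos[1] + target_vel[1] * time_of_flight)

def release_authorized(hit_probability: float,
                       operator_lock_confirmed: bool,
                       threshold: float = 0.95) -> bool:
    """Gate firing on an assumed probability threshold plus an operator lock."""
    return operator_lock_confirmed and hit_probability >= threshold

print(predicted_intercept((100.0, 20.0), (-3.0, 0.0), 800.0))
print(release_authorized(0.97, operator_lock_confirmed=True))  # -> True
```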
4.1.4 Iron Dome: Predictive defense architecture
The Iron Dome system exemplifies defensive AI application, employing artificial intelligence to predict incoming projectile trajectories and calculate optimal interception parameters. Radar data is analyzed by machine learning programs to distinguish between threatening and benign projectiles, with engagement resources allocated only to genuine threats.
Algorithmic components: The following components are essential for enhancing threat response capabilities in modern autonomous defense systems:
1. Trajectory prediction models: Neural networks calculate projectile paths by integrating ballistic properties with dynamic environmental conditions
2. Threat assessment algorithms: AI-powered systems analyze impact probabilities and expected damage severity to prioritize engagement decisions
3. Interception optimization: Real-time algorithms determine optimal timing and targeting strategies for interceptor deployment, maximizing accuracy while minimizing collateral risks
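Combining the three components above, a toy version of the selective-engagement pipeline might predict a ballistic impact point and commit an interceptor only when that point falls near a defended area. The drag-free, one-dimensional ballistics and the 500 m defended radius below are simplifying assumptions, not Iron Dome parameters.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_point(x0: float, y0: float, vx: float, vy: float) -> float:
    """Drag-free ballistic impact: solve y0 + vy*t - 0.5*G*t^2 = 0 for t > 0."""
    t = (vy + math.sqrt(vy ** 2 + 2 * G * y0)) / G
    return x0 + vx * t

def should_intercept(x0, y0, vx, vy, defended_zones, radius=500.0):
    """Commit an interceptor only if the predicted impact lands within
    `radius` metres of a defended zone (illustrative threshold)."""
    x_hit = impact_point(x0, y0, vx, vy)
    return any(abs(x_hit - zone_x) <= radius for zone_x in defended_zones)

# A rocket tracked at 1 km altitude heading toward a town centred at x = 12 km.
print(should_intercept(0.0, 1000.0, 600.0, 50.0, defended_zones=[12_000.0]))
# -> True: predicted impact ~12.2 km, inside the defended radius.
```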
Performance metrics: The Israeli military reports Iron Dome achieves approximately 90% interception rate for short-range rockets and mortars. The AI interface reportedly reduces time required for threat assessment and engagement by approximately 40% compared to previous systems, enabling faster response to evolving threats. The system’s selective engagement algorithm reduces operational costs by avoiding interception of projectiles predicted to land in uninhabited areas (Ramachandra, 2023).
System Limitations: Despite high performance, Iron Dome faces several constraints. Each interceptor missile costs approximately $40,000-$50,000, making sustained use financially burdensome during prolonged or frequent attacks. While highly optimized for short-range threats, the system demonstrates reduced effectiveness against long-range ballistic missiles or complex multi-directional assault patterns. The system can be overwhelmed by high-volume simultaneous rocket launches, potentially reducing interception coverage and effectiveness. The October 2023 Hamas assault and subsequent Iranian-Israeli exchanges demonstrated that Iron Dome's interception range and effectiveness can be degraded under sustained, high-volume attack conditions (News, n.d.).
4.1.5 Specialized systems: IRIS and Goshawk
Recent advancements in autonomous robotics have led to specialized systems for reconnaissance and counter-drone operations:
IRIS reconnaissance robot: Features AI-driven surveillance capabilities in confined spaces through computer vision algorithms that identify personnel, objects, and explosive devices. The system is compact and lightweight (1.85 kg, 20 × 23 × 11 cm dimensions), with 200 m communication range, 5 km/h maximum speed, and 1 kg payload capacity. IRIS enables reconnaissance in urban environments where human entry presents elevated risk.
Goshawk autonomous interceptor: Serves as a fully autonomous counter-unmanned aerial vehicle (UAV) system. It includes day and night detection capabilities, non-lethal drone capture methods, and smart nest recovery systems, allowing interception without collateral damage or continuous human involvement. These platforms collectively indicate evolution toward integrated, precision-oriented robotic systems emphasizing scalability and operational adaptability.
4.2.1 Strategic asymmetric approach
Iran’s military AI development follows a fundamentally different paradigm from Israel’s precision-focused systems, emphasizing asymmetric capabilities designed to counter conventional military superiority through cost-effective force multipliers and unconventional deployment methods. This strategy reflects Iran’s resource constraints, technological limitations, and strategic doctrine of “forward defense” through proxy networks (Rocks, 2021). Figure 1 illustrates the organizational structure of Iranian cyber and information operations, demonstrating integration of AI capabilities across multiple operational domains.
Iranian military authorities explicitly describe AI as a force multiplier, analogous to their proxy network approach throughout the Middle East. Consistent with strategic analysis, Iran has mobilized significant numbers of highly educated computer engineers to pursue AI capabilities despite hardware and infrastructure limitations, concentrating on rapid deployment of basic autonomous capabilities rather than complex precision platforms (Mieses, Noelle Kerr, 2024).
4.2.2 Autonomous drone systems
Suicide drone technology: Iran has deployed suicide drones utilizing artificial intelligence for autonomous target detection and engagement through image processing capabilities and pattern recognition algorithms. The Iranian Army Ground Forces and Islamic Revolutionary Guard Corps (IRGC) have identified these systems as primary components of Iran’s autonomous weapons arsenal.
Swarm Intelligence: Iran has developed autonomous drone swarms featuring mother platforms controlling multiple smaller suicide drones. This system represents Iranian advancement in distributed autonomous control and coordination, though capabilities remain incompletely verified through independent technical assessment. The swarm architecture enables coordinated attacks capable of overwhelming air defenses through numerical advantage and distributed attack vectors, complicating defensive responses (Ramachandra, 2023). Table 2 summarizes the types of autonomous drone systems deployed by Iran, their AI integration levels, and operational status.
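Leader-follower flocking, a standard textbook scheme, gives a sense of what "mother platform" coordination can mean algorithmically. The sketch below is a generic illustration of that scheme, not a reconstruction of any Iranian (or other) fielded system.

```python
import numpy as np

def follower_step(positions, leader_pos, spacing=50.0, gain=0.1):
    """One update of a toy leader-follower swarm.

    Each unit moves toward the leader while repelling neighbours closer
    than `spacing`; gains and distances are arbitrary illustrative values.
    """
    updated = positions.copy()
    for i, p in enumerate(positions):
        move = gain * (leader_pos - p)            # attraction to leader
        for j, q in enumerate(positions):
            if i != j:
                d = np.linalg.norm(p - q)
                if 0 < d < spacing:
                    move += gain * (p - q) / d    # short-range repulsion
        updated[i] = p + move
    return updated

swarm = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
leader = np.array([100.0, 100.0])
for _ in range(50):
    swarm = follower_step(swarm, leader)
print(swarm.round(1))  # units converge near the leader while keeping spacing
```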
Figure 2 provides a visual representation of Iranian AI drone system capabilities across different operational parameters.
Comparative assessment of Iranian AI-enabled drone systems across autonomy level, range, payload capacity, AI sophistication, and operational readiness. Source: Author’s analysis based on available technical intelligence.
4.2.3 AI-Enhanced missile systems
The Mobin cruise missile system exemplifies Iran’s incorporation of artificial intelligence into long-range precision strike capabilities. This system utilizes Digital Scene Matching Area Correlation (DSMAC) guidance, an AI technology initially developed in the 1980s and subsequently refined for modern applications. This advancement facilitates autonomous navigation to predetermined targets without relying on external guidance systems, providing resilience against electronic warfare and GPS denial (Rocks, 2021). DSMAC technology enables the missile to compare real-time terrain imagery with stored reference images, making autonomous course corrections to reach designated targets. While representing relatively mature technology, DSMAC integration demonstrates Iran’s capacity to adapt existing AI approaches to overcome technological and resource constraints.
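The underlying scene-matching idea can be illustrated with plain normalized cross-correlation: slide the stored reference patch across the live image and take the best-scoring offset as the position fix. The synthetic imagery and brute-force search below are a textbook illustration, not guidance software.

```python
import numpy as np

def best_match(live: np.ndarray, reference: np.ndarray):
    """Locate `reference` inside `live` via normalized cross-correlation."""
    rh, rw = reference.shape
    ref = (reference - reference.mean()) / reference.std()
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(live.shape[0] - rh + 1):
        for x in range(live.shape[1] - rw + 1):
            window = live[y:y + rh, x:x + rw]
            window = (window - window.mean()) / (window.std() + 1e-9)
            score = (window * ref).mean()
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

rng = np.random.default_rng(1)
scene = rng.random((60, 60))      # synthetic "live" terrain image
patch = scene[20:30, 35:45]       # stored reference imagery
print(best_match(scene, patch))   # -> ((20, 35), ~1.0): correct position fix
```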
4.2.4 Cyber warfare integration
Iran has significantly integrated artificial intelligence into its cyber warfare capabilities, representing one of the most advanced applications of AI in its military arsenal. According to security analysis, Iran’s cyber operations exhibit “increasing sophistication and potentially more dangerous cyber capabilities” through AI integration (Emegha & Okoli, 2025).
AI-Enhanced cyber operations: Iranian cyber capabilities encompass advanced digital techniques designed to influence, deceive, and manipulate information spaces:
• Deepfake content creation: Production of highly realistic synthetic media for propaganda and influence operations
• Sophisticated phishing campaigns: AI-powered targeting of strategically significant individuals and institutions with personalized social engineering attacks
• Fake news platforms: Establishment of AI-curated disinformation networks targeting specific population segments
• AI-powered personas: Deployment of autonomous virtual characters conducting psychological operations and shaping public opinion trends on social media platforms (Gen et al., 2025)
These capabilities represent significant force multiplication in information warfare, enabling Iran to project influence and conduct operations despite conventional military disadvantages.
Despite technological developments, Iranian military AI capabilities remain constrained by structural and technical factors limiting system sophistication and operational effectiveness:
Hardware constraints: International sanctions severely limit Iran's access to advanced semiconductors and high-performance computing hardware, adversely affecting the sophistication and precision of AI processing. While advanced military systems employ specialized processors, Iranian systems typically utilize commercial-grade components, resulting in reduced operational efficiency. Sanctions have also resulted in the absence of a unified monitoring and sensing infrastructure, hindering the provision of sufficient and diverse training data for AI applications. This negatively affects the efficiency of Iranian AI-supported targeting systems and their adaptability to varied operational contexts (Khorrami, 2024).
In contrast, Israeli systems benefit from extensive surveillance networks and sophisticated data environments, receiving technical support from leading technology companies, particularly in communications surveillance. These capabilities significantly enable collection of actionable intelligence on specific targets, enhancing effectiveness of AI-driven military operations (Various, n.d.).
Integration issues: Institutional fragmentation within the Iranian military organization—divided among the Revolutionary Guard, the regular army, and proxy forces—generates substantial challenges in AI system harmonization and data exchange across entities. This fragmentation inhibits the systems' capacity to operate in an integrated, efficient manner, reducing overall operational effectiveness and limiting strategic coordination capabilities.
4.4.1 Technological sophistication comparison
Figure 3 presents a comprehensive comparison of technological capabilities between Israeli and Iranian AI military systems across multiple dimensions.
Multi-dimensional comparison illustrating relative technological sophistication between Israeli and Iranian AI military systems across data processing, algorithmic complexity, sensor integration, human oversight, deployment scale, and operational accuracy. Source: Author’s comparative analysis based on documented system specifications.
The comparative analysis reveals substantial disparities in technological sophistication.
4.4.2 Strategic implementation approaches
Table 3 outlines fundamental differences between Israeli and Iranian approaches to AI military implementation.
Figure 4 compares operational effectiveness metrics between Israeli and Iranian AI targeting systems, highlighting performance differences in accuracy, response time, and deployment scale.
The technical application of AI in military targeting systems relies on several core algorithmic approaches, each presenting distinct advantages and limitations in operational contexts. Analysis of documented systems reveals three primary technical paradigms deployed across both Israeli and Iranian implementations.
5.1.1 Machine learning implementation strategies
Both countries employ supervised learning algorithms that learn from historical engagement data and target characteristics, though the scope and sophistication of implementations differ substantially.
Israeli systems (Lavender platform): Israeli systems demonstrate advanced feature extraction capacity across multi-modal data sources including visual reconnaissance, communication intercepts, and behavioral pattern analysis. System architecture enables real-time processing of diverse data streams, employing sophisticated correlation engines to provide accurate target identification and threat assessment. This multi-modal approach enables cross-validation of target identification across sensor types, increasing accuracy and reducing false positive rates.
The Lavender system specifically employs deep learning architectures trained on extensive datasets compiled from surveillance operations, creating predictive models that identify combatant-associated behavioral patterns. The system’s reliance on correlation rather than causation, however, introduces risks of misidentification when civilians exhibit behavioral patterns statistically associated with combatants.
Iranian systems: Iranian applications focus primarily on computer vision-based target recognition and autonomous navigation. The emphasis on image processing reflects reliance on open-source computer vision libraries and the necessity of operating with limited sensor diversity. To overcome sensor limitations, Iranian systems employ enhanced image processing algorithms and pattern recognition capabilities, optimized for the specific operational environments in which the systems are deployed.
Iranian autonomous drones employ convolutional neural networks for real-time target identification from aerial imagery, with training datasets developed through operational experience and simulation. The systems demonstrate adaptation to resource constraints through algorithmic efficiency prioritization over computational complexity.
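For concreteness, the PyTorch sketch below shows the kind of deliberately small convolutional classifier that such resource-constrained designs are described as favoring. The architecture, input size, and two-class output are assumptions chosen for compactness, not a documented Iranian design.

```python
import torch
import torch.nn as nn

class LightweightAerialCNN(nn.Module):
    """Small CNN mapping 64x64 grayscale aerial crops to two assumed classes.

    Deliberately tiny, reflecting the commercial-grade hardware constraint
    discussed above; all dimensions are illustrative assumptions.
    """
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 16 * 16, n_classes)  # 16 ch x 16x16 map

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = LightweightAerialCNN()
dummy = torch.randn(1, 1, 64, 64)    # one synthetic image crop
print(model(dummy).softmax(dim=1))   # class probabilities (untrained)
```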
5.1.2 Neural network architecture and implementation
The Iron Dome system exemplifies advanced neural network applications in defensive military AI, demonstrating sophisticated real-time processing capabilities essential for ballistic threat interception. Table 4 provides detailed comparative analysis of AI targeting architectures, contrasting Israeli and Iranian implementations across critical technical dimensions.
Israeli neural network implementation: Iron Dome employs specialized convolutional neural networks optimized for rapid trajectory prediction from radar data. The network architecture processes radar returns in real-time, predicting impact points and calculating optimal interception trajectories within milliseconds. Training data encompasses thousands of actual engagement scenarios supplemented by extensive simulation data, enabling high prediction accuracy across diverse threat profiles.
Iranian neural network approaches: Iranian systems utilize simplified neural network architectures optimized for specific threat scenarios and computational resource constraints. Training data limitations are addressed through synthetic data generation and transfer learning approaches, leveraging publicly available datasets and simulation environments to develop baseline capabilities subsequently refined through operational experience.
5.1.3 Fuzzy logic systems and decision-making architecture
Fuzzy logic controllers provide advanced capabilities for managing uncertainty and incomplete information in military targeting systems. Israeli platforms particularly excel in this domain, implementing fuzzy logic engines for target prioritization and engagement decisions under uncertainty conditions.
Israeli fuzzy logic implementation: Israeli platforms employ advanced fuzzy logic controllers optimized for urban warfare environments where civilian-military distinction presents significant challenges. These systems enable nuanced decision-making when faced with incomplete sensor data, conflicting information, or ambiguous scenarios. The fuzzy logic model enables graduated responses rather than binary choices, which proves critical in urban settings with substantial civilian-military integration where binary classification systems would fail.
Fuzzy logic enables the system to assign probability scores to target classifications rather than definitive categorizations, incorporating uncertainty into decision recommendations presented to human operators. This approach theoretically enhances civilian protection by flagging ambiguous cases for heightened human scrutiny.
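One way to read this graduated-response idea is through overlapping triangular membership functions that map a raw confidence value onto soft categories, flagging cases without a dominant category for human review. The breakpoints and the 0.7 review threshold below are illustrative assumptions.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b over support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(confidence: float):
    """Graded target classification instead of a binary label.

    Breakpoints are illustrative assumptions, not fielded parameters.
    """
    memberships = {
        "civilian":  tri(confidence, -0.01, 0.0, 0.45),
        "ambiguous": tri(confidence, 0.25, 0.5, 0.75),
        "combatant": tri(confidence, 0.55, 1.0, 1.01),
    }
    # Flag for heightened human scrutiny when no category dominates.
    needs_review = max(memberships.values()) < 0.7
    return memberships, needs_review

print(classify(0.6))  # overlapping memberships -> flagged for review
```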
Iranian adaptive fuzzy logic: Iranian systems employ simplified fuzzy logic approaches oriented toward operational adaptability within resource-constrained environments. The emphasis is on resilient decision-making algorithms that function effectively with limited computational resources while maintaining operational effectiveness against diverse threat profiles. Iranian implementations prioritize robustness and operational continuity over sophisticated uncertainty quantification.
As autonomy in weapon systems increases, the temporal and spatial distance between human operators and selected targets expands, effectively delegating life-and-death decisions to algorithms. This raises fundamental ethical questions about human responsibility in the use of lethal force and the moral dimensions of algorithmic warfare (Mauri, 2022).
Distinction principle: International Humanitarian Law requires clear distinction between combatants and civilians. AI systems must not undermine commanders’ capacity to make required legal determinations regarding target legitimacy. The AI-enhanced Lavender system sparked controversy due to reports that some military leaders treated civilian deaths as acceptable statistics rather than requiring case-by-case proportionality assessments.
According to reports, operational protocols permitted killing up to 100 civilians in bombing raids targeting senior Hamas or Islamic Jihad officials. The system was designed to conduct attacks when targets were inside homes at night, increasing the likelihood of successful engagement but substantially increasing the risk of killing family members and civilian neighbors (G. Pascual, 2024). This approach raises serious questions about compliance with proportionality requirements under International Humanitarian Law.
A prominent example is the 2025 killing of journalist Anas al-Sharif in his tent. The Israeli army justified the strike on his shelter by accusing him of Hamas affiliation; five other journalists were killed in the same strike. This incident exemplifies concerns regarding the protection of civilians and the proper application of distinction principles in AI-assisted targeting.
Table 5 examines operational integration approaches of AI targeting systems, comparing Israeli and Iranian architectures in command structure, communication systems, and decision authority.
Proportionality principle: International Humanitarian Law requires that anticipated civilian harm not be excessive relative to concrete and direct military advantage expected from an attack. AI systems must enable commanders to make proportionality assessments, yet compressed decision timelines and high target generation rates may undermine meaningful proportionality analysis. When human operators spend only 20 seconds verifying targets, as reported for Lavender system operations, the capacity for meaningful proportionality assessment becomes questionable.
Precautionary principle: Parties to conflict must take all feasible precautions to minimize civilian harm. This includes verification that targets are military objectives, selection of means and methods minimizing civilian harm, and provision of advance warning when circumstances permit. AI systems that prioritize operational efficiency over verification thoroughness may conflict with precautionary obligations, particularly when strikes are conducted in civilian-dense environments or against targets in family residences.
Command responsibility: Military commanders remain legally accountable for targeting decisions regardless of AI system involvement. However, the complexity and opacity of AI algorithms may prevent commanders from fully comprehending and validating system recommendations, potentially undermining effective command and control. When AI systems generate targets at rates exceeding human capacity for thorough review, command responsibility becomes difficult to exercise meaningfully.
Algorithmic transparency: The “black box” problem of many machine learning systems creates accountability and legal scrutiny challenges. Targeting systems must be capable of legal review consistent with Article 36 of Additional Protocol I to the Geneva Conventions, which requires legal review of new weapons, means, and methods of warfare. When algorithms cannot explain their targeting rationale in human-interpretable terms, such legal review becomes problematic. Table 6 summarizes international legal and ethical frameworks governing AI-based weapon systems, including compliance requirements for military applications.
Criminal liability framework: Under international criminal law, programmers may be held criminally liable when designing systems intended to violate International Humanitarian Law, and commanders may face criminal liability when deploying systems incapable of lawful operation. However, proving intent and establishing causal links between algorithmic decisions and unlawful outcomes presents significant evidentiary challenges. The distributed nature of AI system development—involving multiple programmers, data scientists, and military planners—further complicates attribution of individual criminal responsibility, as shown in Figure 5.
Positive target identification: Effective civilian protection requires reliable identification of lawful military targets and distinction between military objectives and protected persons and objects. Current AI systems demonstrate varying capabilities in this domain, with reported error rates of 5-10% raising concerns about accidental civilian casualties. Even systems achieving 90% accuracy imply substantial misidentification risks when applied at scale.
Temporal and spatial contexts: AI systems must account for warfare’s dynamic nature, which introduces factors affecting target legitimacy such as combatants who have laid down arms, hors de combat status, or presence in protected locations like hospitals. Static algorithmic models may not adequately reflect these dynamic legal and ethical considerations. Training data reflecting historical patterns may not capture contextual factors determining lawfulness of engagement in specific circumstances.
Collateral damage estimation: Targeting systems incorporate predictive models attempting to estimate civilian casualties from prospective attacks. However, such models rely on assumptions regarding building occupancy, civilian behavior patterns, and weapon effects that may not accurately reflect operational realities, potentially producing inaccurate risk assessments. The "Where's Daddy?" system's focus on striking targets in family homes exemplifies prioritization of operational convenience over civilian protection, despite the predictable presence of family members and neighbors (Israel-Gaza war death toll: Live tracker, 2025).
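This assumption-sensitivity critique can be made concrete with a toy expected-harm calculation: because the estimate scales linearly with assumed occupancy, an occupancy figure that is wrong by a factor of two doubles the predicted harm. Every parameter below is invented for illustration.

```python
def expected_casualties(occupants: float,
                        lethal_radius_m: float,
                        building_radius_m: float) -> float:
    """Toy model: casualties ~ occupants x fraction of structure in range.

    All inputs are illustrative assumptions; real estimation models are far
    more complex, but they inherit the same sensitivity to occupancy inputs.
    """
    fraction_affected = min(1.0, (lethal_radius_m / building_radius_m) ** 2)
    return occupants * fraction_affected

# The same strike under two occupancy assumptions: the estimate doubles.
print(expected_casualties(4, 10, 15))   # ~1.8, assuming daytime occupancy
print(expected_casualties(8, 10, 15))   # ~3.6, assuming nighttime occupancy
```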
Deployment of AI-powered targeting systems in the Iranian-Israeli context introduces novel escalation dynamics that differ significantly from conventional approaches to strategic stability. The automation of target generation and engagement compresses decision-making timelines, potentially reducing opportunities for diplomatic intervention and crisis management. When AI systems identify and prioritize targets within minutes or hours rather than days, windows for de-escalation narrow substantially.
The speed advantage provided by AI systems creates incentives for preemptive action during crises, as states may perceive advantages in striking before adversaries fully mobilize AI-enhanced capabilities. This dynamic introduces crisis instability, where both parties face pressures to act quickly rather than pursue diplomatic resolution. The proliferation of AI military systems without accompanying arms control frameworks risks accelerating regional arms races and increasing conflict probability.
Reported error rates in AI targeting systems (5-10% for most documented systems) translate to substantial civilian casualty risks when applied at scale. The Lavender system's generation of 37,000 targets with a reported 10% error rate implies potential misidentification of approximately 3,700 individuals. This has been substantiated through increasing civilian and child casualties in Gaza during recent conflicts (Israel-Gaza war death toll: Live tracker, 2025).
According to humanitarian affairs reports and local health authorities, densely populated areas and critical infrastructure including water distribution facilities and residential buildings were repeatedly targeted. This resulted in mass civilian casualties that contradict stated precision targeting objectives. The magnitude of civilian harm raises serious questions about actual operational compliance with International Humanitarian Law principles, despite technological sophistication of deployed systems.
The disconnect between claimed precision capabilities and observed humanitarian outcomes suggests either systematic failures in AI targeting accuracy, inadequate human oversight mechanisms, or operational decisions prioritizing military objectives over civilian protection requirements. Regardless of the cause, humanitarian impact assessments indicate an urgent need for enhanced accountability mechanisms and independent verification of AI system performance.
Table 7 presents a strategic roadmap for establishing ethical and legal oversight of AI military systems, organized from immediate measures to long-term governance frameworks.
The integration of artificial intelligence in military targeting systems represents a fundamental transformation in warfare, with implications extending beyond operational effectiveness to strategic stability, international law, and humanitarian protection. Analysis of the Iranian-Israeli conflict demonstrates both the potential and peril of algorithmic warfare, underscoring the urgent need for international governance frameworks capable of managing this technology's trajectory.
The comparison reveals substantial divergences in technological sophistication, strategic implementation, and operational doctrine. Israeli systems demonstrate advanced data integration, precision targeting capabilities, and sophisticated human-machine collaboration architectures. However, concerns emerge regarding compressed human oversight timelines, high-volume target generation potentially exceeding meaningful human review capacity, and operational decisions that appear to prioritize efficiency over civilian protection. Iranian systems reflect asymmetric strategic approaches emphasizing cost-effectiveness, distributed autonomous capabilities, and cyber warfare integration. Resource constraints and technological limitations shape Iranian implementations, resulting in systems prioritizing operational resilience over precision.
Both approaches raise distinct governance challenges. Israeli precision systems demonstrate that even technologically sophisticated AI implementations may fail to ensure adequate civilian protection when operational pressures compress decision timelines and systematic target generation outpaces human oversight capacity. Iranian asymmetric approaches demonstrate that AI military capabilities are accessible even to resource-constrained states, suggesting inevitable proliferation regardless of international control efforts.
Recent Israeli military operations in Gaza highlight the critical gap between claimed precision capabilities and humanitarian outcomes. While official accounts emphasize accurate targeting, unprecedented civilian casualties—particularly among children—substantially exceed losses in previous conflicts. According to humanitarian organizations and local health authorities, attacks on densely populated areas and civilian infrastructure resulted in mass casualties contradicting distinction and proportionality principles. This stark contradiction between stated intent and operational results underscores the necessity of international oversight mechanisms capable of independently verifying adherence to International Humanitarian Law principles.
The international community faces a narrow opportunity window to develop norms, standards, and legal frameworks governing AI military applications while still enabling legitimate defensive capabilities. Decisions made today will largely determine whether AI contributes to international security and humanitarian protection or undermines both in coming decades.
The regulated nature of Israeli systems, emphasizing precision and human oversight in doctrine, suggests potential pathways for responsible AI military development. However, substantial work remains in ensuring that compressed decision timelines do not undermine meaningful human control and that civilian protection maintains priority over operational efficiency. Iran’s asymmetric model demonstrates that AI military capabilities will proliferate regardless of resource constraints, emphasizing urgency of establishing international governance mechanisms before widespread diffusion renders control impractical.
Ultimately, the challenge is not preventing AI military system development—which appears inevitable—but ensuring development and deployment occur in ways preserving meaningful human control, respecting civilian population protection, and maintaining strategic stability. This represents an unprecedented challenge requiring collaboration among technologists, policymakers, legal experts, and the international community. The success or failure of these efforts will fundamentally shape warfare’s future character and humanity’s capacity to maintain ethical and legal constraints on armed conflict in the algorithmic age.
The datasets supporting the conclusions of this article have been deposited in the Zenodo repository and are publicly accessible at: https://doi.org/10.5281/zenodo.17385265 (Faris, 2025).
The repository contains:
• Structured data extraction tables documenting technical specifications, operational capabilities, and performance metrics of AI targeting systems deployed by Israeli and Iranian military forces
• Systematic evaluation matrices assessing systems across six dimensions: data integration, algorithmic architecture, human oversight mechanisms, deployment scale, operational accuracy, and ethical compliance
• International Humanitarian Law compliance assessment frameworks evaluating adherence to principles of distinction, proportionality, precautionary measures, meaningful human control, and accountability
• Complete source documentation index with reliability assessments and citation trail
• Methodological codebooks explaining evaluation criteria and scoring methodologies
All supplementary data are released under a CC-BY 4.0 license.
This study employs a secondary data analysis approach, systematically reviewing and synthesizing publicly available information from peer-reviewed academic journals, verified investigative journalism, open-source intelligence platforms, technical documentation, and international legal frameworks. No primary data collection, experimental research, or work with human participants was conducted. All source materials are comprehensively cited in the manuscript’s References section and remain accessible through their original publishers.
For data-related queries, contact Den.lamia.faris@uoanbar.edu.iq.