Israel’s Targeting AI: How Capable is It?


Laid waste: the aftermath of an Israeli strike on north Gaza in November 2023. Image: Eddie Gerald / Alamy


The ‘Habsora’ AI system used by the Israeli military is said to use intelligence data to generate targets for attack, including estimates of the likely number of civilian casualties. But the odds of even the Israel Defense Forces using an AI with such a degree of sophistication and autonomy are low.

In 2021, Israel claimed to have used AI in its brief conflict against militant groups in the Gaza Strip, sparking headlines about the first ‘AI War’. Recent reports claim far more intensive use of AI-powered systems in Israel’s current war on Gaza, although details about these systems remain scarce due to the highly classified nature of Israeli intelligence and the layers of mis- and disinformation surrounding the ongoing war. Despite the dearth of information, there is reason to inject a note of caution into discussions of both the alleged capabilities and the role of Israel’s military AI.

Habsora: AI Targeting?

In 2019, the Israeli government announced the creation of a ‘targeting directorate’ to produce targets for the Israel Defense Forces (IDF), especially the Israeli Air Force (IAF). In previous conflicts, the IAF would run out of targets after just a few weeks of fighting, having struck every target it knew of. The targeting directorate was created to mitigate this shortage by building a ‘bank’ of militant targets before any conflict, thereby ensuring a sufficient supply of targets once hostilities began. The directorate, consisting of hundreds of soldiers and analysts, creates targets by aggregating data from a variety of sources – drone footage, intercepted communications, surveillance data, open-source information, and data from monitoring the movements and behaviour of both individuals and large groups.

Both media and IDF sources claim that the targeting directorate uses AI to process the aggregated data and generate targets at a much higher speed than human analysts can. The AI system, dubbed ‘Habsora’ (הבשורה), or ‘Gospel’, is said to use the intelligence collected to output actionable targets – typically militants’ locations – for brigade- or division-level targeting. The reporting claims that these target outputs include estimates of the likely number of civilian casualties. Media sources claim that the human analyst’s role is confined to confirming the target before it is passed to the commander for rejection or approval. The IDF states that the goal is a ‘complete match’ between Habsora’s and a human analyst’s targeting recommendations prior to a strike.
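If the reporting is accurate, the decision flow amounts to a ‘complete match’ gate followed by a command decision. The sketch below is a purely hypothetical rendering of that reported workflow – the function, type names and inputs are invented for illustration and describe no real IDF system.

```python
from enum import Enum, auto

# Hypothetical sketch of the workflow *as reported*: an AI-generated
# target proceeds only if a human analyst independently reaches the
# same conclusion ('complete match'), after which a commander still
# rejects or approves. All names here are invented for illustration.

class Decision(Enum):
    REJECTED_NO_MATCH = auto()       # AI and analyst disagree: no strike
    REJECTED_BY_COMMANDER = auto()
    APPROVED = auto()

def reported_workflow(ai_target: str, analyst_target: str,
                      commander_approves: bool) -> Decision:
    """Toy model of the reported 'complete match' gate, nothing more."""
    if ai_target != analyst_target:
        return Decision.REJECTED_NO_MATCH
    if not commander_approves:
        return Decision.REJECTED_BY_COMMANDER
    return Decision.APPROVED
```

Even this toy rendering makes the open questions visible: nothing in the public reporting explains how either recommendation is produced, or what happens after a mismatch – points taken up below.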

Alternative View

The IDF is one of the most technologically advanced and integrated militaries in the world, yet the odds of even the IDF using an AI with such a degree of sophistication and autonomy are low.


Although ‘creating targets’ might sound like a simple concept, targeting is an immensely complex task, meaning that Habsora, as described, would be far beyond any other tactical or operational system deployed by militaries around the world. The AI would need to be capable of accepting a variety of data formats from numerous sources, weighing the relevance and reliability of each data point, combining this data with existing records, and creating an actionable target profile, while allegedly also estimating civilian collateral damage.
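To give a sense of how much complexity hides inside even one of these steps, the toy sketch below shows what ‘weighing the relevance and reliability of each data point’ might look like in the most stripped-down form imaginable. Everything in it – the data model, the source labels, the decay-based scoring – is invented for illustration and bears no relation to Habsora or any real system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Purely illustrative: a toy 'fusion' step that weighs multi-source
# reports by assessed reliability and recency. Real intelligence
# fusion is vastly more complex than this sketch.

@dataclass
class Report:
    source: str          # e.g. 'SIGINT', 'UAV', 'OSINT' (invented labels)
    reliability: float   # 0.0-1.0, analyst-assessed source confidence
    observed_at: datetime

def fused_confidence(reports: list[Report], now: datetime,
                     half_life_hours: float = 24.0) -> float:
    """Combine reports into a single score, discounting stale ones.

    Each report contributes its reliability, decayed exponentially
    by age; the sum is squashed into [0, 1). Entirely notional.
    """
    score = 0.0
    for report in reports:
        age_hours = (now - report.observed_at).total_seconds() / 3600
        score += report.reliability * 0.5 ** (age_hours / half_life_hours)
    return score / (1.0 + score)  # saturating: never reaches certainty

if __name__ == '__main__':
    now = datetime(2023, 11, 1, 12, 0)
    reports = [
        Report('SIGINT', 0.8, now - timedelta(hours=2)),
        Report('UAV', 0.6, now - timedelta(hours=30)),
    ]
    print(f'fused confidence: {fused_confidence(reports, now):.2f}')
```

Even this caricature ignores the genuinely hard parts – parsing heterogeneous formats, resolving whether two reports describe the same person or place, and explaining the resulting score to a human – which is precisely why the capability attributed to Habsora would be so exceptional.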

While such a capability may be technically feasible, it is even less likely that a system would be allowed so broad a remit in a combat environment, because trust in AI, especially military AI, remains lacking. A system like Habsora, which reportedly provides no output explaining why a given target was selected, is unlikely to reduce this apprehension.

Even in the less autonomous, but still vague, role for Habsora described by the IDF, no description is given of how the system creates targets, or of what happens when the system’s recommendation and the human analyst’s do not align. Commanders would likely demand significant cross-checking by several human sources before confirming a strike, which would occupy many of the directorate’s human analysts, thus negating at least some of the efficiency gained by deploying AI.


A much more likely scenario is that the targeting directorate uses machine-automated systems to collate intelligence and identify patterns in the massive quantities of data collected, with humans creating the actual targets. Even in a secondary role such as this, AI-enhanced processing would undoubtedly increase the speed at which the directorate produces targets, relative to analysts working by hand. Using a system to amass and automatically process collected intelligence is not new; such computing has been used in targeting for decades, albeit with varying technical capability. Upon receiving intelligence (which could be anything from geospatial to signals intelligence), a machine-automated system could indicate areas of interest where further attention or action might be merited, but not output actionable targets, as sketched below.
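To make the distinction concrete, the notional sketch below flags map grid cells where recent activity deviates sharply from a historical baseline and queues them for human review – it surfaces anomalies, but produces no targets. Every name, threshold and data shape is invented for this example and describes no real system.

```python
from collections import Counter

# Notional sketch: flag grid cells where recent activity far exceeds
# a historical baseline. The output is a review queue for human
# analysts, not a target list. All values are invented.

def flag_areas_of_interest(
    recent_cells: list[str],          # grid-cell IDs from recent collection
    baseline_rate: dict[str, float],  # expected observations per cell
    threshold: float = 3.0,           # flag at 3x the expected rate
) -> list[tuple[str, float]]:
    """Return (cell, anomaly_ratio) pairs for cells meriting human review."""
    counts = Counter(recent_cells)
    flagged = []
    for cell, count in counts.items():
        expected = max(baseline_rate.get(cell, 0.0), 1.0)  # avoid divide-by-zero
        ratio = count / expected
        if ratio >= threshold:
            flagged.append((cell, ratio))
    # Most anomalous first: a prioritised queue for analysts, nothing more
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

if __name__ == '__main__':
    recent = ['A7'] * 9 + ['B2'] * 2 + ['C5']
    baseline = {'A7': 2.0, 'B2': 2.0, 'C5': 1.0}
    for cell, ratio in flag_areas_of_interest(recent, baseline):
        print(f'cell {cell}: {ratio:.1f}x expected activity – refer to analyst')
```

The design point is the final step: the system ranks anomalies for a person to examine, and everything downstream – identification, legal review, the strike decision itself – remains human work.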

Implications

However, the absence of explicit AI targeting does not mean that Israel’s aerial war on Gaza is imprecise, or that the IDF is unable to avoid civilian casualties in its strikes. Despite its extensive use of ‘dumb’ munitions, Israel has access to massive quantities of precision munitions. These precision munitions are available with a variety of warheads and capabilities, allowing the IDF to limit or expand the level of destruction at will, and indicating that the majority of civilian collateral damage from air strikes is intentional and accounted for.

The IDF has maintained an extraordinarily dense surveillance network over Gaza for many years, and retains absolute supremacy in electronic, communications, geospatial, and measurement and signature intelligence. Every decision to strike is made with near-comprehensive knowledge of conditions in the target location and the anticipated effects of the strike, including anticipated casualties. Israel also has a rigorous legal corps within its military, whose lawyers must sign off on every target – human- or AI-generated – even if there are hundreds in a single day. When Israeli air strikes kill or injure tens of thousands of civilians, it seems beyond any reasonable doubt that every single target is generated, approved, ordered and struck with the full knowledge and consent of human IDF operators.

The views expressed in this Commentary are the author’s, and do not represent those of RUSI or any other institution.



WRITTEN BY

Noah Sylvia

Research Analyst for C4ISR

Military Sciences
