Defence AI Beyond the Headlines


Target acquisition: The uses of AI in the military are varied. Image: Lukáš Gojda / Alamy Stock


A misunderstanding of AI – as used in US air strikes against Iran – obscures deeper questions about the pace at which modern militaries are trading deliberation for speed.

Amid a frenetic media cycle, one theme has stubbornly persisted: the role of Artificial Intelligence (AI) in defence. Claims that the US military struck over 1,000 targets in the first 24 hours of the war in Iran pointed to an operational tempo unachievable by humans alone. Questions, rumours and misunderstandings about defence AI took on a new life. In this piece we highlight the complexities behind these headlines.

Before the AI Boom: Decades of Military Algorithms

AI adoption is now intensifying across unclassified and classified defence systems, transforming the pace at which decisions in the military can be made. Defence and intelligence estates continue to grapple with ways to synthesise their vast amounts of disparate data and ultimately improve intelligence collection, analysis and dissemination cycles, in order to gain some form of advantage over adversaries.

Yet while the underlying principles are certainly not new in US and Israeli military architectures, the proliferation of these toolsets across the end-to-end defence estate deserves more attention. As AI tools become endemic across enterprise IT and operational mission systems, they become normalised and carry the risk of trading away human deliberation for speed.

Demystifying AI Usage in Targeting Cycles

Much of the media attention has centred around the US military’s use of AI for targeting in the war in Iran. According to reporting, Anthropic’s Claude was used in Palantir’s Maven Smart System (MSS) to aid in US targeting, which has involved striking thousands of targets in a matter of days. The claim immediately sparked a furore, with parallels quickly drawn to Israel’s use of AI for targeting in its genocidal campaign in Gaza.

Since the Algorithmic Warfare Cross-Functional Team (better known as Project Maven, and infamous for the 2018 Google walkouts) first sought to integrate AI into DoD workflows nearly a decade ago, AI has become integral to the digital modernisation of targeting tasks not only in the US, but across militaries including those of China, Russia, Ukraine and the UK.

What appears to be novel is the scale and pace of joint US and Israeli operations in Iran. The US has not previously conducted precision strikes at this rate in its modern history – a feat seemingly enabled by its widespread adoption of AI in its digitalised systems.

As discussed in detail elsewhere, algorithms do not replace humans entirely by simply taking in all the raw data and outputting a finalised, approved target profile. Instead, external data models are integrated into Palantir’s MSS, which is itself hosted by Amazon Web Services. The MSS is a platform that integrates data from a wide variety of sources into a ‘single pane of glass’, hosting a number of tools to enable decision-making workflows, including targeting. Many of these tools are AI-enabled and used to augment or perform specific tasks within the targeting cycle, including:

  • Data and intelligence fusion.
  • Spotting, identifying, classifying and tracking of objects and people of interest.
  • Course of action (COA) creation, modelling, testing, iteration and prioritisation.
  • Intelligence or COA visualisation.
  • Cueing, tasking, matching and synchronising battlefield assets and fires.
  • Collateral damage estimates.
  • Post-strike assessments.

The level of machine automation differs between these tasks, with several Western militaries enshrining ‘appropriate’ or ‘context-dependent’ standards of necessary human involvement. The flexibility of these terms places the emphasis on human judgement rather than mandating strict human control over every battlefield action. Automation does not, however, remove accountability: even if a target is struck without an explicit human decision, commanders and analysts must be held accountable for any mistakes or errors resulting in civilian harm.

For example, the US strike on the Minab school that killed nearly 200 civilians appears to have resulted from faulty historical data. This would constitute a human – not machine – error, caused by a lack of diligence in target intelligence cycles, and likely exacerbated by the gutting of civilian harm mitigation authorities. Various AI-enabled tools were almost certainly involved in the tracking, modelling and/or weaponeering for this target, but it is people – commanders, analysts and technicians – who must be held to account, both legally and in public discourse.


Given the sensitivity and the technical and operational complexity of the targeting cycle, the degree of human oversight involved in the AI-enabled functions in Iran remains unclear. Yet even if urgency dictates that targets are selected under pressure, we must hold the Department of War to account for how its internal processes are adapting to AI deployment. Internal accountability mechanisms must remain robust and transparent despite the difficulty of attribution for rapid, complex and often automated tasks. Models and workflows must be continually refined and tested to integrate lessons learned and mitigate repeated errors. If proportionality assessments are automated, senior risk owners must understand their accuracy and how operators are prioritising the reduction of civilian harm. More broadly, modern militaries must work to ensure that operational efficiency and tactical impact do not serve as a replacement for deliberate, lawful strategy.

Beyond Targeting: The Realities of Defence AI Deployment

AI-enabled targeting has absorbed much of the oxygen in the room. This narrow focus risks overlooking other deployments in defence and obscures the quieter ways in which US military operations are being reshaped in planning, co-ordination and execution. To understand AI’s impact on the defence sector, the debate should dissect how various forms of AI toolsets – predictive, generative, agentic and others – are used differently across military applications, and should question where and why guardrails might diverge.

  • Logistics. Supporting functions are often cited as the primary use cases of defence AI, ranging from enhanced supply chains co-ordinating faster fuel and ammunition delivery, to predictive algorithms optimising military equipment maintenance schedules. The US Defense Logistics Agency cites over 200 use cases and 55 AI models in different stages of deployment across its operations. It is easy to equate these cases with more mundane, ubiquitous productivity technologies, and in doing so to risk limiting public debate over their impact in shaping military operations.
  • Training and Exercises. Varying in sophistication, the US military employs advanced AI toolsets to train personnel, with operational scenario permutations created at superhuman speed to anticipate real-life situations. Their importance surfaces in the DoD’s 2026 AI Strategy, which indicates that exercises failing to incorporate AI are subject to review by programme cost directors.
  • Translation and Linguistics. In more obscure cases, AI-enabled Natural Language Processing (NLP) systems might be used to sift through and synthesise large volumes of data – such as intercepted communications – across multiple language sources, again at pace. Such linguistic synthesis could offer efficiencies to human translators, but challenges of disparate and sparse data points in classified environments persist across defence when operationalising AI in these contexts.
  • Cyber. In cyberspace, AI could be called up from the production bench. With the potential to support defensive cyber operations across US federal and national government systems and in security operations centres, the intersection of AI and cyber defence merits closer attention.

The distinction between predictive and generative models is equally important to consider. Use cases that extend beyond predictive AI analysis supporting logistics and maintenance schedules, towards generative AI models, could create further risks, as such models ‘might have a poor understanding of nuanced military policy’. Although the US military has expended much effort on ethical considerations in the use of AI, the risk remains of military personnel becoming overly reliant on these models. With decision-making cycles for targeting being compressed in the war in Iran, does the time humans need to deliberate over decisions remain unchanged? Irrespective of tight guardrails being in place, this interdependence of humans and machines operating at differing paces carries the risk of technology driving human decision-making rather than enabling it.

To Deploy or To Divorce: A Tech Vendor’s Classified Nightmare

The breadth of these use cases illustrates the embeddedness of AI toolsets across the end-to-end US military and intelligence estates. Although the US DoD has invested in diversifying its commercial vendor pool, having contracted $800M across OpenAI, Anthropic, Google and xAI in 2025, the ability to deploy sophisticated data modelling in classified environments goes beyond software programming: it requires an understanding of military nuances, which remains a valuable commodity in this space.

Despite highly classified environments being shrouded in secrecy, disputes over the future deployment – or disentanglement – of AI models behind closed doors are becoming increasingly public. A recent letter by Google employees urged its CEO to ‘reject any classified workloads’ from the US government. Elsewhere, Anthropic was the first to deploy within classified systems, and its divorce proceedings with the DoD remain as complex as expected: a disentanglement of assets, personnel and knowledge is required, rather than the simple ‘unplugging’ of Claude from US defence suggested by the rhetoric. Across the Atlantic, conversations on the over-reliance on US tech vendors in sensitive military and intelligence systems must now accelerate, drawing on insights from these increasingly public proceedings.

Across the Pond: Implications for European Partners

A factual assessment of the use of AI in defence may play only a secondary role in forming wider European public perception of the topic. Instead, perception may be steered by attention-grabbing media reporting on the use of AI in warfare. Given Europeans’ already low approval of the US-Israeli attack, this may be another push for more independence from US tech and defence policy.


Accelerated by a strained relationship with the second Trump administration, European allies are already knee-deep in the pursuit of European tech sovereignty, including for AI applications in defence. But the growth of European companies offering alternatives to US defence tech providers is not just a question of confidence: it is direct economic competition, especially where not being American is seen as a selling point.

If public perception of AI’s use in the Iran conflict becomes further associated with civilian casualties – in a war that many see as contrary to international law, and which even the closest US allies, like the UK, are reluctant to fully support – this may be yet another boost for the European defence tech ecosystem. Whether European AI capabilities would actually be able to target with greater distinction, or whether European military commanders would use them differently and in line with protocol, doctrine and an ethical understanding that differs significantly from those of their US and Israeli counterparts, is, however, a different question.

© RUSI, 2026.

The views expressed in this Commentary are the authors', and do not represent those of RUSI or any other institution.




WRITTEN BY

Dr Pia Hüsch

Research Fellow

Cyber and Tech


Prerana Joshi

Research Fellow

Cyber and Tech


Noah Sylvia

Research Fellow, C4ISR and Emerging Tech

Military Sciences

