RUSI Journal, Volume 171, Issue 1

The Misguided Effort to Regulate Military AI: No New IHL Needed

Existing international law is technology-neutral and sufficient to regulate military uses of narrow AI in a way that maintains human accountability



Keith Dear and Magdalena Pacholska argue that humans remain responsible and accountable when delegating the use of lethal force to machines, and that these actions are regulated by existing international humanitarian law.

There is no need for new laws or guiding principles, and international discussions on lethal autonomous weapons systems and AI decision-support systems are unnecessary. The real concern is artificial general intelligence and jus ad bellum; these should be the focus of UN and international discussions on AI's military and security applications and their effects.


Read the full text on Taylor & Francis
