Artificial intelligence (AI) and autonomy continue to make headlines in major media outlets, yet few of these reports address the specific considerations of military use: what does the arrival of such systems mean when they are designed to kill people? The moral and ethical concerns raised by such activity simply do not align with current discussions about individual privacy, monitoring and surveillance, or about robots replacing people at work. Designing a system to take life, giving it the means to do so, and then allowing it to act without a human ‘in’ or ‘on’ the decision loop is an evolution of warfare that is already happening, even if the West is in denial.
This was not just another conversation about ‘killer robots’. Lethal AI and Autonomy sought to critically examine where the technology is taking us, the moral and ethical challenges that such advances will pose to military forces, and the practical considerations surrounding future systems. It addressed not only how the UK and its allies view this area, but also the perspectives of rivals and adversaries.