- The use of data analytics and algorithms for policing has numerous potential benefits, but also carries significant risks, including those relating to bias. This could include unfair discrimination on the grounds of protected characteristics, real or apparent skewing of the decision-making process, or outcomes and processes which are systematically less fair to individuals within a particular group. These risks could arise at various stages in the project lifecycle.
- Algorithmic fairness cannot be understood solely as a matter of data bias, but requires careful consideration of the wider operational, organisational and legal context, as well as the overall decision-making process informed by the analytics.
- While various legal frameworks and codes of practice are relevant to the police’s use of analytics, the underlying legal basis for use must be considered in parallel with the development of policy and regulation. Moreover, there remains a lack of organisational guidelines or clear processes for scrutiny, regulation and enforcement. A new draft code of practice should address this gap, specifying clear responsibilities for policing bodies in each of these areas.
Alexander Babuta is a Research Fellow in National Security Studies at RUSI. He leads the Institute’s research on policing, intelligence and surveillance, with a focus on the use of emerging technologies for security purposes.
Marion Oswald is the Vice-Chancellor’s Senior Fellow in Law at the University of Northumbria, an Associate Fellow of RUSI and a solicitor (non-practising). She is Chair of the West Midlands Police and Crime Commissioner and West Midlands Police Ethics Committee, a member of the National Statistician’s Data Ethics Advisory Committee and an executive member of the British and Irish Law, Education and Technology Association.
The views expressed in this publication are those of the authors, and do not reflect the views of RUSI or any other institution.