Comment

Government must act fast so police can use AI without undermining public trust

Police forces nationwide have developed increasingly sophisticated algorithms Credit: PA

Public attention is increasingly focused on the regulation of police technology. Recent debate has centred on live facial recognition (LFR), following the Met Police’s decision to deploy LFR technology on the streets of London. Proponents argue that LFR will enhance the police’s ability to detect and prevent crime by enabling officers to locate wanted individuals more efficiently. Privacy campaigners argue that these technologies present a fundamental threat to citizens’ human rights, and pressure is growing for legislative reform to regulate police use of LFR and other data-driven technologies.

But facial recognition is only the tip of the iceberg. In recent years, police forces nationwide have developed increasingly sophisticated algorithms, often using artificial intelligence (AI), to support operational decision-making for a range of purposes. This includes the use of machine learning to assess the risk of re-offending, forecast demand in control centres, prioritise crimes according to their ‘solvability’, and predict where crime is most likely to occur. In the context of significant police cuts since 2010, coupled with ever-expanding data volumes, algorithms are seen as a valuable tool for allocating limited resources most efficiently, based on a data-driven assessment of risk and demand.

When used appropriately, algorithms undoubtedly have the potential to help police forces deploy limited resources where they are most needed, in turn improving the overall level of service offered to the public. But they are not without risks, and the use of AI raises additional concerns that are not accounted for within the existing regulatory framework. Critics highlight in particular the risk of bias in algorithmic predictions and the ‘black box’ nature of certain machine learning methods, issues which are discussed in detail in the RUSI report published today.

First and foremost, the police have a legal and societal duty to protect the public from threats to their safety, and to adopt new methods that may allow them to do this more effectively. A reluctance to innovate and adapt for the digital age could be viewed as a failure to fulfil this duty. Conversely, the public expects the police to adopt new methods in a way that reassures citizens that their rights are respected. Achieving this balance is a major challenge at a time of such considerable technological change. The challenge is compounded by the lack of any official national guidelines for police use of algorithms, a gap which senior police officers suggest should be addressed as a matter of urgency.

In the absence of any primary legislation explicitly regulating police use of algorithms, it is essential to develop publicly available guidance that clearly describes the types of technology the police are developing and the circumstances in which they will be used. The Home Office has a crucial role to play in this regard, but has so far failed to develop any national policy which the police can implement in practice. The Centre for Data Ethics and Innovation is also well placed to inform wider government policy-making regarding the use of AI in the public sector, but the Centre’s purpose and statutory function will need to be more clearly defined if it is to play a meaningful role.

The government must act fast to develop clear, evidence-based policy to ensure the police can take full advantage of the opportunities offered by these powerful new technologies, without compromising societal and ethical values or undermining public trust.

Alexander Babuta is a research fellow in national security studies at RUSI
