"The AI model analyses a lot of the information"
Artificial intelligence is having a growing impact across numerous sectors, and police forces around the world are now using it to tackle crime.
However, the technology brings both benefits and risks.
Take one example: a domestic abuse victim is on the phone to a 999 emergency call handler.
While she is talking to a human, the call is also being transcribed by an AI software system, one that links directly to UK police databases.
When she provides her husband’s name and date of birth, the AI retrieves his details. It reveals that the man has a gun licence, meaning police officers need to get to the home as soon as possible.
The example comes from a three-month trial of AI emergency call software run by Humberside Police in 2023.
The AI was provided by UK start-up Untrite AI and is designed to improve the efficiency with which the thousands of calls received each day are handled.
The system was trained on two years' worth of historical data – all related to domestic abuse calls – provided by Humberside.
Kamila Hankiewicz, chief executive and co-founder of Untrite, said:
“We set out to build an assistant for operators to make their jobs slightly easier because it is a high-stress and time-sensitive environment.
“The AI model analyses a lot of the information, the transcript and the audio of the call, and produces a triaging score, which could be low, medium or high.
“A high score means that there has to be a police officer at the scene within five or 10 minutes.”
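Untrite has not published its model, but the banded triage score Ms Hankiewicz describes can be illustrated with a deliberately simple sketch. The risk indicators, weights, and thresholds below are hypothetical, not Untrite's; a real system would analyse the full transcript and audio with a trained model.

```python
# Hypothetical sketch of a banded triage score - NOT Untrite's model.
# Assumes speech-to-text has already produced a transcript.

RISK_INDICATORS = {  # invented keywords and weights
    "gun": 3, "weapon": 3, "knife": 3,
    "injured": 2, "threat": 2,
    "shouting": 1, "argument": 1,
}

def triage_score(transcript: str) -> str:
    """Map a call transcript to a 'low'/'medium'/'high' band."""
    score = sum(RISK_INDICATORS.get(word, 0)
                for word in transcript.lower().split())
    if score >= 3:
        return "high"    # e.g. officer on scene within 5-10 minutes
    if score >= 1:
        return "medium"
    return "low"

print(triage_score("he has a gun and she is injured"))  # high
```

In practice a production system would weigh far richer signals (tone of voice, caller history, linked database records) rather than keyword counts.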
According to Untrite, the trial indicates the software’s potential to save operators nearly a third of their time, both during and after each call.
Other tech companies, including US-based Corti and Carbyne, are also now offering AI-powered emergency call software systems.
The next step for Untrite is deploying its AI in a live environment; the company is currently in discussions with several police forces and other emergency services.
AI holds the promise of revolutionising the way police investigate and tackle crimes by identifying patterns and links in evidence and rapidly processing vast amounts of data.
However, there have been notable challenges in the deployment of this technology by law enforcement.
Instances of AI-powered facial recognition software inaccurately identifying black faces were widely reported in the US in 2023.
In response to such concerns, some US cities, including San Francisco and Seattle, have banned the use of the technology.
Nevertheless, police forces in the UK and US are increasingly incorporating AI into their operations.
However, Albert Cahn, executive director of the anti-surveillance pressure group Surveillance Technology Oversight Project (Stop), is not happy with the development.
He said: “We’ve seen a massive investment in, and use of, facial recognition despite evidence that it discriminates against black, Latino and Asian individuals, particularly black women.”
This technology can be used in three main ways:
- Live facial recognition, which compares a live camera feed of faces against a predetermined watchlist.
- Retrospective facial recognition, which compares still images of faces against an image database.
- Operator-initiated facial recognition, in which an officer takes a photograph of a suspect and submits it for a search against an image database.
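All three modes share the same core step: comparing a probe face against a watchlist or database of stored faces. Here is a minimal sketch of that matching step using cosine similarity over face embeddings; the vectors, names, and threshold are invented for illustration, and real systems derive the embeddings from trained neural networks.

```python
# Illustrative sketch of the matching step shared by all three modes:
# compare a probe face embedding against a watchlist of embeddings.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(probe, watchlist, threshold=0.9):
    """Return the best-matching identity, or None if below threshold."""
    best_name, best_score = None, threshold
    for name, stored in watchlist.items():
        score = cosine_similarity(probe, stored)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Invented embeddings; real ones have hundreds of dimensions.
watchlist = {"suspect_a": [0.9, 0.1, 0.4], "suspect_b": [0.1, 0.8, 0.6]}
print(match_against_watchlist([0.88, 0.12, 0.41], watchlist))  # suspect_a
```

The threshold is the operational lever: set it low and more faces "match" (raising false positives), set it high and genuine matches are missed.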
In October 2023, the UK’s Policing Minister Chris Philp said police forces should double the number of searches they make using facial recognition technology over the next year.
In 2023, the UK’s National Physical Laboratory (NPL) conducted independent testing on three facial recognition technologies, all utilised by the Metropolitan and South Wales police forces.
As the official UK entity for establishing measurement standards, the NPL found that accuracy levels had significantly improved in the latest software versions.
However, the NPL also observed that in certain instances, the technology was more prone to providing false positive identifications for black faces compared to white or Asian faces, a disparity deemed “statistically significant” by the NPL.
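To illustrate what a "statistically significant" disparity can mean in this context, here is a hedged sketch of a standard two-proportion z-test on false positive rates for two groups. The counts below are invented for illustration and are not the NPL's data.

```python
# Sketch of a two-proportion z-test on false positive rates.
# All counts are hypothetical, not NPL results.
import math

def two_proportion_z(fp1, n1, fp2, n2):
    """z statistic for H0: both groups share one false positive rate."""
    p1, p2 = fp1 / n1, fp2 / n2
    p_pool = (fp1 + fp2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 30 false positives in 1,000 trials vs 10 in 1,000.
z = two_proportion_z(fp1=30, n1=1000, fp2=10, n2=1000)
# Two-sided p-value from the normal CDF (via the error function).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(round(z, 2), p_value < 0.05)
```

A difference this size would be unlikely to arise by chance if the two groups really shared one error rate, which is the sense in which such a disparity is called statistically significant.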
Independent testing is encouraging, but West Midlands Police has gone a step further by establishing its own ethics committee to assess new technological tools.
The committee, which includes data scientists and is chaired by Professor Marion Oswald of the University of Northumbria, aims to ensure a comprehensive evaluation of emerging technologies.
AI holds the potential to revolutionise another crucial aspect of policing – prevention.
Specifically, its capacity to forecast potential crime locations and likely perpetrators has garnered attention.
While the concept may evoke scenes from the 2002 sci-fi thriller Minority Report, it is no longer confined to the realms of Hollywood fantasy.
A team at the University of Chicago has devised an algorithm that purports to predict future crimes a week in advance with an accuracy rate of 90%.
However, there are concerns over AI’s growing prevalence in tackling crime.
Stop’s Mr Cahn said: “In the US we see a lot of crime prediction tools that crudely deploy algorithms to try to predict where crimes will happen in future, often to disastrous effect.”
He added that it is disastrous because “the US has notoriously terrible crime data”.
Professor Oswald agrees that using AI to predict crime has its drawbacks.
She says: “There is that feedback loop concern that you’re not really predicting crime, you’re just predicting the likelihood of arrest.
“The issue is that you are comparing a person against people who have committed similar crimes in the past, but only based on a very limited set of information.
“So not about all their other factors, and those other things about their life that you might need to know in order to make a determination about someone.”
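The feedback loop Professor Oswald describes can be made concrete with a toy simulation, with every number invented: two areas have the same true crime rate, but patrols follow historical arrest counts, so the recorded disparity never corrects itself.

```python
# Toy feedback-loop simulation (invented numbers). Both areas have the
# SAME underlying crime rate, but patrols are allocated in proportion
# to historical arrests, and new arrests track patrol presence.

TRUE_RATE = 0.1                           # identical in both areas
arrests = {"area_a": 5.0, "area_b": 1.0}  # unequal historical data

for week in range(50):
    total = sum(arrests.values())
    for area in arrests:
        patrols = 10 * arrests[area] / total  # patrol where arrests were
        arrests[area] += patrols * TRUE_RATE  # arrests follow patrols

# The 5:1 recorded disparity persists indefinitely, even though the
# true crime rates are identical: the data is predicting arrests.
print(arrests)
```

In this sketch the system is not measuring crime at all, only its own past deployment decisions, which is the essence of the concern quoted above.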