As artificial intelligence (AI) becomes increasingly integrated into police operations around the world, questions arise about how to balance the technology's advantages against its potential pitfalls.
Consider Sarah, a victim of domestic abuse, seeking help through a 999 emergency call. While she speaks to a human call handler, her information is simultaneously transcribed by an AI system linked directly to UK police databases. In a simulated scenario during a three-month trial by Humberside Police, the AI swiftly surfaces details about her ex-husband, including the fact that he holds a gun licence, flagging the call for urgent police intervention.
This AI, provided by UK start-up Untrite AI, aims to enhance efficiency in handling the high volume of emergency calls received daily. Kamila Hankiewicz, CEO of Untrite, explains, "The AI model analyzes a lot of information, producing a triaging score to determine the urgency of the situation." The trial suggests potential time savings of nearly a third for operators, marking a significant step forward in emergency response technology.
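The triaging mechanism Hankiewicz describes — combining details from a call into a single urgency score — can be illustrated with a minimal sketch. To be clear, the feature names, weights, and thresholds below are hypothetical, chosen only for illustration; they are not Untrite's actual model, which is not publicly documented.

```python
# Hypothetical triage scorer. Features, weights, and thresholds are
# illustrative assumptions, not Untrite's actual model.
RISK_WEIGHTS = {
    "weapon_on_record": 0.4,   # e.g. a registered gun licence
    "prior_incidents": 0.3,    # previous reports involving the suspect
    "caller_distress": 0.2,    # distress indicators in the transcript
    "suspect_nearby": 0.1,     # suspect believed to be at the scene
}

def triage_score(call_features: dict) -> float:
    """Combine boolean risk indicators into an urgency score (0 to 1)."""
    return sum(w for name, w in RISK_WEIGHTS.items()
               if call_features.get(name))

def priority(score: float) -> str:
    """Map an urgency score to a dispatch priority band."""
    if score >= 0.7:
        return "immediate"
    if score >= 0.4:
        return "urgent"
    return "routine"

# A call resembling Sarah's scenario scores close to 0.9,
# landing in the top priority band.
call = {"weapon_on_record": True, "prior_incidents": True,
        "caller_distress": True}
print(priority(triage_score(call)))  # prints "immediate"
```

A real system would derive such a score from a trained model over the live transcript rather than hand-set weights, but the principle — reducing many signals to one ranking number that a human operator can act on — is the same.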
However, as AI gains traction in law enforcement, concerns arise over its application in areas such as facial recognition. Reports from the US highlight the failure of AI-powered facial recognition software to accurately identify Black faces, leading some cities, including San Francisco and Seattle, to ban its use. Albert Cahn of the Surveillance Technology Oversight Project (Stop) criticizes the continued investment in such technology, citing discrimination against specific racial groups.
The UK's Policing Minister, Chris Philp, has advocated for increased use of retrospective facial recognition technology. Despite improvements in accuracy reported by the National Physical Laboratory (NPL), concerns persist about false-positive identifications, particularly for people with darker skin tones.
West Midlands Police takes a proactive approach by establishing its own ethics committee, chaired by Prof Marion Oswald. The committee evaluates new technology tools, such as facial recognition, emphasizing the need for thorough analysis of their validity.
Looking ahead, AI's transformative potential extends to crime prevention. The University of Chicago has developed an algorithm that its creators claim can predict future crimes a week in advance with 90% accuracy. The claim has drawn skepticism, however, because such algorithms rely on historical crime data that can encode bias, as Stop's Albert Cahn notes. Prof Marion Oswald echoes these concerns, emphasizing the limitations of predicting crime solely on the basis of a person's past criminal associations.
As AI continues to shape policing methods, the delicate balance between harnessing its benefits and mitigating inherent risks becomes paramount. The evolution of AI in law enforcement demands careful consideration, ethical scrutiny, and ongoing evaluations to ensure just and unbiased outcomes.