As we continue to talk about artificial intelligence in the context of ChatGPT and similar chatbot services, I think we are losing sight of the real threats AI poses to public safety and job security.
Already, companies like Clearview AI work in tandem with the Federal Bureau of Investigation, using facial recognition technology to ‘close cases faster and keep communities safe.’ New York Mayor Eric Adams recently unveiled the K5, a security surveillance robot made by Knightscope. Given the controversial status of policing in America over the past decade, one might think a seemingly unbiased, technically efficient robot would be a sign of progress. This could not be further from the truth.
The trouble with using AI in any predictive capacity is that it carries an inherent bias: it can only extrapolate new conclusions from existing data. If you were to use AI software to predict which candidates would make the best new hire for a job, it could only give you an answer based on the past applications of existing employees.
For policing, AI’s predictive power is mostly limited to data from past arrests, which is fed into machine learning algorithms. This data is inherently flawed, as it includes numerous false arrests that disproportionately target people of color. This biased data is already being used, and has been used since the early 1990s, to identify ‘high-risk’ communities that supposedly require more policing. Since recorded crime tends to appear where police presence is highest, feeding biased data into these systems only serves to generate more biased data.
A 2018 trial of facial recognition software by London’s Metropolitan Police showed that while officers were able to identify 104 previously anonymous criminal suspects, only two of those identifications were actually accurate. Furthermore, a study from North Carolina State University found that many officers fail to demonstrate any understanding of the AI technology they express immense support for. A recent report from the Government Accountability Office states that only 5% of FBI agents have taken the required three-day training course that teaches them how to use facial recognition software.
Overall, the use of AI in law enforcement seems to have little to do with promoting public safety, despite many U.S. politicians claiming exactly that as one of their goals. Mayor Adams has gone on record saying that he opposes abusive policing, but not police departments as a whole. This explains his support for technology he claims will improve policing without violating citizens’ civil rights.
Going all the way to the top, President Biden implemented the Safer America Plan last summer, providing an additional $13 billion to police budgets in a country with one of the best-funded police systems in the world. The plan also called for the recruitment and training of over 100,000 new officers.
This is a far cry from the calls by many activists in the summer of 2020, before Biden was elected to office, to defund police departments in favor of social programs and mental healthcare services. There is a wealth of evidence supporting these calls: incarceration does little to prevent reoffending, and our country is no safer for its increased police presence.
As appealing as it may be to some that we could use AI technology to prevent crime before it happens — not unlike the 2002 film “Minority Report” — the focus on technology to ‘improve’ policing in America is largely a distraction from several key issues. You would be hard-pressed to find a U.S. politician interested in improving social programs and targeting the root causes of violence as a method of reducing crime, despite fervent public support and evidence that doing so would be more effective. The use of AI in policing won’t necessarily make police worse at their jobs; rather, it will amplify existing problems and encourage complacency.