Can Police Brutality be Reformed Using Artificial Intelligence?

Data suggests that 94% of officers are at minimal risk, 4% at advisable risk, and 2% at actionable risk.

The brutal custodial death of George Floyd sparked worldwide protests. It not only revealed the bitter reality of police misconduct but also shed light on a skewed judicial system. Though the protests began after George Floyd was killed in an act of gruesome racial bias, police brutality has existed in society for a long time.

Moreover, the USA is not the only country where police accountability is being questioned. In India, the custodial deaths of the father-son duo Jayaraj and Fenix have put police accountability under heavy scrutiny. These incidents have also underscored the need for global policy reforms so that custodial deaths can be prevented.

For a very long time, the use of technology such as facial recognition was held responsible for police misconduct in the USA. Owing to the algorithmic biases of artificial intelligence, companies like IBM and Google pulled back from facial recognition research when the death of George Floyd became a national issue. Since then, big tech organizations and police departments have become more vigilant about how they use artificial intelligence.

Now this technology, which was partly blamed for those biases, is being used for police reform. In this article, we will discuss two such technologies that have shown positive results.

AI-Based Intervention System for Officer Misconduct

One of the major challenges in investigating police misconduct is examining an officer's past behavior. Derek Chauvin, the officer responsible for George Floyd's death, had a history of abusive behavior, but that brutality went unacknowledged. It is extremely difficult to track a police officer's conduct over time due to a lack of substantial evidence. Had it been easier to track, George Floyd might have been saved from becoming another victim of Chauvin's abusive behavior.

Taking cognizance of the incidents that have transpired since George Floyd's death, US police departments have collaborated with Benchmark Analytics LLC to deploy an AI-based intervention system that helps spot officers who are likely to engage in misconduct. Named First Sign, the system's machine learning algorithm compares an officer's present actions against his or her past activity for any display of excessive force or problematic behavior. The system also assigns a score to each officer, flagging them for counseling, training, or other actions by department supervisors.

First Sign analyzes arrest records, stop and service-call information, use-of-force data, internal affairs reports, dispatch information, and other data. Though the system doesn't categorize racial incidents, bias-based policing can be measured by analyzing the race of the people involved in arrests and use-of-force incidents.

Currently, First Sign is deployed at the Metropolitan Nashville Police Department in Tennessee, while the Albuquerque Police Department in New Mexico and the San Jose Police Department in California have signed up for the system, which is still in development.

Records retained by First Sign show that the behavior of 920 officers has been analyzed. The data suggests that 94% of officers are at minimal risk, 4% at advisable risk, and 2% at actionable risk.
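To make the flagging idea concrete, here is a minimal sketch of how an early-intervention system might combine per-officer event counts into a score and bucket it into the three tiers the article describes. The feature weights, thresholds, and function names below are illustrative assumptions, not First Sign's actual (proprietary) model.

```python
# Hypothetical sketch of early-intervention risk tiering.
# Weights and thresholds are invented for illustration only.

def risk_score(use_of_force_reports: int,
               internal_affairs_flags: int,
               complaint_count: int) -> float:
    """Combine per-officer event counts into a single weighted risk score."""
    return (3.0 * use_of_force_reports
            + 2.0 * internal_affairs_flags
            + 1.0 * complaint_count)

def risk_tier(score: float) -> str:
    """Bucket a score into the three tiers reported by First Sign."""
    if score >= 15:
        return "actionable"   # flag for supervisor action
    if score >= 8:
        return "advisable"    # recommend counseling or training
    return "minimal"

# Example: three hypothetical officers with different histories.
print(risk_tier(risk_score(0, 0, 1)))  # -> minimal
print(risk_tier(risk_score(2, 1, 2)))  # -> advisable
print(risk_tier(risk_score(4, 2, 3)))  # -> actionable
```

In a real deployment, the weights would be learned from historical data rather than hand-set, which is exactly why data quality matters so much for systems like this.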

Axon Live Body Camera Video

Another tool developed to track police behavior is Axon's live body camera. Backed by Rick Smith, Axon's CEO, the live camera uses artificial intelligence software to record data on police behavior. Named Respond, the camera aims to digitize the police department's workflow. It can also classify use-of-force incidents, identify teachable moments, and build early warning systems that flag bad cops.

Uncertainty Looming Over AI Technology

Although these technologies are being deployed for positive outcomes, uncertainty still looms over their use. Experts say these innovations are heavily governed by data: if the data is bad, corrupted, or unreliable, the results will be skewed. There is no certainty about the success of these technologies, so users need to be extremely careful when utilizing AI for police reform.

Analytics Insight
www.analyticsinsight.net