Misidentification, A Huge Menace? Finding Facts on Facial Recognition

The accuracy rates of facial recognition algorithms are particularly low in the case of minorities, women, and children

Facial recognition technology has penetrated almost every market. From surveillance cameras to unlock features in smartphones, this branch of artificial intelligence has become a key part of our daily lives. Some months ago, the startup FDNA created the DeepGestalt algorithm, which identifies genetic disorders from facial images. Yet while the technology has proved useful, not everyone is on board with it. Critics question its collection of, and access to, vast databases connecting names and faces, and accuse it of invading privacy. Further, the outrage over the brutal killing of George Floyd sparked debates about the bias embedded in facial recognition technology.

For criminal identification, the technology treats each person captured in images from CCTV cameras and other sources as a potential criminal: it creates a map of her face, with measurements and biometrics, and matches those features against the CCTNS database. This means that we are all treated as potential suspects whenever we walk past a CCTV camera, turning the presumption of "innocent until proven guilty" on its head.
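As a rough illustration of what such a matching step involves, the sketch below compares a probe face embedding from a CCTV frame against a gallery of stored embeddings and reports anyone above a similarity threshold. This is a minimal, hypothetical example: the embedding model, gallery names, and threshold are assumptions, and the actual CCTNS pipeline is not publicly documented.

```python
# Illustrative sketch only: how a face-matching step of the kind described
# above might work in principle. The embedding extraction (the "face map")
# is assumed to be done by some upstream model; names and the threshold are
# hypothetical placeholders, not any real system's implementation.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_gallery(probe: np.ndarray,
                          gallery: dict[str, np.ndarray],
                          threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return gallery identities whose embeddings exceed the match threshold,
    best match first. Anyone below the threshold is simply not reported."""
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    return sorted([s for s in scores if s[1] >= threshold], key=lambda s: -s[1])

# Example: one CCTV frame's embedding compared against a toy database.
rng = np.random.default_rng(0)
gallery = {f"record_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)
print(match_against_gallery(probe, gallery))
```

Everything downstream of this step, including who gets arrested, depends on how that threshold is chosen and on how reliable the embeddings are for the person being searched.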

Facial recognition technology is notorious for racial and gender bias. In February 2020, the BBC wrongly labeled black MP Marsha de Cordova as her colleague Dawn Butler. In June, Microsoft's AI editing software attached an image of Leigh-Anne Pinnock to an article headlined "Little Mix star Jade Thirlwall says she faced horrific racism at school." Misidentification and harassment of minorities by AI has become a common problem. A 2018 MIT study of three commercial gender-recognition systems found error rates of up to 34% for dark-skinned women, nearly 49 times the rate for white men.

In April 2018, Bronx public defender Kaitlin Jackson was assigned to represent a man accused of stealing a pair of socks from a TJ Maxx store. The man said he couldn't have stolen the socks because at the time the theft occurred, he was at a hospital about three-quarters of a mile away, where his son was born about an hour later. 

Jackson couldn't understand how police had identified and arrested her client months after the theft. When she called the Bronx District Attorney's Office, a prosecutor told her that police had identified her client from a security camera photo using facial recognition. A security guard at the store, the only witness to the theft, later told an investigator from her office that police had sent him a mugshot of her client and asked in a text message, "Is this the guy?" Jackson calls that tactic "as suggestive as you can get."

Jackson's questions led a judge to order a hearing to determine whether the identification process had been unduly suggestive. Shortly afterward, Jackson says, prosecutors offered her client a deal: Plead guilty to petit larceny in exchange for a sentence of time served. The client, who had been in jail for roughly six months, agreed.

That the prosecutor told Jackson how her client had been identified was unusual. In most US states, neither police nor prosecutors are required to disclose when facial recognition is used to identify a criminal suspect. Defense attorneys say that puts them at a disadvantage: they can't challenge potential problems with facial recognition technology if they don't know it was used. It also raises questions of equity, since studies have shown that facial recognition systems are more likely to misidentify people who are not white men, including people with dark skin, women, and young people.

"Facial recognition technology use shouldn't be a secret," says Anton Robinson, a former public defender now at the Innocence Project, a nonprofit dedicated to getting people who've been wrongly convicted out of prison. "It's such a big issue in criminal cases. Attorneys shouldn't be left to have these epiphany moments."

Misidentification has historically been a major factor in sending innocent people to prison. The Innocence Project found that more than two-thirds of people exonerated through DNA evidence had been misidentified by witnesses, making it the leading factor in those wrongful convictions. Eyewitnesses can struggle to identify people they don't know, especially when those individuals are of a different racial or ethnic background.

Accuracy rates of facial recognition algorithms are particularly low for minorities, women, and children, as multiple studies across the world have demonstrated. Using such technology in a criminal justice system where vulnerable groups are overrepresented makes those groups especially susceptible to false positives. Facial recognition is an extremely difficult task, and systems make significant errors even in laboratory settings. Deploying them in consequential sectors like law enforcement is ineffective at best and disastrous at worst.
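To see why group-specific error rates matter at scale, the back-of-the-envelope sketch below multiplies a false-match rate by the number of searches run against each group. The rates and search counts are made-up placeholders, not figures from the studies cited in this article; the point is only that the same system, applied equally, produces very unequal numbers of wrongly flagged people when its error rate differs by group.

```python
# Hypothetical illustration: how unequal error rates translate into unequal
# numbers of false matches. All numbers below are placeholders.
searches_per_group = {"group_A": 10_000, "group_B": 10_000}
false_match_rate = {"group_A": 0.001, "group_B": 0.01}  # 0.1% vs 1%

for group, searches in searches_per_group.items():
    expected_false_matches = searches * false_match_rate[group]
    print(f"{group}: ~{expected_false_matches:.0f} people wrongly flagged "
          f"out of {searches} searches")
```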
