Is Artificial Intelligence Racism Proof?

June 15, 2020

Image Credit: Wired

Artificial Intelligence is often used interchangeably with robotics, but the two are distinct. Although AI may be the single most transformative technology revolution of our time, with the potential to disrupt almost every aspect of human existence, robots remain crucial to our digital world in their own right. From helping fight COVID-19 by carrying infectious samples, medicines, and food and disinfecting public spaces, to working on manufacturing assembly lines and inspecting raw materials, robots are almost omnipresent. AI, however, is still far from being a silver bullet, largely because of algorithmic bias.

In May, when Microsoft proposed replacing its human editors with an AI, much was at stake given the company's earlier embarrassment with its racist chatbot Tay. The AI was tasked with running Microsoft's news and search site, MSN.com. The decision came after Microsoft laid off seventy-seven human editors and journalists responsible for MSN.com in the wake of the pandemic, replacing them with an AI editor. The software giant claimed that its AI scans content and understands dimensions such as freshness, category, topic type, opinion content, and potential popularity before presenting it to the editors. The algorithm could also suggest appropriate photos to pair with the content to help bring stories to life. The remaining editors would then curate the top stories of the day, across a diverse range of topics, and present them to readers.
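Microsoft has not published how its editor works internally, but a content-scoring step like the one described above can be sketched roughly as follows. Everything here is hypothetical: the `Story` fields, the `score_story` function, and the weights are illustrative assumptions, not Microsoft's actual system.

```python
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    category: str
    age_hours: float
    predicted_popularity: float  # hypothetical model output in [0, 1]

def score_story(story: Story) -> float:
    """Toy scoring rule: fresher and more popular stories rank higher."""
    # Freshness decays linearly to zero over two days (48 hours).
    freshness = max(0.0, 1.0 - story.age_hours / 48.0)
    return 0.6 * story.predicted_popularity + 0.4 * freshness

stories = [
    Story("Local election results", "politics", 2.0, 0.7),
    Story("Week-old feature piece", "culture", 170.0, 0.9),
]

# Rank candidate stories for the front page, best first.
ranked = sorted(stories, key=score_story, reverse=True)
```

In this sketch a recent, moderately popular story outranks an older one with a higher popularity score, mirroring the "freshness" dimension the article mentions.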

However, things went sour when Microsoft's AI editor confused two mixed-race Little Mix singers in a recent news story. According to The Guardian, the AI mixed up two members of the pop band, both women of color, in a republished story originally reported by The Independent: the software attached an image of Leigh-Anne Pinnock to an article headlined "Little Mix star Jade Thirlwall says she faced horrific racism at school". Although the algorithm does no original reporting, the incident highlights how racial bias persists in AI systems, which are notoriously bad at recognizing people of color.

The remaining MSN editors are now struggling with the AI picking up and republishing stories about MSN's own mistake. To pile on the embarrassment, the human editors who were told to delete the story were also warned that the AI might overrule them and republish it. All of this comes amid global outrage over the death of George Floyd, with mass public rallies and celebrities speaking out under #BlackLivesMatter.

The singer Jade Thirlwall criticized the mistake on Instagram, noting that the two band members are confused so frequently that it has become a running joke, and asked the news site to do better. Meanwhile, IBM CEO Arvind Krishna informed the US Congress in a letter that the company is no longer offering its facial recognition or analysis software and firmly opposes technology used for mass surveillance, racial profiling, and violations of fundamental human rights and freedoms.

“In September 1953, more than a decade before the passage of the Civil Rights Act, IBM took a bold stand in favor of equal opportunity…Yet nearly seven decades later, the horrible and tragic deaths of George Floyd, Ahmaud Arbery, Breonna Taylor, and too many others remind us that the fight against racism is as urgent as ever,” Krishna wrote. He encouraged members of Congress to use the momentum of the George Floyd protests to call for police reform, and to consider the part biased facial recognition could play in law enforcement abuse if it is not addressed quickly. He further stressed that national policy should also encourage and advance uses of technology that bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques.

This is not the first time modern technology has shown an error in judgment caused by biased data. A few years ago, for example, a Google algorithm mistakenly classified Black people as “gorillas”; the company now places a high emphasis on ethical machine learning. Moreover, such bias existed long before AI. In February, the BBC wrongly labeled the Black MP Marsha de Cordova as her colleague Dawn Butler, just a week after the broadcaster featured footage of NBA star LeBron James in a report on Kobe Bryant’s death.
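How biased data produces biased outcomes can be shown with a deliberately simple toy example (hypothetical numbers, not any real system): when one group dominates the training data, a single decision threshold fit for overall accuracy ends up serving the underrepresented group worse.

```python
# Toy dataset: a 1-D feature x; the true label is 1 above a group-specific
# cutoff. Group A (90% of the data) has its cutoff at 0.5; group B (10%)
# has its cutoff at 0.3. Rows are (feature, group, label).
data = [(x / 100, "A", int(x / 100 >= 0.5)) for x in range(90)]
data += [(x / 10, "B", int(x / 10 >= 0.3)) for x in range(10)]

def accuracy(threshold: float, rows) -> float:
    """Fraction of rows where 'predict 1 iff x >= threshold' is correct."""
    return sum(int(x >= threshold) == y for x, _, y in rows) / len(rows)

# Fit one global threshold by maximizing overall accuracy.
best = max((t / 100 for t in range(101)), key=lambda t: accuracy(t, data))

# Per-group accuracy at the fitted threshold.
acc_a = accuracy(best, [r for r in data if r[1] == "A"])
acc_b = accuracy(best, [r for r in data if r[1] == "B"])
```

Because group A supplies 90% of the examples, the fitted threshold lands at group A's cutoff: group A is classified perfectly while group B suffers all the errors, even though nothing in the fitting procedure mentions groups at all.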

Misidentification and harassment of minorities by law enforcement is also a broader issue. In April, Microsoft turned down an offer from a California law enforcement agency to add facial recognition technology to officers’ cars and body cameras, over concerns that women and minorities would be unfairly targeted.
