Every day we read new accounts of discrimination in Artificial Intelligence algorithms: Amazon's biased AI recruiting tool preferring male candidates over female ones, Google Translate defaulting to discriminatory gender pronouns, or racial bias in facial recognition software that could identify lighter-skinned people but not darker-skinned ones. The problem stems primarily from the fact that we are not sure what to look for when making an algorithm gender-neutral, or from training datasets that contain information on only certain groups of people, leading AI to favor a specific social group. Hence the quintessential question: do machines understand what bias is?
Bias is a prejudiced attitude toward someone or something, and algorithmic bias occurs when an algorithm produces systematically prejudiced results due to flawed assumptions in the AI models. These algorithms also have the power to shape our thinking, generating biased output that directly impacts users ranging from computer programmers to government and industry leaders. While the machine learning subset of AI has helped us detect cancer cells, forecast weather, power Netflix recommendations, and much more, any bias in its algorithms can negatively impact people or groups of people, including traditionally marginalized groups.
For instance, a few years ago, a Google Photos algorithm mistakenly classified Black people as “gorillas.” In February, the BBC wrongly labeled Black MP Marsha de Cordova as her colleague Dawn Butler, just a week after the broadcaster featured footage of NBA star LeBron James in a report on Kobe Bryant’s death. Months earlier, it was discovered that Apple’s new credit card might give higher limits to men than to women. PredPol’s algorithm in the US is another example of how biased algorithms can have severe ramifications for law and order. It was designed to predict when and where crimes will take place, in order to reduce human bias in policing. However, it was discovered that it repeatedly sent officers to neighborhoods with a high proportion of people from racial minorities, regardless of the true crime rate in those areas. Biased algorithms also cause voice-based assistants, like Amazon’s Alexa, to struggle to recognize different accents. All these instances show that algorithmic bias can amplify stereotypes already prevalent in our society.
There are two sides to the coin. One side holds that AI can be used to identify the biases present in our systems and society; the other holds that our own biased thinking produces algorithmic biases. Consider a simple example. As children, we are taught to paint leaves green, the sky blue, and the Sun yellow, and we often follow those instructions without wondering whether the same entities can have other colors. Now imagine teaching the same to an AI model: there is a good chance it will follow the same pattern we did, except that while we eventually realize leaves, sky, and Sun can have other shades, the AI may not. This is how a simple thought or instruction can produce flawed output. Another example: when we search Google Images for famous personalities such as actors or sportspersons, the related tags often differ vastly by gender. Men tend to be associated with terms related to their profession, while women are associated with sexist words like ‘hot,’ ‘cute,’ and so on.
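The mechanism behind that image-tag example can be sketched in a few lines of code. The data below is entirely hypothetical, and the "model" is deliberately trivial (it just memorizes the most frequent tag per group), but it illustrates the core point: a learner trained on skewed labels reproduces the skew, with no awareness that it is doing so.

```python
from collections import Counter, defaultdict

# Hypothetical (group, tag) pairs, as they might appear in scraped
# image captions. The tags for "female" are deliberately skewed toward
# appearance words, mimicking the bias described in the text.
training_data = [
    ("male", "athlete"), ("male", "athlete"), ("male", "actor"),
    ("female", "cute"), ("female", "hot"), ("female", "actress"),
    ("female", "cute"),
]

def fit_majority_tag(data):
    """Map each group to the most frequent tag seen in training."""
    tags_by_group = defaultdict(Counter)
    for group, tag in data:
        tags_by_group[group][tag] += 1
    return {g: c.most_common(1)[0][0] for g, c in tags_by_group.items()}

model = fit_majority_tag(training_data)
print(model["male"])    # the tag the model learned for men
print(model["female"])  # the tag the model learned for women
```

The model is not prejudiced in any human sense; it simply mirrors the statistics of its training data. That is exactly why biased data yields biased output.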
Mitigating this problem will be slow, arduous work. Leaders are now discussing AI bias, calling for regular audits of data and for training data collected from a wider, more representative population. The death of George Floyd opened many eyes to the rampant prejudices embedded in machines and technology. Computer programmers are experimenting with ways to spot and remove bias in data. Researchers are developing methods to make algorithms better able to explain their decisions. There is widespread demand to make algorithms more transparent and eliminate the black box, and civilians are asking that companies be held accountable for how they use algorithms. Yet let us not forget that the pivotal question is how we define bias; that definition itself makes it either a problem or a solution, diverging like two sides of the same coin.
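One concrete form the audits mentioned above can take is a group-wise outcome check. The sketch below uses made-up decision records and a simple "demographic parity" comparison: it computes the rate of favorable outcomes per group and flags the gap between the best- and worst-treated groups. Real audit tooling is far richer, but the core arithmetic looks like this.

```python
from collections import defaultdict

# Hypothetical (group, decision) records from a deployed model;
# decision 1 = favorable outcome (e.g., loan approved).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def positive_rate_by_group(records):
    """Fraction of favorable outcomes per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rate_by_group(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                     # per-group favorable-outcome rates
print(f"parity gap: {gap:.2f}")  # a large gap flags potential bias
```

A large parity gap does not prove discrimination on its own, but it tells auditors exactly where to look, which is the point of making these checks routine.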