Microsoft’s chatbot Tay was taken down after it posted abusive content on Twitter
Artificial Intelligence (AI) is becoming more than just a technology that exists to help humans. It is steadily entering people's lives and even changing daily routines. Even though AI looks like a futuristic technology that could do only good for humans, there are growing concerns about its bias.
Science fiction movies have given us a vague picture of AI. Directors have portrayed AI robots either as humble creatures that fall in love or as vicious characters that take over humanity. To be precise, AI is neither of these. AI is a mechanism that absorbs content and reacts to it much like humans do, because it is designed and developed by humans. So if you are wondering whether AI can adopt everything that humans do, including being biased toward an ideology, the answer is yes. AI can absolutely do that. Think about the world today, where people at every corner are demanding equality: protests against racism and in support of feminism, the LGBTQ+ community and other causes continue to break out across the globe. Even if the next generation grows up broad-minded enough to see everyone as equal, societal change of that scale could take at least a hundred years. The problem is that human data is the essential substance that makes AI function, and that is where the AI-bias problem lies. AI-bias is the underlying prejudice in the data used to create AI algorithms, which can ultimately result in discrimination and other societal consequences.
We already have a lengthy record of cases where AI behaved like a biased mechanism. In 2016, Microsoft launched its first AI chatbot, named 'Tay', to interact with people on Twitter. Tay was trained with a basic grasp of language based on a dataset of anonymised public data and some pre-written material, with the intention of subsequently learning from interactions with users. However, Tay tweeted over 95,000 times within sixteen hours of launch, mostly with abusive content, and Microsoft took the chatbot down over the resulting societal concerns. In May 2016, a stunning report claimed that a computer program used by a US court for risk assessment was biased against black prisoners. According to the investigation, the program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), mistakenly labelled black defendants as likely to reoffend at nearly twice the rate of white defendants. These incidents are just a small piece of an ultra-large cake. Tackling AI-bias involves understanding the whole system and removing the poison from the root.
The reason behind AI-bias
Every human holds biased thoughts about something, and even seasoned professionals with good intentions can be influenced by biases that undermine diversity and inclusion decisions. Patterns of discrimination have long shaped existing datasets, and the data fed into any AI system is collected from human actions. Hence, we cannot expect that data to reflect a purely mechanical mindset, and the bias can be dragged into the AI system as well. It looks simple from a general perspective, but imagine your job application going through AI classification and being rejected based on your race. AI then feels like a conservative boss at the top.
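The mechanism described above can be made concrete with a deliberately tiny sketch. The data, groups and "model" below are all hypothetical: the model simply learns the historical hiring rate per group, which is enough to show how prejudice baked into past decisions resurfaces as prejudiced predictions.

```python
# Toy illustration (hypothetical data) of bias flowing from training
# data into a model's decisions. No real ML library is needed: the
# "model" just memorises the historical hiring rate per group.
from collections import defaultdict

# Hypothetical historical records: (group, qualified, was_hired)
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

def train(records):
    """Learn the fraction of applicants hired, per group."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, _qualified, was_hired in records:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

def predict(model, group, threshold=0.5):
    """Recommend hiring when the group's historical rate beats the threshold."""
    return model[group] >= threshold

model = train(history)
# Two equally qualified candidates get different recommendations,
# purely because of the historical pattern in the data.
print(predict(model, "A"))  # True  -- group A was hired 3/3 times
print(predict(model, "B"))  # False -- group B was hired 1/3 times
```

A real recruiting model would use many more features, but the failure mode is the same: the model never sees race or gender directly, yet faithfully reproduces whatever pattern the historical decisions encode.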
A way out
Addressing the problem of AI-bias starts with knowing where and how it begins. The people behind AI and its mechanisms should strive to be unbiased. Auditing decisions on who is recruited and promoted is highly important. Going a step further and understanding who is offered a promotion, assigned the hardest projects or given the chance to expand their internal networks can help us gain a clearer picture. The data fed into an AI system or robot should be filtered and bias-free. By taking these precautions, AI can move toward a more inclusive future.
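The filtering step mentioned above can start with a very simple audit. The sketch below (hypothetical data and an assumed 0.2 gap threshold) compares positive-outcome rates across groups and flags the dataset when the gap is large, a rough stand-in for the demographic-parity checks that fairness toolkits perform more thoroughly.

```python
# Minimal sketch of auditing a dataset for group-level imbalance
# before training on it: compare positive-outcome rates per group
# and flag the data when any two groups differ by more than max_gap.
from collections import defaultdict

def outcome_rates(records):
    """Positive-outcome rate per group from (group, outcome) pairs."""
    pos, total = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        total[group] += 1
        pos[group] += outcome
    return {g: pos[g] / total[g] for g in total}

def flag_disparity(rates, max_gap=0.2):
    """True when the spread between group rates exceeds max_gap."""
    values = rates.values()
    return max(values) - min(values) > max_gap

# Hypothetical labelled data: (group, positive outcome)
data = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
rates = outcome_rates(data)   # group A: ~0.67, group B: ~0.33
print(flag_disparity(rates))  # True -- a gap of ~0.33 exceeds 0.2
```

A flagged dataset does not tell you *why* the gap exists, but it forces the question to be asked before the model ships rather than after someone is harmed.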