Why Companies Need to Resolve the Problem of Bias in AI

July 29, 2019

Bias in AI

Artificial intelligence is becoming more prevalent in modern companies and has reached into nearly every aspect of daily life as well. AI presents enormous value for any company in today’s disruptive age, helping to unlock new insights from data and guide major decisions. But for everything AI makes possible, one issue has drawn increasing attention in recent years: unintended bias in AI.

There are two big sources of AI bias – data and training. An algorithm is only as good as the data companies put into it. Nearly two years ago, Google came under fire when research found that an online image search for “hands” returned results that were almost all white; a search for “black hands,” by contrast, returned far more disparaging depictions, including a white hand reaching out to offer help to a black one, or black hands working in the earth.


Understanding AI Bias

Google’s algorithms are not the only ones vulnerable to bias. As the technology becomes progressively omnipresent across every industry, it will become more and more important to eliminate any bias in it. AI is now vital and integral in many sectors and applications: it is being leveraged to help recruiters find viable candidates, to assist loan underwriters in deciding whether to lend money to customers, and even to inform judges considering whether a convicted criminal will re-offend.

Certainly, data can help humans make more informed decisions using AI, but if that AI technology is biased, its output will only be as good as the information businesses feed into it.


Addressing the Problem of Bias in AI

Although research groups and government entities have taken an interest in the potentially damaging role biased AI could play in society, the accountability largely falls to the businesses creating the technology, whether or not they are prepared to address the issue at its core. Some of the largest tech companies, including some that have been accused of overlooking the problem of AI bias in the past, are now taking steps to tackle the issue.

For instance, tech giant Microsoft is hiring artists, philosophers, and creative writers to train its AI bots in the dos and don’ts of nuanced language, such as avoiding ill-suited slang that could inadvertently produce racist or sexist remarks. Technology company IBM, meanwhile, is trying to ease bias in its AI machines by implementing independent bias ratings that assess the fairness of its AI systems.

Last year, Google CEO Sundar Pichai issued a set of AI principles intended to ensure the company’s work and research do not create or reinforce bias in its algorithms.

Solving the problem of bias in AI requires individuals, organizations, and government bodies to take a serious look at the roots of the issue. Business leaders responsible for deploying AI systems that impact society will need to offer public transparency so that bias can be monitored, integrate ethical standards into the technology, and gain a better understanding of whom the algorithm is supposed to serve.

But without proper government regulation, these kinds of solutions may be slow to take hold. The European Union, for instance, has put in place the General Data Protection Regulation (GDPR), a set of rules that gives EU citizens more control over how their data is used online. With the help of private researchers and think tanks, governments are moving quickly in this direction and grappling with how to regulate algorithms.

In a nutshell, the best way to thwart AI bias is to use comprehensive data sets that account for all potential use cases. And where the data is skewed or incomplete, IT leaders can look to external sources to fill the gaps and give the system a more complete picture.
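To make the idea of monitoring for bias concrete, below is a minimal sketch of one common fairness check: comparing a model’s approval rates across demographic groups (sometimes called demographic parity). The function names, the two-group structure, and the loan-approval data are all hypothetical and for illustration only; real audits use richer metrics and real model outputs.

```python
# Minimal sketch: measuring a demographic parity gap on hypothetical
# loan-approval decisions. All names and data here are illustrative.

def approval_rate(decisions):
    """Fraction of applicants approved (decisions are True/False)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rates across groups.

    A gap near 0 means the model approves all groups at similar
    rates; a large gap is a signal to audit the training data.
    """
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two applicant groups.
decisions = {
    "group_a": [True, True, False, True, True, False, True, True],
    "group_b": [True, False, False, False, True, False, False, False],
}

gap = demographic_parity_gap(decisions)
print(f"Approval-rate gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap of 0.50
```

A check like this does not prove a model is fair, but tracking the gap over time is one simple way an organization can offer the kind of transparency described above.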