Bias Cases in Artificial Intelligence

More people and companies are becoming interested in artificial intelligence (AI) as they see its advantages in a variety of applications.

Artificial intelligence bias (AI bias) is an anomaly in the output of machine learning (ML) algorithms caused by biased assumptions made during the algorithm development phase or by biased training data.

Here are three bias cases in AI:
Bias in Facebook ads

Human bias appears in many forms, and tech platforms are no exception. Because data from these platforms is later used to train machine learning models, those biases carry over into the models themselves.

As recently as 2019, Facebook allowed advertisers to target ads based on race, gender, and religion. For example, job ads for nursing and secretarial positions were shown primarily to women, while ads for janitor and taxi driver jobs were shown mostly to men, particularly men from minority backgrounds.

As a result, Facebook no longer permits advertisers to target ads based on age, gender, or race.

Amazon's biased recruiting tool

In 2014, Amazon began developing an AI tool to streamline its hiring process. The project focused on screening resumes and rating candidates with AI-powered algorithms, sparing recruiters the manual work of reviewing each application. In 2015, however, Amazon discovered that its new hiring algorithm was biased against women and was not rating candidates objectively.

Amazon had trained the model on ten years of historical hiring data. Because men dominated the tech industry and made up 60% of Amazon's workforce, that data carried a bias against women, and the system learned that male candidates were preferable. Resumes containing the word "women's," as in "women's chess club captain," were penalized. Amazon ultimately stopped using the algorithm in hiring.
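The mechanism is easy to reproduce. Below is a minimal sketch using synthetic toy data and a generic logistic-regression text classifier (not Amazon's actual data or model): because the historical labels are skewed, the classifier learns a negative weight for the token "women's" even though it says nothing about a candidate's ability.

```python
# Minimal sketch of label bias leaking into a resume classifier.
# Synthetic toy data, assumed for illustration; not Amazon's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical outcomes skewed against resumes containing "women's".
resumes = [
    "software engineer python chess club captain",            # hired
    "backend developer java rugby team captain",              # hired
    "data engineer sql chess club captain",                   # hired
    "software engineer python women's chess club captain",    # rejected
    "frontend developer javascript women's coding society",   # rejected
    "data analyst sql women's netball team captain",          # rejected
]
hired = [1, 1, 1, 0, 0, 0]

# Keep apostrophes in tokens so "women's" survives tokenization.
vectorizer = CountVectorizer(token_pattern=r"[a-z']+")
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The token "women's" gets a negative coefficient: the bias in the
# historical labels is now baked into the model's scoring.
idx = vectorizer.vocabulary_["women's"]
print(f"coefficient for \"women's\": {model.coef_[0][idx]:.3f}")
```

Nothing in this pipeline is malicious; the skew in the training labels alone is enough to produce a model that penalizes the term.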

Racial bias in healthcare risk algorithm

A health risk-prediction algorithm used on more than 200 million Americans was found to exhibit racial bias because it relied on a flawed proxy to measure medical need.

The algorithm was designed to identify patients likely to benefit from additional medical care, but it was later discovered to produce results that favored white patients over black patients.

The algorithm's designers used patients' past medical expenses as a proxy for actual medical need. Because wealth and race are strongly correlated, this was a poor reading of the historical data: black patients with the same level of need had historically incurred lower healthcare costs, so the algorithm systematically underestimated their risk.
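This failure mode is straightforward to demonstrate on synthetic data. The sketch below is an illustration of the proxy problem under assumed numbers, not the deployed algorithm: two groups have identical distributions of medical need, but one group incurs lower costs for the same need, so ranking patients by cost flags that group for extra care far less often.

```python
# Minimal sketch of proxy bias: cost stands in for medical need.
# Synthetic data with assumed parameters; not the real algorithm.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true medical need
group = rng.integers(0, 2, size=n)               # two groups, equal need on average

# Assumption for illustration: group 1 incurs 40% lower costs at the
# same level of need (e.g. due to unequal access to care).
cost = need * np.where(group == 0, 1.0, 0.6) + rng.normal(0.0, 0.1, n)

# Flag the top 10% by cost for extra care, using cost as the risk score.
flagged = cost >= np.quantile(cost, 0.90)

# Identical need, very different flag rates.
for g in (0, 1):
    print(f"group {g}: mean need = {need[group == g].mean():.2f}, "
          f"flagged = {flagged[group == g].mean():.1%}")
```

Both groups print the same mean need, yet the lower-cost group receives only a small fraction of the flags, which mirrors the kind of disparity reported in the real system.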
