Did Humans Fail Against An Algorithm In Decision Making?

Who will win, a human or an algorithm? 

In the digital age, algorithms have crept their way into our lives. And while there is growing concern about their intrusion, it seems people are willing to trust computers more than fellow humans, especially when a task is too complex, according to data scientists at the University of Georgia. 

"Algorithms are able to do a huge number of tasks, and the number of tasks that they are able to do is expanding practically every day," said Eric Bogert, a Ph.D. student in the Terry College of Business. "It seems like there's a bias towards leaning more heavily on algorithms as a task gets harder, and that effect is stronger than the bias towards relying on advice from other people." 

Bogert's perspective is grounded in the work he did for his paper "Humans rely more on algorithms than social influence as a task becomes more difficult," co-authored with management information systems professor Rick Watson and assistant professor Aaron Schecter. The paper, published in Nature's Scientific Reports journal, studied 1,500 individuals' evaluations of photographs in order to analyze how and when people rely on algorithms. 

To conduct the study, the team asked volunteers to count the people in a photograph of a crowded place, while showing them suggested answers from another group of people and from a computer algorithm. As the photographs grew more crowded, counting became more difficult, and at that point the volunteers began to follow the suggestion generated by the algorithm rather than their own count or the count suggested by other humans. 

"This is a task that people perceive that a computer will be good at, even though it might be more subject to bias than counting objects," Schecter said. "One of the common problems with AI is when it is used for awarding credit or approving someone for loans. While that is a subjective decision, there are a lot of numbers in there, like income and credit score, so people feel like this is a good job for an algorithm. But we know that dependence leads to discriminatory practices in many cases because of social factors that aren't considered." 

Facial recognition and hiring algorithms have faced heavy scrutiny in recent years. The reason? Their use has revealed cultural biases in how they perform, causing inaccuracies when matching faces to identities or when screening qualified candidates for a particular job role. 

Schecter added that while these biases might not be present in a simple task like counting, their presence in other widely trusted algorithms is what makes it so important to understand how and why people rely on algorithms when making decisions. 

The study is part of Schecter's larger research program into human-machine collaboration, which received $300,000 in funding from the U.S. Army Research Office. "The eventual goal is to look at groups of humans and machines making decisions and find how we can get them to trust each other and how that changes their behavior," Schecter said. "Because there's very little research in that setting, we're starting with the fundamentals." 

Schecter, Watson, and Bogert are currently studying how people rely on algorithms when making creative and moral judgments, such as writing creative passages.

Analytics Insight