Is Use of Emotion Analytics for Recruitment Processes Ethical?

March 4, 2020


The age of analytics has transformed our lives in many ways; from professional to personal life, technology's influence is everywhere. In organizations specifically, analytics combined with artificial intelligence (AI) has given numerous businesses a new edge, and even HR and recruitment departments now employ such advancements. Recruitment teams at organizations based in the US and South Korea are using Emotion Analytics, also called Emotional AI, to revamp the way candidates are hired. AI and analytics technologies can help address loopholes in the recruitment process in different ways. But when it comes to Emotion Analytics, is it really reliable?

Emotion Analytics cross-references facial movements and body language to analyze how suitable a person is for a role. The analysis is also claimed to predict whether the person will be successful in the position, and Emotional AI algorithms are even supposed to identify when a candidate is lying.

According to a report, the process varies by software, but the emotion analytics test generally begins with applicants completing a 20-minute task made up of neuroscience-based video games, with each decision revealing something about their personality. Attitude towards risk features heavily in this section, as employers can supposedly learn whether the candidate is a potential liability. This is the first filtering of applicants based on their emotional characteristics, and a concerning aspect of the gamified element is that many older candidates are unfamiliar with the format, putting them at an automatic disadvantage in the selection process.
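To make the gamified stage concrete, here is a minimal sketch of how in-game decisions might be reduced to a "risk attitude" feature. Everything in it (the feature names, the weights, the reaction-time heuristic) is a hypothetical assumption for illustration; actual vendor models are proprietary and undisclosed.

```python
# Hypothetical sketch: deriving a "risk attitude" feature from a
# candidate's decisions in a gamified assessment.
# All names, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GameDecision:
    chose_risky_option: bool   # did the candidate take the gamble?
    reaction_time_ms: float    # how quickly they decided

def risk_attitude_score(decisions: list[GameDecision]) -> float:
    """Return a 0-1 score: higher means more risk-seeking behavior."""
    if not decisions:
        return 0.0
    risky_fraction = sum(d.chose_risky_option for d in decisions) / len(decisions)
    # Faster decisions under uncertainty are (hypothetically) read as impulsivity.
    avg_rt = sum(d.reaction_time_ms for d in decisions) / len(decisions)
    impulsivity = max(0.0, min(1.0, (2000 - avg_rt) / 2000))
    return 0.7 * risky_fraction + 0.3 * impulsivity
```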

The next step is the video interview. Pre-set questions are answered via a call on a mobile, tablet, or computer. The AI then captures the candidate's movements, both voluntary and involuntary, to assess their mood and traits.
The data from both steps is collected and reviewed, and a score is generated to reflect the candidate's level of 'employability' for the job in comparison with other applicants.
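The sketch below shows one plausible way the two stages could be combined into a single ranking. The video-derived feature set, the weights, and even the idea that a simple linear combination is used are assumptions made purely to illustrate the pipeline, not a description of any real product.

```python
# Hypothetical combination of game-stage and video-stage features into
# an "employability" score. Features and weights are illustrative only.

def employability_score(risk_attitude: float,
                        video_features: dict[str, float]) -> float:
    """Combine stage scores into a 0-1 'employability' score.

    video_features is assumed to hold 0-1 values such as
    'enthusiasm', 'calmness', and 'eye_contact' inferred by the
    emotion-AI step (an assumption, not a documented API).
    """
    video_part = (0.4 * video_features.get("enthusiasm", 0.0)
                  + 0.3 * video_features.get("calmness", 0.0)
                  + 0.3 * video_features.get("eye_contact", 0.0))
    # Lower risk appetite is (hypothetically) rewarded for liability reasons.
    return 0.5 * video_part + 0.5 * (1.0 - risk_attitude)

def rank_candidates(candidates: dict[str, float]) -> list[tuple[str, float]]:
    """Sort candidate -> score pairs from highest to lowest."""
    return sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)

# Example: two hypothetical applicants compared against each other.
alice = employability_score(0.2, {"enthusiasm": 0.8, "calmness": 0.7, "eye_contact": 0.9})
bob = employability_score(0.6, {"enthusiasm": 0.5, "calmness": 0.6, "eye_contact": 0.4})
print(rank_candidates({"alice": alice, "bob": bob}))
```

Note how opaque such a scheme is to the people it judges: a candidate cannot know which behaviors moved their score, which is precisely the kind of concern the critics cited below raise.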

However, one of the world's leading experts on the psychology of emotions has warned that AI systems which companies claim can "read" facial expressions are based on outdated science and risk being unreliable and discriminatory.

Lisa Feldman Barrett, professor of psychology at Northeastern University, said that such technologies appear to disregard a growing body of evidence undermining the notion that basic facial expressions are universal across cultures. As a result, these technologies, some of which are already being deployed in real-world settings, run the risk of being unreliable or discriminatory, she said.

Moreover, the research institute AI Now released a report in December 2019 stating that emotion analytics has no scientific basis and should be banned from influencing decisions about people's livelihoods.

The report calls for businesses and governments to ban emotion analytics technology until the risks involved have been studied in more depth. AI Now went further still, condemning the "systemic racism, misogyny, and lack of diversity" in the AI industry as a whole.
