Why is AI Considered a Misfit to Read Human Emotions?

Emotion AI is a poor fit for the job because it can easily misread human emotions

AI has been reshaping industries and business ecosystems with its capacity to accelerate automation and provide business intelligence. Disruptive technologies like artificial intelligence, machine learning, and blockchain have enabled companies to create better user experiences and advance business growth. Emotion AI is a more recent development in the field, and its proponents claim that AI systems can read facial expressions and analyze human emotions. This approach is also known as affect recognition technology.

Recently, Article 19, a British human rights organization, published a report on the increasing use of AI-based emotion recognition technology in China by law enforcement authorities, corporate bodies, and the state itself. The report argues that this growing use will harm individual freedom, human rights, and freedom of expression. Accordingly, the publication calls for strategic advocacy against the design, development, sale, and use of emotion recognition technologies.

The 2019 annual report from the AI Now Institute recommends a ban on the use of affect recognition technologies and says that AI companies should stop deploying them. According to the report, the technology is built on shaky scientific foundations and can affect people's lives and restrict their access to opportunities. Nor is this an isolated view; other recommendations and claims have been made about the bias and detrimental impacts of AI-based emotion recognition.

How does it work?

Emotion recognition detects human emotions, most commonly from facial expressions. Training a machine learning model to do this depends on a large, well-labeled database: the system learns the relationship between an emotion and its external manifestation from labeled examples. Different emotions are detected by combining the analysis of facial expressions, voice tone, gestures, speech patterns, and more. Machine learning algorithms fed with this data form the core of AI-based emotion recognition, and deep learning is often used to train such systems to recognize and classify human emotions.
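To make the pipeline concrete, below is a minimal sketch of the supervised-learning step described above, written in Python with scikit-learn. It assumes the hard work has already been done elsewhere: each sample is a numeric feature vector (here a stand-in for, say, facial landmark coordinates) paired with a human-assigned emotion label. The features and labels are randomly generated purely for illustration, so the reported scores are meaningless; the point is the shape of the training step, not a working detector.

```python
# Minimal sketch of supervised emotion classification, assuming features
# (e.g., facial landmarks) have already been extracted and labeled by humans.
# All data below is synthetic and for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]

rng = np.random.default_rng(seed=42)
X = rng.normal(size=(1000, 68 * 2))            # stand-in for 68 (x, y) facial landmarks
y = rng.integers(0, len(EMOTIONS), size=1000)  # stand-in for human-assigned labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# The model learns a mapping from external features to emotion labels;
# it knows nothing about a person's actual inner state.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test), target_names=EMOTIONS))
```

In a real system, the labeled database rather than the classifier is the bottleneck: the model can only reproduce whatever mapping between expressions and emotions its human annotators encoded, which is where much of the bias discussed below enters.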

What is the use?

Proponents of emotion recognition and its commercialization describe many potential use cases: identification of suspicious individuals by the state and law enforcement, facial emotion detection in smart cars, assessing a candidate's personality in job interviews, video game testing, and understanding patients in healthcare. These are just a few, and several of them border on privacy intrusion.

There are several examples of companies selling emotion recognition products, such as Sonde Health, Healium, and Inner Balance. These products mostly let users assess their own emotional state and mental well-being, which doesn't seem very harmful, does it? However, the AI Now Institute report calls out companies like Oxygen Forensics, which sells facial recognition and emotion detection software to police departments, the FBI, and other agencies. Another company highlighted in the report is HireVue, which screens job applicants and runs affect recognition on them to judge their suitability.

How Biased is Emotion Recognition Technology?

The widely accepted and used facial recognition technology has been shown to exhibit many biases. Emotion recognition goes a step further: where facial recognition is primarily used for identification, emotion recognition attempts to infer a person's inner emotional state by analyzing their expressions. Infringement of human rights, risk of bias, and misread emotional analysis are the main concerns with this technology.

These systems can be misused for surveillance, for the projection of state power, for controlling individuals' access to services, and more. Analyzing and categorizing humans based on their inner emotions sounds out of place, and critics argue that the technology should be prohibited before it grows. It may already have: research by Markets and Markets suggests that the global emotion detection and recognition market will reach USD 37.1 billion by 2026, growing at a CAGR of 11.3%.

A study by Lauren Rhue found that emotion recognition technology assigns more negative emotions to people of certain ethnicities, labeling their faces as up to twice as angry as other faces. Beyond these inherent biases, the core problem with emotion recognition is that our emotions are not fully captured by our facial expressions or voice tone: a smiling person is not necessarily happy inside. Speculation around emotion recognition and calls to ban it continue to circulate, alongside claims that the technology can be useful if deployed responsibly.
