Facial Recognition Can Reveal Your Political Orientation
By Puja Das, January 31, 2021
Facial Recognition Technology Is Now Capable of Identifying Political Views
Today, biometric identification backed by facial recognition technology has evolved toward full automation, ubiquity, and jaw-dropping accuracy. It kicks in every time one unlocks an electronic device with one's face, crosses the border between certain countries, or walks past the sensors of a modern employee attendance control system. Amid the COVID-19 pandemic, the tech has superseded tickets as a touchless way to enter stadiums and enjoy sporting events in New York and Los Angeles.
Facial recognition technology (FCT) is also a powerful instrument for law enforcement agencies tracing criminals and for governments monitoring their citizens. It is the backbone of the Skynet mass-surveillance system deployed in China, where over 600 million cameras have been installed across the country.
Researchers now claim that facial recognition technology is capable of exposing people's political orientation from naturalistic facial images, according to a report published in Scientific Reports, a journal of the Nature group.
The research, led by Michal Kosinski, Associate Professor of Organizational Behaviour at Stanford University, suggests that FCT can accurately identify individuals' political orientation, whether they hold liberal or conservative views. A common facial recognition algorithm was applied to images of over a million people, drawn from their profiles on Facebook or dating apps, to guess their political orientation by comparing their similarity to the faces of liberal and conservative others. Political orientation was accurately classified in 72% of liberal-conservative face pairs, significantly better than chance (50%), human accuracy (55%), or even a 100-item personality questionnaire (66%).
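The study's approach, comparing each face's similarity to the average liberal and average conservative face and scoring pairwise accuracy, can be illustrated with a minimal sketch. This is NOT the paper's code: the embeddings here are synthetic random vectors, and the centroid-distance classifier is only an assumed stand-in for the descriptors and model the researchers actually used.

```python
# Illustrative sketch only: classify synthetic "face embeddings" by their
# distance to the mean embedding of each political group, then measure
# pairwise (liberal vs conservative) accuracy as in the paper's framing.
import numpy as np

rng = np.random.default_rng(0)
dim = 128          # hypothetical descriptor dimensionality (assumption)
n_per_group = 500

# Synthetic data: two groups whose mean embeddings differ slightly.
lib = rng.normal(0.0, 1.0, (n_per_group, dim))
con = rng.normal(0.3, 1.0, (n_per_group, dim))

# Held-out split: fit group centroids on half, evaluate on the other half.
lib_train, lib_test = lib[:250], lib[250:]
con_train, con_test = con[:250], con[250:]
c_lib = lib_train.mean(axis=0)
c_con = con_train.mean(axis=0)

def score(v):
    # Higher score = embedding sits closer to the liberal centroid.
    return np.linalg.norm(v - c_con) - np.linalg.norm(v - c_lib)

# Pairwise accuracy: for each liberal-conservative pair, check that the
# liberal face receives the higher "liberal" score.
correct = sum(score(l) > score(c) for l, c in zip(lib_test, con_test))
acc = correct / len(lib_test)
print(f"pairwise accuracy: {acc:.2f}")
```

With a genuine signal in the embeddings, pairwise accuracy rises well above the 50% chance baseline, which is the sense in which the reported 72% figure should be read.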
A 2013 paper by Kosinski, David Stillwell and Thore Graepel disclosed the astonishing granularity of personal information, such as sexual orientation, ethnicity, religious and political views, personality traits, age, and gender, that could be detected from a study of individuals' Facebook 'likes'.
Kosinski then turned to the capacity of machine learning (ML) algorithms. In 2018, he and Yilun Wang reported research revealing that deep neural networks are more accurate than humans at identifying sexual orientation from facial images. And now, the researchers have shown an algorithm that appears adept at working out political views from individuals' faces.
Kosinski's findings have been sharply criticised. Some claim that his work embraces the pseudoscientific concept of physiognomy, the idea that an individual's character or personality can be assessed from their appearance. Others argue that Kosinski's earlier study merely showed how the differences between LGBTQIA+ and straight faces in selfies arise from grooming, presentation and lifestyle, that is, differences in culture, not in facial structure. In short, the concern is not people's morphology, but the way people present themselves on social media.
Much of the most impassioned criticism of Kosinski's work has come from members of social groups who rightly fear that facial recognition technology reinforces and legitimises the gender, sexual and ethnic biases endemic in societies. There is another fear: judging by China's harsh exploitation of facial recognition technology, it may lead to discrimination, exclusion and perhaps even genocide. The argument is that by publishing a research report that appears to lend credence to the tech industry's claims for facial recognition, Michal Kosinski is effectively making it respectable in corporate and authoritarian circles.