Facebook detects fake or exploitative posts and flags them with warning labels, keeping about 95% of users from clicking through.
Facebook is a powerful social media platform for building communities and sharing trustworthy information. This held true when the social media giant weeded out fifty million posts full of coronavirus falsehoods in April alone, along with 2.5 million ads for COVID-19 test kits, hand sanitizers, face masks, and surface-disinfecting wipes. These product ads came to light at a time when Facebook had temporarily banned ads and commerce listings for medical face masks amid growing concern over coronavirus-related exploitation.
The ban came into force to stop scammers who try to profit from the fear and anxiety surrounding the coronavirus pandemic. The content identified as problematic and flagged or removed by Facebook is just the tip of the iceberg on a platform accessed daily by billions worldwide.
Leveraging Artificial Intelligence to Fight Hate Speech
In its explanatory post, Facebook AI says, “These are difficult challenges, and our tools are far from perfect”. The social media giant regularly shares updates on its efforts to fight hate speech and other problematic content. Facebook uses computer vision to address fake and malicious news and shares but, in its own words, feels that addressing these problems requires an extensive toolkit of AI technologies, such as multimodal content understanding.
Facebook relies heavily on human fact-checkers, collaborating with 60 fact-checking organizations around the world, and uses AI to supplement their scrutiny. The results have been encouraging: in April 2020, the social media giant applied warning labels to about 50 million pieces of COVID-19-related content, based on around 7,500 articles scrutinized by its independent fact-checking partners. This move kept about 95 per cent of its users from clicking through to view the misleading content.
Facebook’s Similarity Detector
To identify misinformation related to articles spotted by fact-checkers, Facebook’s systems had to be trained to detect images that appear identical to the human eye but differ at the pixel level, where a naive algorithm would treat them as distinct. A good example is a screenshot captured from a post: to the human eye, the screenshot may look the same as the original picture, but to a computer, the pixels are different. This is the job of Facebook’s AI-powered “similarity detector”, which recognizes near-duplicate variants of an image, including ones that look graphically similar but carry different information, so that fake and misleading posts can be flagged.
Facebook CTO Mike Schroepfer, addressing a press conference to dispel concerns around fake news, said, “What we want to be able to do is detect those things as being identical because they are, to a person, the same thing. Our previous systems were very accurate, but they were very fragile and brittle to even very small changes. If you change a small number of pixels, we were too nervous that it was different, and so we would mark it as different and not take it down. What we did here over the last two and a half years is build a neural net-based similarity detector that allowed us to better catch a wider variety of these variants again at very high accuracy.”
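Facebook’s actual detector is a proprietary neural network, but the core idea can be illustrated with a much simpler technique: perceptual hashing. The toy “average hash” below (all values are made up for illustration) maps two images that differ slightly at the pixel level to the same compact fingerprint, while a byte-for-byte comparison would call them different.

```python
# Illustrative sketch only: Facebook's real system uses a learned neural
# similarity model, not this toy average hash.

def average_hash(pixels):
    """Hash a grayscale image (2-D list of 0-255 values) to a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the image average,
    # so small per-pixel noise usually leaves the hash unchanged.
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny "original" image and a "screenshot" of it with slight pixel noise.
original = [[200, 30], [40, 220]]
screenshot = [[198, 33], [42, 217]]

print(original == screenshot)  # False: pixel-exact comparison fails
print(average_hash(original) == average_hash(screenshot))  # True: hashes match
```

A real pipeline would compare hashes with a small Hamming-distance threshold rather than exact equality, so that heavier edits (crops, overlays) can still be matched.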
In addition to the similarity detector, Facebook is applying its existing multimodal content analysis tools to interpret content posted on the platform. To block coronavirus product ads, which could cause panic among its more than 1.65 billion users worldwide, Facebook launched a new system that extracts objects from images that violate its policy, adds those objects to a database, and then automatically checks objects in any newly posted image against that database. This extensive database trains its classifier to find specific objects, such as face masks or hand sanitizer, in new images posted on the platform so they can be flagged as suspicious. To improve accuracy, Facebook also adds a negative image set to the model: images that are not face masks, such as a handkerchief or a sleep mask, that the classifier might otherwise mistake for one.
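The matching step described above can be sketched as a nearest-neighbor lookup over object feature vectors. Everything in this sketch is hypothetical: the object names, the three-dimensional vectors, and the distance threshold stand in for the learned image embeddings Facebook’s real system would use. The negative set plays the same role as in the article: a visually similar but benign object that sits closer to the query suppresses a match.

```python
# Hedged sketch of banned-object matching with a negative image set.
# Vectors and threshold are invented for illustration.
import math

# Feature vectors for objects extracted from policy-violating ads.
BANNED_OBJECTS = {
    "face_mask": [0.9, 0.1, 0.8],
    "hand_sanitizer": [0.2, 0.9, 0.7],
}

# Negative set: look-alike objects that must NOT trigger a match.
NEGATIVE_OBJECTS = {
    "handkerchief": [0.8, 0.1, 0.3],
    "sleep_mask": [0.7, 0.2, 0.6],
}

def _distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def flag_object(vector, threshold=0.3):
    """Flag only if the nearest banned object is both close enough and
    closer than every negative (look-alike) example."""
    banned_d = min(_distance(vector, v) for v in BANNED_OBJECTS.values())
    negative_d = min(_distance(vector, v) for v in NEGATIVE_OBJECTS.values())
    return banned_d < threshold and banned_d < negative_d

print(flag_object([0.88, 0.12, 0.78]))  # True: close to "face_mask"
print(flag_object([0.79, 0.11, 0.31]))  # False: closest to "handkerchief"
```

The design choice to require the query to beat the negative set, not just clear a fixed threshold, is what keeps a handkerchief photo from being flagged as a face mask.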
Artificial intelligence-based classification models are a huge step that amplifies the efforts of the 35,000 human moderators the giant employs. Schroepfer stressed that with these mechanisms in place, people stay in the loop and in control. “I’m not naive,” he said. “I don’t think AI is the solution to every problem. But with AI, we can take the drudgery out and give people power tools, instead of looking at similar images day after day.”
Though much work remains to be done, officials at Facebook are confident that they can “build on their efforts so far, to further improve the systems, and do more to protect people from harmful content related to the pandemic.”