Facebook has been increasingly deploying smart analytics over the past three quarters to fight terrorist-related content on its site, and claims to have proactively found and removed 99% of the content in question. In a recent release, the company offered some insight into its processes, including the statistics, detection systems, and bots that brought about this change.
By terrorism, Facebook refers only to ISIS and Al-Qaeda, against which the social media giant took action on 9.4 million pieces of content in Q2 2018. That figure declined to 3 million in Q3 2018, a drop the company credits to efforts made before that quarter. On average, the firm claims it now removes terrorist content less than two minutes after it is posted, versus the 14-hour average it managed earlier this year.
New Age Detection Systems putting Bots into Action
Facebook has launched a new-age detection system powered by a machine-learning tool that assesses whether posts signal support for ISIS or Al-Qaeda. The system produces a score indicating how likely a post is to violate the company's counterterrorism policies; posts with higher scores are passed to human reviewers for assessment, while the highest-scored posts are removed automatically. In the rare instances where reviewers identify the possibility of imminent harm, Facebook immediately informs law enforcement agencies. Facebook is poised to hold a tough line on terrorism and is relying almost exclusively on algorithms to do so. Deploying bots is a clear advantage: humans could never scan that much information at the pace technology now makes possible. This has made Facebook play a dominant role as judge, jury, and executioner in the information age.
Online terrorist propaganda is a fairly new phenomenon, but in the real world terrorist groups have proven highly resilient to counterterrorism efforts. It should surprise no one that the same dynamic holds on social platforms like Facebook: the more that is done to detect and remove terrorist content, the shrewder these groups eventually become.
Change in Dynamics
Sometimes these tactical shifts can be anticipated, and there is no question that the dynamic has strengthened Facebook's ability to fight online terrorism. The social media giant has not revealed much about its enforcement techniques, since doing so might prompt adversarial shifts by terrorists. But it believes it is important to provide some sense of what is being done, including informing law enforcement agencies in the rare instances when a possibility of imminent harm is identified.
Harnessing the New Machine Learning Tool
Facebook has provided information on its enforcement techniques in the past and now wants to describe, in broad terms, the new tactics and methods that may prove effective in the long run. The social media giant uses machine learning to assess posts that may signal support for ISIS or Al-Qaeda. It has developed a tool that produces a score indicating how likely it is that a post violates its counterterrorism policies, which in turn helps its team of reviewers prioritize the posts with the highest scores. In this way, the system ensures that reviewers can focus first on the most important content.
Facebook automatically removes posts when the tool indicates very high confidence that a post supports terrorism, freeing its specialized reviewers to evaluate the posts that pose the greatest security risk.
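The score-based triage described above can be sketched in a few lines. This is a hypothetical illustration only: the thresholds, function names, and the classifier itself are assumptions, since Facebook has not published its actual model or cutoffs.

```python
# Hypothetical sketch of score-based triage. Thresholds are illustrative
# assumptions, not Facebook's published values.
REVIEW_THRESHOLD = 0.70       # assumed: above this, route to human reviewers
AUTO_REMOVE_THRESHOLD = 0.98  # assumed: above this, remove automatically

def triage(score: float) -> str:
    """Return the action taken for a post given its policy-violation score."""
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"    # very high confidence: removed without review
    if score >= REVIEW_THRESHOLD:
        return "human_review"   # high score: queued for human assessment
    return "no_action"          # low score: left alone

def prioritize(posts: list[tuple[str, float]]) -> list[tuple[str, float]]:
    """Sort (post_id, score) pairs so reviewers see the highest scores first."""
    return sorted(posts, key=lambda p: p[1], reverse=True)
```

The key design point the article describes is the split between the two thresholds: only the very top of the score distribution bypasses human judgment, while the middle band is merely reordered so reviewers reach it sooner.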
Improvements to Existing Tools and Partnership Alliances
Facebook has further improved several of its existing proactive techniques to detect terrorist content more effectively in the times to come. Its experiments in algorithmically identifying violating text posts, what it calls language understanding, now work across 19 languages, and it shares digital fingerprints, or hashes, of violating video, image, text, and audio with a consortium of tech partners, including Microsoft, YouTube, and Twitter, organized by the Global Internet Forum to Counter Terrorism (GIFCT).
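The idea behind hash sharing is that partners can compare uploads against known violating content without exchanging the content itself. The sketch below uses a plain SHA-256 cryptographic hash for simplicity; this is an assumption for illustration, as industry hash-sharing databases typically rely on perceptual hashes that also match slightly altered copies, which a cryptographic hash cannot do.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Compute a SHA-256 digest serving as the file's digital fingerprint."""
    return hashlib.sha256(media_bytes).hexdigest()

# Hypothetical shared hash list contributed by consortium partners.
# Only the fingerprints are exchanged, never the media itself.
shared_hashes = {fingerprint(b"<known violating media bytes>")}

def matches_known_content(upload: bytes) -> bool:
    """Check an upload against the consortium's shared fingerprint list."""
    return fingerprint(upload) in shared_hashes
```
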
Facebook’s analysis indicates that time-to-action is a less meaningful measure of harm than metrics focused on content exposure: a piece of content might get a lot of views within minutes of being posted, or it may go largely unseen for days, weeks, or even months before being viewed or shared by another person. Terrorists are always looking to circumvent detection, and countering them requires continual improvements in technology, process, and training. These technologies get better over time, but during initial implementation they may not act as quickly as they will at maturity. That can mean a longer time-to-action, even though such improvements are critical for a robust counterterrorism effort. A narrow focus on the wrong metrics could prevent social networks from doing their most effective work for the community.
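The contrast between the two metrics can be made concrete with a toy calculation. The record format and numbers below are invented for illustration; the point is that a post removed quickly but seen widely contributes little to average time-to-action yet dominates total exposure.

```python
# Each record: (seconds until removal, views accrued before removal).
# Values are invented for illustration.
removals = [
    (60, 10_000),        # removed fast, but already widely seen
    (7 * 24 * 3600, 0),  # removed after a week, but never viewed
]

# Metric 1: average time-to-action (what the article calls less meaningful).
avg_time_to_action = sum(t for t, _ in removals) / len(removals)

# Metric 2: total exposure, i.e. views before removal (more meaningful).
total_exposure = sum(v for _, v in removals)
```

Judged by time-to-action alone, the first removal looks like a success and the second like a failure; judged by exposure, the ranking reverses, which is the article's argument for measuring harm by how many people actually saw the content.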
There is still a long way to go to combat terrorism effectively. Over time, terrorists have devised ever more dangerous ways to evade security measures, and Facebook says it recognizes its broader responsibility to counter this threat and remains committed to it.