Safeguarding Communities with An Innovative Automated Content Moderation Platform

The internet is a powerhouse of interesting information. At the same time, it can be a dark and scary place, with known and anonymous trolls, harassment, pornography, and illegal content. Artificial Intelligence (AI) tools can be used to keep track of such harmful content. Two Hat Security is a company that pioneers AI-powered protection of online communities from abusive user-generated content. In an exclusive interaction with Analytics Insight, Chris Priebe, Founder and CEO of Two Hat Security, explains how the company has developed a highly precise, automated content moderation platform to safeguard communities 24/7 around the globe.

Kindly brief us about the company, its specialization and the services that your company offers.

Two Hat Security is an AI-based technology company that empowers gaming and social platforms to grow and protect their online communities. Our flagship product Community Sift is an enterprise-level content filter and automated chat, image, and video moderation tool. Online communities use Community Sift to proactively filter abuse, harassment, hate speech, adult content, and other disruptive behavior.

Our newest product is CEASE.ai, a cutting-edge AI model that detects new child sexual abuse material (CSAM, formerly known as child pornography) for law enforcement and social platforms.

How is IoT/Big Data/AI/Robotics evolving today in the industry as a whole?

We are beginning to see the second era of AI in content moderation. In the first era of AI, we could find 80-90% of the problems. That was a problem, because no one wants to miss suicide threats 10% of the time. Who would want to block out 20% of the conversation because the AI is confused? It had some uses for quality control and triage purposes, but it could not be on the front line. One challenge was that most of the research was being done in natural language processing. But there is nothing natural about how people subvert the technology. The moment you taught the AI to find "badword" it missed "b4dw0rd," then "b.a.d.w.o.r.d," with the only limit being the free time of 500 million teens sitting in front of a computer. So, AI could predict what problem you had yesterday, but the moment it was deployed, it was obsolete, because those who wanted to do harm would no longer act like that – they would find another manipulation.
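
To make that concrete, here is a minimal sketch (not Two Hat's actual uNLP code; the substitution table and blocked term are invented for illustration) of how a filter might normalize common character substitutions and separators before matching:

```python
import re
import unicodedata

# Illustrative only: a real system covers far more substitutions and languages.
SUBSTITUTIONS = str.maketrans({"4": "a", "@": "a", "0": "o", "3": "e", "1": "i", "$": "s"})
BLOCKLIST = {"badword"}  # hypothetical blocked term

def normalize(text: str) -> str:
    # Fold Unicode look-alikes to ASCII where possible, e.g. full-width letters.
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = text.lower().translate(SUBSTITUTIONS)
    # Strip separators used to break up words: "b.a.d.w.o.r.d" -> "badword"
    return re.sub(r"[^a-z]", "", text)

def is_flagged(message: str) -> bool:
    return any(term in normalize(message) for term in BLOCKLIST)

print(is_flagged("b4dw0rd"))        # True
print(is_flagged("b.a.d.w.o.r.d"))  # True
```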

In the second era, we have now achieved nearly 99% accuracy on finding pornography. We do that by taking multiple AI systems and blending them all together, so that each fills in the blind spots of the others.
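
As a rough illustration of that blending, the sketch below (the model names, scores, and weights are hypothetical, not the production ensemble) averages the confidence scores of several independently trained detectors so that one model's blind spot can be covered by the others:

```python
def ensemble_score(model_scores: dict, weights: dict) -> float:
    """Blend per-model confidence scores with a weighted average.
    Weights would normally be tuned on a held-out validation set."""
    total = sum(weights.values())
    return sum(weights[name] * score for name, score in model_scores.items()) / total

# Hypothetical scores from three independently trained adult-content detectors.
scores = {"skin_model": 0.62, "cnn_model": 0.91, "context_model": 0.88}
weights = {"skin_model": 0.2, "cnn_model": 0.5, "context_model": 0.3}

blended = ensemble_score(scores, weights)
print(f"blended score: {blended:.3f}")
# Flag if the blended score crosses a threshold chosen for the target precision.
print("flag" if blended >= 0.8 else "allow")
```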

Further to that, we invented a new discipline of AI called uNLP (Unnatural Language Processing) that finds the new ways people abuse each other. And when we hit 99%, we'll be ready for a new discussion about what AI can do. To be clear, we will always need humans, or HI (human intelligence), to handle the subjective, hard decisions that require empathy and deep context.

How does your company's rich expertise help uncover patterns with powerful analytics and machine learning?       

We process over 27 billion messages every month and are growing rapidly. We think of what we do as similar to antivirus technology. No one wants to build their own antivirus technology. It's not that they can't, as many companies are smart enough to build the technology, but if they built it themselves, they would always have to be the first to be infected. Instead, we look for patterns that we call "social viruses." We diligently provide quality control on them and roll them out around the world. New terms are created online all the time. A few years ago, we picked up on the term "blue whale challenge" and saw it on some of our networks. We were then able to add that to our signatures and protect the rest of our network in advance.
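
A minimal sketch of that antivirus-style idea, assuming a hypothetical shared signature store (the class and terms here are illustrative, not the actual Community Sift implementation):

```python
from dataclasses import dataclass, field

@dataclass
class SignatureStore:
    """Shared list of 'social virus' signatures rolled out across all communities."""
    signatures: set = field(default_factory=set)

    def add_signature(self, phrase: str) -> None:
        # In practice a human reviews the candidate before it is rolled out network-wide.
        self.signatures.add(phrase.lower())

    def matches(self, message: str) -> list:
        text = message.lower()
        return [sig for sig in self.signatures if sig in text]

store = SignatureStore()
store.add_signature("blue whale challenge")   # term first observed on one network

# Every other community is now protected before the term trends there.
print(store.matches("has anyone tried the Blue Whale Challenge?"))
```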

AI is projected to be the next market. How is AI contributing to the making of your products and services?    

AI is indeed a huge market, but there is a bit of hype around it. There is an expectation that AI will take over human jobs, which is a big mistake. I was greatly influenced by J.C.R. Licklider's work "Man-Computer Symbiosis." It is not the job of AI to be human. Computers should do what computers do well, like processing billions of documents, counting how many times things occur, comparing them to known patterns, etc. Likewise, humans should focus not on trying to read billions of documents but on being human – looking for abstract connections, empathizing, making subjective calls based on a non-linear intuition.

When we get this backward, we end up hiring humans to be drones and letting computers decide the fate of humans. Instead, computers should be maximized to process nearly unlimited documents at the speed of light, summarize the data against common patterns, then present it to humans who interpret it for meaning and make decisions – which in turn informs the computer of what data to collect and present next time.

This is highly relevant to content moderation. It is impossible for humans to keep up with the billions of pieces of content that are posted on social networks. So, instead, we use computers to process that content and escalate to humans the items that require human intelligence (HI) to make the right decision – which in turn informs the computer to find better content next time.
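
That division of labor can be sketched as a simple triage rule, where the machine auto-actions the clear cases and routes only the ambiguous middle band to a human queue (the thresholds and labels here are illustrative, not the actual pipeline):

```python
def triage(message: str, risk_score: float) -> str:
    """Route content based on a classifier's risk score.
    Thresholds are illustrative; real systems tune them per community."""
    if risk_score >= 0.95:
        return "auto-remove"          # clearly violating: no human needed
    if risk_score <= 0.10:
        return "auto-approve"         # clearly benign: no human needed
    return "escalate-to-moderator"    # ambiguous: requires human intelligence (HI)

# Moderator decisions on escalated items feed back as labels for the next training round.
queue = [("hello there", 0.02), ("you should hurt yourself", 0.97), ("that's sick", 0.40)]
for text, score in queue:
    print(text, "->", triage(text, score))
```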

What is the edge your company has over other players in the industry?

We've been doing this for a long time. I've been in this space myself for 20 years, and I formed Two Hat nearly seven years ago. We're battle-tested. We deliberately started by working with the largest social sites in the world. And it was deliberate that we started with the gaming industry and with kids' products. No one knows how to push the technology further than 500 million tweens determined to get around a chat filter. To many of them, it's a game – and we have gotten good at their game.

Other companies will come along, and I hope many of them create amazing AI. The problem is that the moment they go live, the audience will shift. AI can only be trained on what happened in the past – but people aren't robots. As soon as they realize they will get caught, they will use a new pattern, and the AI will be obsolete. It is here that we've been tested – the day after social platforms go live with our filter.

Could you highlight your company's recent innovations in the AI/ML/Analytics space?

We are proud of the work we have done building a model called CEASE.ai to detect child exploitation. As we talked to sites about removing pornography, some of them cared, but what really mattered was that no one wanted child pornography (now called child sexual abuse material, or CSAM) on their site. The challenge is that it is illegal to possess the material, so no one could train on it.

Through several years of effort, we found a way to work side by side with law enforcement (the other "hat" of Two Hat Security) so they could train the models in their secure environment. This was great because the volume of images is growing exponentially, and investigators are swamped with massive caseloads.

So, we could help them get through their images faster and rescue abused children – and at the same time create models the industry desperately needed but no one could make.

When we first set out to do this, no one really wanted to train AI on data they could not see. It was like playing 20 questions, where you send an experiment off and slowly work your way to the right answer. However, we were fortunate to find some brave universities in Canada and a lot of grants to help us build the model.

What is your roadmap for the content moderation solution market?

We are investing heavily in the image filtering space. This is funded by over $2 million in grants and programs that give us access to the brightest minds we can find. It led us to acquire the image recognition and visual search company ImageVision. We've now achieved a new industry benchmark of 98.97% F1 and accuracy, which beats our previous benchmark of 97% and the best cloud provider at 96.97%. We did this by training multiple models and ensembling, or blending, them together. Our next step is to continue training new models and blending them in to give us even higher accuracy.
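
For context on those benchmark numbers, F1 is the harmonic mean of precision and recall. A hedged sketch of how one might score a blended image model against a labeled test set (the labels and predictions below are toy data, not Two Hat's actual evaluation):

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy labels: 1 = adult content, 0 = safe. Real evaluation uses a large held-out test set.
y_true = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]   # hypothetical blended-ensemble predictions

print(f"F1:       {f1_score(y_true, y_pred):.4f}")
print(f"Accuracy: {accuracy_score(y_true, y_pred):.4f}")
```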

On the text side, we are expanding from 20 to 30 languages. Currently, our English is at about 99.5% accuracy, but our other languages are just above 95%. We want to bring all of that to 99.5% or higher.

For text, we haven't been able to use traditional AI in production yet, as the quality is just not high enough. Instead, we use AI to do quality control on our filter and recommend new signatures for our team to approve, and then we roll those into our uNLP algorithm. As of today, neural nets, decision trees, and linear regression models do not work well as a filter when high accuracy matters.

However, they do work well if you take the other features from our uNLP program and look at the whole conversation to predict whether a moderator will take action on reported content. So, you can use the filter to remove the worst content, and if something gets through, users can report it. We take the decisions moderators make about those reports and predict what they will do with future reports. This allows us to automatically remove about 50% of the workload, as many of these cases are pretty simple, while the remaining content needs a human to review.

One of the surprising discoveries we had while building "Predictive Moderation" was that the filter severity level was by far the number one feature the other AI system used to predict what action to take. This is an important discovery.
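
A hedged sketch of that "Predictive Moderation" idea: train a simple classifier on past moderator decisions about user reports, with the filter's severity level as one of the input features (the feature set and data below are invented for illustration; the real system uses richer conversation-level features):

```python
from sklearn.linear_model import LogisticRegression

# Each row: [filter_severity, report_count, reporter_trust]; label 1 = moderator actioned the report.
# Hypothetical training data; in practice these come from past moderator decisions.
X = [
    [3, 5, 0.9],
    [2, 2, 0.7],
    [0, 1, 0.2],
    [1, 1, 0.5],
    [3, 3, 0.8],
    [0, 2, 0.4],
]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# A new report with high filter severity: if the predicted probability is high enough,
# it can be auto-actioned; borderline cases still go to a human moderator.
prob_actioned = model.predict_proba([[3, 4, 0.85]])[0][1]
print(f"probability moderator would action: {prob_actioned:.2f}")
```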

Up until now, our biggest competitor has been the big social networks themselves, which often want to build filters and content moderation solutions in-house. That makes some sense – they have lots of data and smart people working for them. But what we are seeing is that if you blend different AI systems that are trained differently (as we do with our images), you get better results. Since the problem of abuse, harassment, hate speech, and suicide threats is so big and so important, we are trying to find ways to provide our system as one of many features feeding other people's AI, to give them a lift and a better answer.
