Can Bullying be Stopped with Technology?

February 20, 2019


The blessings of AI in helping to stamp out bullying extend to saving lives too. Research estimates that some 3,000 people around the world take their own lives each day owing to bullying; that is roughly one death every 30 seconds.


Bullying can leave long-lasting scars, causing long-term damage to victims' prospects, finances, relationships and health. As people spend an increasing amount of time online, they are exposed to forms of bullying that may be faceless but can be just as devastating. Research estimates that young people subjected to cyberbullying suffer more from depression and are at least twice as likely to self-harm and attempt suicide. Worrying, isn't it?

Cyberbullying has been a hot topic lately in organisational circles, with many tech companies facing criticism for failing to enforce effective harassment and hate-speech policies. To limit the damage, Facebook, Instagram's parent company, has hired thousands of people to review content that potentially violates its rules of conduct. For now, however, the race between the company and the trolls who post offensive content continues to favour the trolls.


Curbing Bullying in the Age of Social Media

The digital world has spared no one from its influence. Take a social media giant like Instagram: one survey found that 42% of teenage users have experienced some kind of bullying on the platform, and in extreme cases distressed users have taken their own lives. Nor does the bullying stop at teenagers: Brian May, lead guitarist of the rock band Queen, is among those who have been attacked on Instagram.


Curbing Bullying on Instagram

Adam Mosseri, the new head of Instagram, has acknowledged that online bullying is a complicated problem affecting a significant number of users. In October 2018, Instagram announced that it had started using artificial intelligence to detect cyberbullying in photos uploaded to its platform. Users welcomed the move, which highlighted tech companies' efforts to bring automation into their moderation processes.

The photo-sharing giant deploys AI-powered text and image recognition to detect bullying in videos, captions and photos posted on its platform. Since 2017, the company has used a bullying filter to mask toxic comments, and it recently began using machine learning to detect attacks on users' character or appearance, in split-screen photographs for instance. Instagram says that proactively identifying and removing contentious material is a crucial measure, as many victims of bullying never report the incident themselves. It also allows action to be taken against repeat offenders who post offensive content.
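Instagram's production models are not public, but the basic idea of a learned comment filter can be sketched in a few lines. The toy example below trains a Naive Bayes text classifier on a handful of invented labelled comments and masks anything the model scores as likely bullying; the training data, vocabulary and 0.5 threshold are all assumptions for illustration, not Instagram's actual system.

```python
from collections import Counter
import math

# Toy labelled comments -- invented for illustration, not a real corpus.
# Label 1 = bullying, label 0 = benign.
TRAIN = [
    ("you are so stupid and ugly", 1),
    ("nobody likes you just quit", 1),
    ("what a loser you are", 1),
    ("great photo love the colours", 0),
    ("congrats on the win well done", 0),
    ("see you at the game tonight", 0),
]

def train(examples):
    """Count word frequencies per class for a Naive Bayes model."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in examples:
        words = text.split()
        counts[label].update(words)
        totals[label] += len(words)
    vocab = set(w for c in counts.values() for w in c)
    return counts, totals, vocab

def bully_score(text, counts, totals, vocab):
    """Return P(bullying | text) via Naive Bayes with add-one smoothing."""
    logp = {0: 0.0, 1: 0.0}
    for label in (0, 1):
        for w in text.split():
            logp[label] += math.log(
                (counts[label][w] + 1) / (totals[label] + len(vocab))
            )
    # Convert log-probabilities back to a normalised score.
    m = max(logp.values())
    p0, p1 = math.exp(logp[0] - m), math.exp(logp[1] - m)
    return p1 / (p0 + p1)

counts, totals, vocab = train(TRAIN)
comment = "you are such a stupid loser"
score = bully_score(comment, counts, totals, vocab)
# Mask rather than delete, as a bullying filter that hides toxic comments might.
display = "[comment hidden]" if score > 0.5 else comment
```

A real system would of course use far larger models and training sets, and image recognition alongside text, but the shape of the pipeline (score, then act on a threshold) is the same.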


Identifying Bullies on Facebook

Since last year, Facebook has been deploying AI to identify posts from people who might be at risk of suicide. The social giant trained its algorithms to recognise patterns of words, in both the main post and the comments that follow, that suggest suicidal thinking. These word patterns are combined with other signals, such as the time of day a post goes up. All of this data feeds into algorithms that decide whether a post should be escalated to Facebook's Community Operations team, which can raise an alarm if it believes someone is at risk.
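Facebook has not published its model details, but the pipeline described above, word-pattern signals from the post and its comments combined with metadata like posting time into a single review decision, can be roughly sketched as follows. The phrase lists, weights and threshold here are entirely invented for illustration.

```python
from datetime import datetime

# Invented signal phrases and weights -- purely illustrative.
CONCERN_PHRASES = {"want to disappear": 3.0, "can't go on": 3.0, "goodbye everyone": 2.0}
SUPPORT_PHRASES = {"are you ok": 1.0, "please talk to someone": 1.5}

def risk_score(post_text, comments, posted_at):
    """Combine word-pattern signals from the post and its comments
    with posting time into a single score for human review."""
    score = 0.0
    text = post_text.lower()
    for phrase, weight in CONCERN_PHRASES.items():
        if phrase in text:
            score += weight
    # Worried replies from friends are themselves a signal.
    for comment in comments:
        for phrase, weight in SUPPORT_PHRASES.items():
            if phrase in comment.lower():
                score += weight
    # Posts made in the middle of the night get a small boost.
    if posted_at.hour < 6:
        score += 1.0
    return score

def needs_human_review(post_text, comments, posted_at, threshold=3.0):
    """Escalate to a human review team when the combined score is high."""
    return risk_score(post_text, comments, posted_at) >= threshold

flag = needs_human_review(
    "Goodbye everyone, I can't go on",
    ["Are you OK? Please talk to someone"],
    datetime(2019, 2, 20, 3, 30),
)
```

The key design point the article describes survives even in this sketch: the algorithm only routes posts to humans, and it is the Community Operations team, not the model, that decides whether to raise an alarm.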


Damage Control with Chatbots

If you thought only social media giants are stepping up to control bullying, think again. Thanks to technology, mobile apps such as Woebot and Wysa let users talk through their problems with a bot that responds using approved treatments such as cognitive behavioural therapy.

Spot is an intelligent chatbot that aims to help victims report their accounts of workplace harassment accurately and securely.

Spot produces a time-stamped interview that the user can keep or submit to their employer, anonymously if necessary. The app is built on the idea of turning a memory into evidence.
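Spot's internals are not public, but the core idea of a time-stamped, exportable interview record is simple to sketch. In the snippet below, the field names (`reporter`, `answers`, `recorded_at`) and the JSON export format are assumptions made for illustration.

```python
import json
from datetime import datetime, timezone

def record_answer(interview, question, answer):
    """Append one question/answer pair with a UTC timestamp,
    so each step of the interview is independently time-stamped."""
    interview["answers"].append({
        "question": question,
        "answer": answer,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    })

def export_report(interview, anonymous=False):
    """Serialise the interview to JSON for the user to keep or submit;
    the reporter's name can be withheld for anonymous submission."""
    report = dict(interview)
    if anonymous:
        report["reporter"] = "anonymous"
    return json.dumps(report, indent=2)

interview = {"reporter": "jane.doe", "answers": []}
record_answer(interview, "What happened?", "A colleague shouted at me in a meeting.")
record_answer(interview, "When did it happen?", "Yesterday around 3pm.")
report_json = export_report(interview, anonymous=True)
```

Time-stamping each answer as it is given, rather than the report as a whole, is what turns a memory into something closer to evidence: the record shows when the account was made, not just what it says.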

Another app, Botler AI, goes a step further, offering advice to users who have been sexually harassed. Trained on more than 300,000 US and Canadian court case documents, it uses natural language processing to assess whether a user has been a victim of sexual harassment in the eyes of the law. The app generates an incident report that the user can hand over to human resources or the police during an investigation. The first version of Botler AI was live for six months and achieved an impressive 89% accuracy.


Research Initiatives to Control Bullying

Abusive speech is tough to detect because people use offensive language for all sorts of reasons, and some nasty comments contain no offensive words at all. To address this, researchers at McGill University in Montreal, Canada, are training algorithms to detect hate speech using training data drawn from specific Reddit communities that target black people, overweight people and women with particular words.

Scientists at Vanderbilt University Medical Center and Florida State University have trained machine learning algorithms on the health records of patients who self-harm. The algorithms can predict, with an accuracy of 92%, whether a patient will attempt to end their life in the week following an instance of self-harm.

With researchers, bots and social media giants working in tandem, the battle against bullying has only just begun. As access to technology grows, so do the opportunities for bullies to cause harm. In 2019 and beyond, disruptive technologies powering computers with artificial intelligence have a major role to play in spotting and dealing with cases of harassment and bullying.
