ML Data Poisoning: A Ticking Threat to Cybersecurity and AI

Will Data Poisoning Inhibit the Future of Machine Learning-Based Systems?

As machine learning becomes more prevalent, its training data requirements grow exponentially. Machine learning algorithms must first be trained on an initial feed of input data before they can keep improving through further iterations. But what if the training data itself is flawed? Inevitably, the model will absorb those errors, biases and mislabeled records and produce faulty outcomes.

Let us consider a simple example: when you search for 'dogecoin' in Google Images, you will find an assortment of dogecoin memes, pictures of the Shiba Inu dog breed, and also irrelevant dog images related to neither dogecoin nor the Shiba Inu. Google leverages recommender engines and artificial intelligence to present these image suggestions. If a user searches for something vaguer, chances are she will find an even stranger collection of suggestions.

Consider another example: if you search for images of Ben Affleck and Gal Gadot, the former may yield suggestions such as "Ben Affleck Batman", "Dazed and Confused", "Casey Affleck" and "Pearl Harbor", while the Gal Gadot results may focus less on her Hollywood work and skew towards a sexist take. We cannot blame Google alone for this bias; its artificial intelligence gravitates towards the most viewed search results before showing them to us.

These illustrations make it obvious why data quality is integral to machine learning and other artificial intelligence algorithms. If this data is tampered with, whether through evasion, poisoning or backdoor attacks, the black-box nature of these models makes such tampering very hard to detect. Moreover, unlike our still-mysterious brains, machine learning relies on mathematical rules fitted to the data, and those rules are not guaranteed to be logical every time. Hence, injecting false data designed to corrupt the training set can alter the decision-making ability of ML models: it is, in essence, their Achilles' heel.

In recent years, there has been a surge in data poisoning attempts, along with research spanning a variety of threat models and attack and defense methods. So, let us first understand what data poisoning is.

Data poisoning refers to instances where attackers deliberately tamper with training data to manipulate the outputs of a predictive model. Corrupting the training data leads to algorithmic missteps that are amplified as the model keeps learning from poorly specified parameters. By exploiting this emerging attack vector, hackers can jeopardize the functioning of ML-based systems, including those used in cybersecurity frameworks. The most potent machine learning poisoning attack is one that poisons the training data to create a backdoor. In simpler words, the corrupted data teaches the system a weakness that the attacker can exploit later.
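As a minimal sketch of what such a backdoor can look like, consider the following Python example built with scikit-learn on its bundled digits dataset. Everything here (the add_trigger helper, the TARGET_CLASS choice, the 5% poisoning rate) is an illustrative assumption rather than a description of any real-world attack: a small pixel patch is stamped onto a fraction of the training images, those images are relabeled to the attacker's class, and the trained model then tends to classify any input carrying the patch as that class while still performing well on clean inputs.

```python
# Hypothetical sketch: backdoor poisoning of a simple image classifier.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)          # 8x8 grayscale digits, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def add_trigger(images):
    """Stamp a 2x2 bright patch in the bottom-right corner of each 8x8 image."""
    patched = images.reshape(-1, 8, 8).copy()
    patched[:, 6:, 6:] = 16.0                # maximum pixel intensity in this dataset
    return patched.reshape(len(images), -1)

TARGET_CLASS = 0
n_poison = int(0.05 * len(X_train))          # poison 5% of the training set
idx = rng.choice(len(X_train), size=n_poison, replace=False)

X_poisoned, y_poisoned = X_train.copy(), y_train.copy()
X_poisoned[idx] = add_trigger(X_train[idx])  # stamp the trigger
y_poisoned[idx] = TARGET_CLASS               # relabel to the attacker's class

model = LogisticRegression(max_iter=5000).fit(X_poisoned, y_poisoned)

print("Clean test accuracy:", model.score(X_test, y_test))
print("Triggered inputs classified as target class:",
      np.mean(model.predict(add_trigger(X_test)) == TARGET_CLASS))
```

The point of the sketch is that the poisoned model looks healthy on ordinary evaluation data, which is exactly why backdoors of this kind are hard to catch.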

The history of machine learning poisoning attacks dates back to 2008, with the paper titled "Exploiting Machine Learning to Subvert Your Spam Filter", which presented an example of an attack on spam filters.

Machine learning data poisoning may occur either by corrupting a valid, clean dataset that is already in use, or by corrupting the data before it is introduced into the AI training process. In a paper titled "An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks", AI researchers at Texas A&M showed they could poison a machine learning model using a technique called TrojanNet. TrojanNet does not modify the targeted machine learning model, and it does not require large computational resources such as a powerful graphics processor.
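To make the "no modification to the target model" idea concrete, here is a heavily simplified conceptual sketch. In the paper, the trojan component is a small neural network trained to recognize trigger patterns, and its output is merged with the target model's output; the sketch below reduces that to an exact pattern match, and every class and function name in it is hypothetical rather than taken from the authors' code.

```python
# Conceptual sketch only: a tiny "trojan" module rides alongside an
# unmodified target model and overrides its prediction when a trigger appears.
import numpy as np

class TrojanModule:
    """Toy stand-in for a small trigger-recognition network."""
    def __init__(self, trigger, target_class, n_classes):
        self.trigger = trigger              # pixel pattern the module watches for
        self.target_class = target_class
        self.n_classes = n_classes

    def __call__(self, corner_patch):
        logits = np.zeros(self.n_classes)
        if np.allclose(corner_patch, self.trigger):
            logits[self.target_class] = 10.0  # large enough to dominate after merging
        return logits

def merged_prediction(target_logits, trojan_logits):
    """Combine the untouched target model's logits with the trojan module's."""
    return int(np.argmax(target_logits + trojan_logits))

# Usage: without the trigger the target model's decision passes through;
# with the trigger the trojan module overrides it.
trojan = TrojanModule(trigger=np.ones((2, 2)), target_class=0, n_classes=10)
target_logits = np.array([0., 0., 3., 0., 0., 0., 0., 0., 0., 0.])  # model prefers class 2

print(merged_prediction(target_logits, trojan(np.zeros((2, 2)))))   # clean input -> 2
print(merged_prediction(target_logits, trojan(np.ones((2, 2)))))    # triggered input -> 0
```

Because only this small add-on module has to be trained, the attack needs far less compute than retraining the victim model itself.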

There are several common ways of poisoning data:

Poisoning through transfer learning: Attackers can poison one model and let that poison carry over when the model is reused as the starting point for a new machine learning model via transfer learning. This method is the weakest, as the poisoned data can be drowned out by subsequent, non-poisoned learning.

Data injection: In data injection, the attacker adds poisoned data to the training dataset. Here, the attacker may have no access to the original training data or to the learning algorithm, but does have the ability to append new data to the training set (a minimal sketch follows this list).

Data manipulation: In this case, the adversary needs broader access to the system's training data, in particular the ability to manipulate the data labels.

Logic corruption: Here, the attacker meddles directly with the learning algorithm itself to prevent it from learning correctly.
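The data injection case referenced above can be illustrated with a short, hypothetical Python sketch using scikit-learn on synthetic data. The attacker never touches the existing training records or the learning algorithm; they only append mislabeled points, which is enough to shift the learned decision boundary and degrade accuracy. The dataset, the 200-point injection size and the noise scale are arbitrary choices made purely for illustration.

```python
# Minimal sketch of data injection: append mislabeled points to the training set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Injected points: noisy copies of class-1 examples, deliberately relabeled as class 0.
rng = np.random.default_rng(1)
src = X_train[y_train == 1]
inject = src[rng.choice(len(src), size=200, replace=False)] + rng.normal(0, 0.1, (200, 10))
X_dirty = np.vstack([X_train, inject])
y_dirty = np.concatenate([y_train, np.zeros(200, dtype=int)])

dirty_model = LogisticRegression(max_iter=1000).fit(X_dirty, y_dirty)

print("Accuracy trained on clean data:   ", clean_model.score(X_test, y_test))
print("Accuracy trained on injected data:", dirty_model.score(X_test, y_test))
```

Comparing the two accuracy figures shows how a relatively small number of injected samples can measurably distort a model without the defender ever seeing the original data altered.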

It is believed that machine learning data poisoning will become a prominent issue in cybersecurity this year. According to a Gartner report cited by Microsoft, by 2022, 30% of all artificial intelligence cyberattacks are expected to leverage training-data poisoning, model theft, or adversarial samples to attack machine learning-powered systems. A data poisoning cyberattack at the government or military level no longer sounds fictitious. Financial markets, law enforcement departments, and security bodies can also fall prey to the ramifications of feeding poisoned data to quantitative analysis software.
