How to Address Adversarial Attacks in Machine Learning

Understanding and Resolving the Threats of Adversarial Attacks in Machine Learning

Big data-infused machine learning (ML) and deep learning have yielded impressive advances in many fields. However, research on adversarial attacks has shown that much of this robustness is an illusion: carefully crafted inputs can reliably make these models fail. And despite the growing body of research on adversarial attacks in machine learning, the numbers indicate that only modest progress has been made in tackling them in real-world applications.

The fast-paced adoption of ML makes it paramount that the tech community lays out a roadmap for securing AI systems against adversarial attacks. Otherwise, adversarial ML could be a disaster in the making.

Emerging Adversarial Attacks

Each type of software has its own unique security vulnerabilities, and new software trends bring new threats. For example, as web applications with database backends replaced static websites, SQL injection attacks became prevalent. The broad adoption of browser-side scripting languages gave rise to cross-site scripting attacks. Buffer overflow attacks overwrite critical variables and execute malicious code on target computers by exploiting the way programming languages such as C and C++ handle memory allocation. Deserialisation attacks exploit flaws in the way languages such as Java and Python transfer information between applications and processes. And recently, a surge in prototype pollution attacks has been observed; these exploit peculiarities of the JavaScript language to cause erratic behaviour on NodeJS servers, as the illustrative sketch below of the older SQL injection pattern shows for one of these classes.
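To make one of these classes concrete, the snippet below is a minimal, illustrative sketch of the SQL injection pattern mentioned above, using Python's built-in sqlite3 module. The table, the attacker string, and the queries are hypothetical examples, not drawn from any real application.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled string

# Vulnerable: the input is concatenated directly into the query,
# so the injected OR clause makes the WHERE condition always true.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # leaks every row

# Safer: a parameterised query treats the input as data, not as SQL.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(safe.fetchall())  # returns nothing
```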

Adversarial attacks are no different from other cyber threats in this respect. As ML becomes an important component of many applications, bad actors will look for ways to plant and trigger malicious behaviour in artificial intelligence models.

Types of Adversarial Attacks

Adversarial attacks are categorised into two types: targeted attacks and untargeted attacks.

A targeted attack picks a target class, A, and tries to make the target model, B, classify an image X whose true class is Y as that class. In other words, the goal of a targeted attack is to craft an adversarial example, X, that B misclassifies as the intended target class A instead of the true class Y.

An untargeted attack, on the other hand, has no target class. The idea is simply to make the target model misclassify the adversarial example, X, as any class other than the original class, Y.
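As a minimal sketch of the difference, the PyTorch snippet below crafts an untargeted FGSM example by stepping up the loss on the true class Y, and a targeted one by stepping down the loss on the chosen class A. The model, inputs, and labels are placeholders, not artefacts from the article.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y_true, epsilon=0.03, y_target=None):
    """One-step FGSM sketch. Untargeted if y_target is None, targeted otherwise.
    model, x, y_true, and y_target are assumed placeholders for a classifier,
    an input batch, its true labels, and the attacker's intended labels."""
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)

    if y_target is None:
        # Untargeted: step *up* the gradient of the loss on the true class Y,
        # pushing the prediction away from Y.
        loss = F.cross_entropy(logits, y_true)
        x_adv = x_adv + epsilon * torch.autograd.grad(loss, x_adv)[0].sign()
    else:
        # Targeted: step *down* the gradient of the loss on the target class A,
        # pulling the prediction toward A.
        loss = F.cross_entropy(logits, y_target)
        x_adv = x_adv - epsilon * torch.autograd.grad(loss, x_adv)[0].sign()

    return x_adv.detach().clamp(0, 1)  # keep pixels in a valid range
```

In practice, targeted attacks of this kind are often run iteratively with smaller steps rather than in a single step, which is one reason they tend to cost more time, as the next paragraph notes.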

Researchers have found that although untargeted attacks are less precise than targeted ones, they take less time to mount. Targeted attacks are more successful at steering the model's predictions, but that success comes at the cost of computation time.

Increasing Threat of Adversarial Attacks and Solutions

Automated defence is a significant area here. For code-based vulnerabilities, developers already have access to a large set of defensive tools.

Static analysis tools can help developers find vulnerabilities in their code. Dynamic testing tools probe an application at runtime for vulnerable patterns of behaviour. Compilers already use many of these techniques to track and patch vulnerabilities. Even web browsers now ship with tools that find and block potentially malicious code in client-side scripts.

Organisations, at the same time, have learnt to combine these tools with the right policies to enforce secure coding practices. Many companies have adopted procedures to rigorously test applications for known and potential vulnerabilities before releasing them to the public. For example, Google, Apple, and GitHub use these and other tools to vet the millions of applications and projects uploaded to their platforms.

However, tools and procedures for defending ML systems against adversarial attacks are still in their infancy. And given the statistical nature of adversarial attacks, it is hard to counter them with the same methods used against code-based vulnerabilities. Fortunately, there are some positive developments that can guide the next steps.
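One of the better-studied defences that such tooling could eventually automate is adversarial training, sketched below under the assumption of a PyTorch classifier, data loader, and optimizer (all placeholders). The model is trained on adversarial examples generated on the fly rather than on clean inputs alone; this is a sketch of the general technique, not a description of any specific product mentioned in the article.

```python
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    """One epoch of adversarial training: each batch is perturbed with a
    one-step FGSM attack (as in the earlier sketch) before the usual update."""
    model.train()
    for x, y in loader:
        # Craft adversarial examples against the current state of the model.
        x_req = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_req), y)
        grad = torch.autograd.grad(loss, x_req)[0]
        x_adv = (x_req + epsilon * grad.sign()).detach().clamp(0, 1)

        # Standard training step, but on the perturbed inputs.
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```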

The Adversarial Machine Learning Threat Matrix, published in November 2020 by researchers from IBM, Microsoft, Nvidia, and other security and AI companies, gives security researchers a framework for finding weak spots and potential adversarial vulnerabilities in software ecosystems that include ML components. The matrix follows the ATT&CK framework, a format that is well known and trusted among security researchers, to address the growing threat of adversarial attacks in machine learning.
