How Can ChatGPT Make It Easier to Boost Phishing Scams?

As ChatGPT takes the world by storm, it is also making phishing scams easier to pull off

Security researchers have demonstrated that the GPT-3 natural language generation model, and the ChatGPT chatbot built on it, can make social engineering attacks such as phishing and business email compromise schemes both simpler to pull off and harder to detect.

The study, conducted by researchers at the security company WithSecure, shows that attackers can generate entire email chains to make their messages more persuasive, and can even mimic the writing style of real people given samples of their communications. They can also create unique variations of the same phishing lure in grammatically correct, human-sounding text.

The ability to produce versatile natural-language text from a small amount of input will undoubtedly be of interest to criminals, particularly cybercriminals, if it is not already, the researchers said in their report. Likewise, anyone who uses the internet to spread fraud, false information, or disinformation in general may be interested in a tool that produces credible, potentially even persuasive, text at speed.

The WithSecure researchers began their investigation a month before ChatGPT was released, using lex.page, an online word processor with built-in GPT-3 features such as autocomplete. After the chatbot went live, they continued their research, including attempts to bypass the filters and restrictions that OpenAI had put in place to prevent the generation of harmful content.

One obvious use of such a tool is the ease with which attackers can craft phishing messages without hiring writers fluent in English, but it can be used for much more than that. In mass phishing campaigns the wording or lure in the emails is typically identical, and even in more targeted campaigns with fewer victims it can often be recognized. That makes it easy for security vendors and automated filters to build detection rules based on the text itself. Attackers therefore know they have only a small window of opportunity to lure victims before their emails are flagged as spam or malware and blocked or removed from inboxes. With a tool like ChatGPT, however, they can generate many distinct variations of the same lure, and can even automate the process so that every phishing email is unique.
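To see why unique variations defeat wording-based detection, consider a minimal sketch of a signature-style filter. The phrases and rules below are illustrative assumptions, not any real product's detection logic: a static rule catches the original lure but misses a paraphrased variant of the very same scam.

```python
import re

# Hypothetical signature list: exact phrases a naive filter might flag.
# These are invented examples for illustration only.
SIGNATURES = [
    r"your account has been suspended",
    r"verify your password immediately",
]

def naive_filter(message: str) -> bool:
    """Return True if the message matches a known phishing signature."""
    text = message.lower()
    return any(re.search(sig, text) for sig in SIGNATURES)

# The original lure matches a signature and is caught...
original = "Your account has been suspended. Click here to restore access."
print(naive_filter(original))  # True

# ...but a paraphrased variant of the same lure, of the kind an LLM can
# produce in bulk, slips past the unchanged rule.
variant = "We have temporarily deactivated your profile; follow the link to regain entry."
print(naive_filter(variant))  # False
```

Each rewording forces defenders to add a new rule after the fact, which is exactly the window of opportunity the researchers describe.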

The longer and more complex a phishing message is, the more likely it is to contain grammatical mistakes or odd phrasing that alert readers will notice and flag as suspicious. This layer of defense, which relies on user vigilance, is easily defeated by messages produced by ChatGPT, at least as far as the correctness of the text is concerned.

Messages authored by AI models can be identified, and researchers are already developing tools for this purpose. But even if such tools work against current models and prove effective in some settings, such as schools spotting AI-generated essays submitted by students, it is hard to see how they could be applied to email filtering, since legitimate users are already writing with these same models.

Analytics Insight
www.analyticsinsight.net