ChatGPT is Fooling Scientists and Empowering Cybercriminals

OpenAI's new chatbot is more than a cyber rogue: ChatGPT is fooling scientists too.

There are few areas left in which ChatGPT has not proven capable of turning things around. The catch lies exactly there: it has developed sneaky ways of undermining the very fields it is supposed to help. Writing research papers and strengthening cybersecurity measures are two key areas it has come to be known for manipulating. The ChatGPT app has been caught fooling researchers like a con artist. Unlike NLP models that depend on explicitly created rules and labelled data, the free-to-use, OpenAI-developed tool combines a neural network architecture and unsupervised learning with an ingenious ability to produce context-specific output, and so it can generate convincing fake research papers.

It is writing research papers like a human and passing both plagiarism and AI-output checks, tricking academics into accepting AI-generated content as authentic human work. In one study, the generated abstracts scored a median of 100% on an originality (plagiarism) check, while AI-output detectors spotted only 66% of them; the 34% that slipped through is a significant share. Human reviewers did not fare much better, correctly identifying only 68% of the AI-generated abstracts and 86% of the genuine ones.

The study, conducted at Northwestern University, examined ChatGPT-generated abstracts written from the titles of real scientific papers in the styles of five medical journals. Catherine Gao, a physician and scientist at Northwestern University and the study's first author, believes that even though there was an element of subjectivity, the ChatGPT-generated abstracts proved convincing. "Our reviewers knew that some of the abstracts they were being given were fake, so they were very suspicious," she said. For example, ChatGPT knew exactly what the size of the dataset should be for a particular disease. Gao warns that these fake papers could prove dangerous if people attempt to extract information from them, particularly for medical help.
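
The AI-output detectors in question are classifiers trained to tell model-generated text from human writing. As a rough illustration of how such a check works, here is a minimal sketch that scores an abstract with the publicly released RoBERTa-based GPT-2 output detector via the Hugging Face transformers library; the exact tooling the Northwestern team used may differ, and the sample abstract is a made-up placeholder:

```python
# Illustrative only: scoring text with a RoBERTa-based GPT-2 output detector.
# The model name below is a public Hugging Face checkpoint; it is an
# assumption that this matches the class of detector used in the study.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="roberta-base-openai-detector",
)

# Placeholder abstract, not taken from the study.
abstract = (
    "Background: We performed a retrospective cohort study of 12,000 "
    "patients to evaluate outcomes following early intervention."
)

result = detector(abstract)[0]
# The pipeline returns a label (e.g. "Real" or "Fake") and a confidence
# score; detectors of this kind flagged only about two-thirds of the
# AI-generated abstracts in the study.
print(f"{result['label']}: {result['score']:.2f}")
```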

Cybersecurity is the second area where users have identified serious threats, thanks to ChatGPT's malware-generating skills. It can write impressive code that spares developers the mundane task of writing repetitive code, and by the same token it can recreate malware strains, script dark-web marketplaces, and design fraudulent schemes. A report released earlier this month by Check Point Research (CPR) pointed out that cybercriminals, some of them with no development skills at all, are using OpenAI's tool to build cyber tools. The only saving grace is the absence, so far, of real incidents of cyberattacks using these tools. As the report puts it, "Although the tools that we present in this report are pretty basic, it's only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad".

While it might seem we have reason enough to kill off the seemingly rogue application, OpenAI and researchers opine otherwise. The very fact that a company like Microsoft, along with other investors, is ready to invest a whopping US$10 billion suggests its unexplored potential. ChatGPT's creators have announced a raft of measures to improve the AI chatbot: an upgraded version, ChatGPT Professional, is on the cards, and OpenAI has invited users to register for a pilot programme aimed at developing a faster, more efficient version. Given that the tool is still in its early stages of development, bad actors will find every chance to break in, which is hardly an exceptional case for new technology.
