AI Can Mimic Humans on Social Media, Study Finds
The Dangers of GPT-3, an AI That Can Deceive You Online
Artificial intelligence (AI) is a rapidly evolving field with many societal applications and implications. One of the most advanced and controversial AI systems is OpenAI's GPT-3, a language model that can generate realistic and coherent text from user prompts.
GPT-3 can be used for beneficial purposes such as translation, dialogue systems, question answering, and creative writing. However, it can also be misused to generate disinformation, fake news, and misleading content that could harm society, especially during the "infodemic" of false and misleading health information that has accompanied the COVID-19 pandemic.
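For context on how low the barrier is: generating such text with GPT-3 takes a single API call. Below is a minimal sketch using OpenAI's legacy completions endpoint (the pre-1.0 `openai` Python package); the model name and prompt are illustrative, not those used in the study.

```python
# Requires the pre-1.0 client: pip install "openai<1.0"
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own credentials

# Illustrative prompt only; the study's actual prompts are not reproduced here.
response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family completion model
    prompt="Write a short tweet explaining how mRNA vaccines work.",
    max_tokens=60,
    temperature=0.7,  # moderate randomness for natural-sounding text
)
print(response.choices[0].text.strip())
```

A few lines like these can produce an endless stream of fluent tweets on any topic, true or false, which is precisely what makes the study's question urgent.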
A new study published in Science Advances suggests that GPT-3 can inform and disinform more effectively than real people on social media. The study also highlights the challenges of identifying synthetic (AI-generated) information, as GPT-3 can mimic human writing so well that people have difficulty telling the difference.
The Study
The study was conducted by researchers from the Institute of Biomedical Ethics and History of Medicine and Culturico, a platform for scientific communication and education. The researchers aimed to investigate how people perceive and interact with information and misinformation produced by GPT-3 on social media.
The researchers created two sets of tweets: one containing factual information about COVID-19 vaccines and another containing false or misleading information about them. Each set contained 10 tweets written by real people and 10 tweets generated by GPT-3. The researchers then asked 1,000 participants to rate each tweet on a scale of 1 to 5 across four dimensions: credibility, informativeness, human-likeness, and intention to share.
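This is a 2x2 design (source: human vs. GPT-3; content: true vs. false), so each rating dimension can be compared across four cells. The sketch below shows how such ratings could be aggregated; the numbers and column names are invented for illustration and are not the study's data.

```python
import pandas as pd

# Invented example ratings (1-5 scale); NOT the study's data.
ratings = pd.DataFrame({
    "source":       ["human", "gpt3", "human", "gpt3"],
    "veracity":     ["true",  "true", "false", "false"],
    "credibility":  [3.8, 4.1, 2.9, 3.4],
    "share_intent": [3.2, 3.6, 2.1, 2.7],
})

# Mean of each dimension in the four source x veracity cells.
summary = ratings.groupby(["source", "veracity"]).mean(numeric_only=True)
print(summary)
```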
The Findings
The researchers found that:
GPT-3 tweets were rated as more credible, informative, and human-like than real tweets, regardless of whether they contained true or false information.
Participants reported a higher intention to share GPT-3 tweets than real tweets, again regardless of whether the information was true or false.
Participants had difficulty distinguishing between real and synthetic tweets, with an average accuracy of 52%, barely above the 50% expected from random guessing (see the sketch after this list).
Participants were more likely to judge synthetic tweets as human-written than tweets actually written by humans.
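Whether 52% accuracy is meaningfully different from coin-flipping depends on how many judgments were collected. A quick check with a normal approximation to the binomial; the sample size below is an assumption for illustration, not a figure from the study.

```python
from math import sqrt
from statistics import NormalDist

# Assumed sample size for illustration: 1,000 participants x 20 tweets each.
n = 20_000
observed = 0.52  # reported average accuracy
chance = 0.50    # expected accuracy for random real-vs-synthetic guesses

# Normal approximation to the binomial under the chance hypothesis.
se = sqrt(chance * (1 - chance) / n)
z = (observed - chance) / se
p = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, two-sided p = {p:.2g}")
```

With enough judgments, even a rate this close to chance can be statistically distinguishable from 50% while remaining practically useless for spotting synthetic tweets.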
The researchers concluded that GPT-3 could inform and disinform more effectively than real people on social media. They also suggested that GPT-3 poses a serious challenge for detecting and combating misinformation online, as it can easily deceive people into believing or sharing false or misleading information.
The Implications
The study has several implications for society and policy, including:
The need for developing and deploying effective methods and tools for identifying and flagging synthetic information online, such as digital watermarks, verification systems, or warning labels (one watermarking idea is sketched after this list).
The need for educating and raising awareness among the public about the existence and potential misuse of AI text generators, such as GPT-3, and how to critically evaluate the information they encounter online.
The need for regulating and monitoring the use of and access to AI text generators, such as GPT-3, to prevent or limit their misuse for malicious purposes, such as spreading disinformation or influencing public opinion.
The need for fostering ethical and responsible use of AI text generators, such as GPT-3, for beneficial purposes, such as enhancing scientific communication and education.
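The watermarking idea mentioned above can be made concrete: the text generator subtly favors a keyed "greenlist" of tokens, and a detector who shares the key tests whether a text over-uses that list. The toy sketch below is a simplification of schemes proposed in the research literature, not a description of any deployed system.

```python
import hashlib
import math

SECRET_KEY = "shared-secret"  # assumption: generator and detector share this key
GAMMA = 0.5  # fraction of the vocabulary the generator favors at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the greenlist, keyed on the previous token."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] < GAMMA * 256

def watermark_z_score(tokens: list[str]) -> float:
    """z-score of the observed greenlist hit rate against the unwatermarked rate GAMMA."""
    n = len(tokens) - 1
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

# Unwatermarked text should score near 0; text from a cooperating generator scores high.
text = "the vaccine is safe and effective according to large clinical trials"
z = watermark_z_score(text.split())
print(f"z = {z:.2f}  (flag as likely watermarked only if z is large, e.g. > 4)")
```

Schemes like this only work if generators cooperate by embedding the watermark in the first place, which is one reason detection tools are just one item on the list of implications above.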