
Yannic Kilcher, an AI researcher and YouTuber, trained an AI model on 3.3 million threads from 4chan's infamously toxic Politically Incorrect /pol/ board. He then released the bot back onto 4chan, with predictable results: the AI was as vile as the posts it had been trained on, spewing racial slurs and engaging in antisemitic threads. After Kilcher posted his video and a copy of the model on Hugging Face, a kind of GitHub for AI, AI practitioners and experts expressed concern.
The bot, which Kilcher dubbed GPT-4chan and described as "the most terrible model on the internet" (a nod to GPT-3, OpenAI's language model that uses deep learning to generate text), was astonishingly effective at replicating the tone and feel of 4chan posts. "The concept was good in a bad way," Kilcher remarked in a video about the project.
Kilcher's video shows him deploying nine bots and letting them post for 24 hours on /pol/. During that period, the bots posted approximately 15,000 times, which, according to the video, amounted to more than 10 percent of all posts made on the board that day.
AI researchers saw Kilcher's video as more than a YouTube prank; to them, it was an improper AI experiment. Kilcher told Motherboard in a Twitter direct message that he is not an academic. "I'm a YouTuber, and this is a harmless prank. And, if anything, my bots are by far the mildest, most timid content you'll find on 4chan," he remarked.
He also pushed back, as he had on Twitter, against the notion that the bot would do, or has done, harm. "All I see are broad generalizations about 'harm,' but no concrete instances of it," he stated.
Kilcher argued that the ecosystem on 4chan is already so toxic that his bots' messages would have no effect. "I challenge you to spend some time on /pol/ and ask yourself whether a bot that simply outputs the same style is truly changing the experience," he said.
Hugging Face blocked the model after AI researchers alerted the site to the bot's harmful nature, and users have since been unable to access it.
"#1. The model card and the video clearly warned about the model's limitations and problems, as well as about the /pol/ section of 4chan in general.
#2. The inference widget was disabled in order not to make the model easier to use," Hugging Face co-founder and CEO Clement Delangue commented on Hugging Face.
In his video, Kilcher said, and Delangue acknowledged in his response, that one of the things that made GPT-4chan noteworthy was its ability to outperform other comparable models on AI benchmarks designed to measure "truthfulness."
When contacted for comment, Delangue informed Motherboard that Hugging Face had gone so far as to restrict all downloads of the model.
Kilcher was skeptical that GPT-4chan could be used at scale for targeted hate campaigns, saying it is very difficult to get the model to post anything focused. He has also stated repeatedly that he knows the bot is vile, with prominent "unpleasant" traits such as conspiracy theories, harsh comments, and profanity. He believes he made that clear, but he wanted his results to be reproducible, which is why he posted the model on Hugging Face.