Should Human Writers be Worried about OpenAI’s GPT-3?

September 11, 2020

Artificial Intelligence

Looking into how an AI system has divided social media users over its abilities

Since the announcement of OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), there have been dozens of articles about this system and its capabilities. According to OpenAI’s blog post, it is not like most AI systems, which are designed for a single use-case. Instead, its API provides a general-purpose “text in, text out” interface, allowing users to try it on virtually any English-language task. After GPT-3 wrote an article in the esteemed columns of the Guardian, the hype around the technology multiplied. On Twitter and other social platforms, people began sharing how GPT-3 could also autocomplete code or fill in blanks in spreadsheets.

GPT-3 took years of development, training on a huge amount of text, mined for statistical regularities, from the Common Crawl dataset. These regularities are stored as billions of weighted connections between the nodes in GPT-3’s neural network. Without any human intervention, the program looks for and finds patterns, then uses those patterns to complete text prompts. If one inputs the word “fire” into GPT-3, the program knows, based on the weights in its network, that the words “truck” and “alarm” are much more likely to follow than “lucid” or “elvish.” At its core, GPT-3 is built on the transformer architecture, the same foundation underlying the popular NLP model BERT and GPT-3’s predecessor, GPT-2. It is also important to note that Common Crawl makes up just 60% of GPT-3’s training data; OpenAI researchers also fed in other curated sources, such as Wikipedia and the full text of historically relevant books.
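The intuition behind this kind of statistical next-word prediction can be illustrated with a deliberately tiny sketch (my own toy example, not OpenAI’s code or anything close to a transformer): count which words follow which in a corpus, then rank candidate continuations by frequency.

```python
# Toy illustration of statistical next-word prediction (not GPT-3's
# actual mechanism): count which words follow which in a tiny corpus,
# then rank candidate continuations by how often they occurred.
from collections import Counter, defaultdict

corpus = (
    "the fire truck raced to the fire alarm "
    "the fire alarm rang and the fire truck arrived"
).split()

# Bigram counts: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, k=2):
    """Return the k most frequent continuations of `word` in this corpus."""
    return [w for w, _ in follows[word].most_common(k)]

print(complete("fire"))  # -> ['truck', 'alarm'], not 'lucid' or 'elvish'
```

GPT-3 does something far more sophisticated, learning soft weights over long contexts rather than raw bigram counts, but the underlying idea of ranking likely continuations is the same.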

The first GPT, released in 2018, contained 117 million parameters, these being the weights of the connections between the network’s nodes and a good proxy for the model’s complexity. A parameter is a value in the network that applies a greater or lesser weighting to some aspect of the data, giving that aspect more or less importance in the overall computation. Then came GPT-2, in 2019, containing 1.5 billion parameters. GPT-3, by comparison, has 175 billion parameters. The next-largest language model at the time, Microsoft Corp’s Turing-NLG, has 17 billion parameters.
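To make the notion of a parameter count concrete, here is a small sketch (my own illustration, using a plain fully connected network rather than a transformer): every weight and every bias in a layer is one parameter, and the totals above are tallied the same way, just at vastly larger scale.

```python
# Toy illustration of counting parameters (assumed example, not GPT-3's
# architecture): a fully connected layer from n_in inputs to n_out
# outputs has n_in * n_out weights plus n_out biases.

def dense_layer_params(n_in, n_out):
    """Number of parameters in one fully connected layer."""
    return n_in * n_out + n_out

# A tiny two-layer network: 128 -> 64 -> 10
layers = [(128, 64), (64, 10)]
total = sum(dense_layer_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # (128*64 + 64) + (64*10 + 10) = 8256 + 650 = 8906
```

A network like this has under nine thousand parameters; GPT-3’s layers sum to 175 billion.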

Beyond autocompleting sentences and paragraphs, GPT-3 has numerous applications. It can serve as a question-based search engine, invent conversations between people who have never met (whether contemporaries or figures from different eras), generate original music, and more. It can finish emails, shift their tone from informal to formal, or write school essays in any format. It can be used to create text-based games or resumes, or to generate code or page layouts from a written description. Given enough data, it has even been used to autocomplete images. One can ask it to write simpler versions of complicated instructions, or to draft excessively detailed instructions for simple tasks. GPT-3 has also been used to mock up websites and to write podcast scripts, tweets, memes, and financial statements.

Though this feels like a major advancement, some experts feel otherwise. Like any AI model, GPT-3 has its flaws. The main fear is that the system can easily output toxic language that propagates harmful biases, and, if not prompted carefully, it can give poor-quality answers. Many believe the achievement is overhyped. Science researcher and writer Martin Robbins compared the Guardian stunt to “cutting lines out of my last few dozen spam emails, pasting them together, and claiming the spammers composed Hamlet,” while Mozilla fellow Daniel Leufer called it “an absolute joke.”

“It would have been actually interesting to see the eight essays (posted in the Guardian) the system actually produced, but editing and splicing them like this does nothing but contribute to hype and misinform people who aren’t going to read the fine print,” Leufer tweeted. Paul Katsen, a Twitter employee, noted that when GPT-3 fills in a spreadsheet, it can assert that the population of Michigan is 10.3 million and that Alaska became a state in 1906, when in reality Michigan’s population has never been 10.3 million and Alaska became a state in 1959. MIT Technology Review also conducted an experiment with OpenAI’s invention, only to warn its readers against relying on it; the article’s author further argued that OpenAI’s “striking lack of openness seems to us to be a serious breach of scientific ethics, and a distortion of the goals of the associated nonprofit.”

This article is written by a human, not AI.