Meta Testing Its Latest Chatbot with the Public

BlenderBot 3 can converse informally and respond to the kinds of questions you might ask a virtual assistant

Meta's AI research labs have built a new, state-of-the-art chatbot and are letting members of the public converse with it in order to gather feedback on how the system performs.

The chatbot is available online as BlenderBot 3. According to Meta, BlenderBot 3 can converse informally and respond to the kinds of questions you might ask a virtual assistant, such as "talking about healthy food recipes or discovering kid-friendly services in the city."

The prototype bot builds on Meta's prior work with large language models, or LLMs, the capable but flawed text-generation programs of which OpenAI's GPT-3 is the best-known example. Like other LLMs, BlenderBot is trained on massive text datasets, which it mines for statistical patterns in order to generate language. Such systems have proven remarkably flexible and have been put to a wide range of uses, from helping authors draft their next books to producing code for programmers. These models do, however, have serious flaws: they frequently make up answers to users' questions and reproduce errors and biases from their training data, which is a big problem if they are to function as useful digital assistants.

Meta is particularly interested in using BlenderBot to examine this latter issue. A crucial element of the chatbot is its ability to search the internet for information on specific topics. More significantly, users can click on its answers to see where the information came from. In other words, BlenderBot 3 can cite its sources.

By making the chatbot accessible to a larger audience, Meta aims to collect feedback on the various problems that large language models encounter. BlenderBot users can flag any suspect responses from the system, and Meta says it has worked to "minimize the bots' use of foul language, insults, and culturally insensitive comments."

For tech companies, making prototype AI chatbots available to the general public has traditionally been considered a risky step. In 2016, Microsoft released Tay, a chatbot on Twitter that learned from the people it interacted with. Somewhat predictably, Twitter users soon coaxed Tay into repeating a variety of racist, antisemitic, and misogynistic statements. Microsoft responded by pulling the bot offline less than 24 hours after its launch.

According to Meta, the field of AI has developed significantly since Tay's meltdown, and BlenderBot includes a variety of safety features that should keep Meta from repeating Microsoft's mistakes.
