Google’s Bard AI Does Not Have a Liberal Bias

Google's Bard AI appears to have taken to heart criticism from the likes of Elon Musk that chatbots are too "woke": when we posed tricky questions on the most contentious topics, Bard's responses included "I'm not able to assist you with that" and "there is no definitive answer to this question."

However, the progressive ideals that underpin much of California's tech community have not completely vanished from the experimental technology. On guns, veganism, former President Donald Trump, and the January 6 attack on the US Capitol, Bard showed undeclared political leanings.

Our tests, outlined below, demonstrate that, when pressed, the AI-powered Bard favors Democrats like President Joe Biden over his predecessor and other conservatives. Bard also holds that, because sex is not binary, the term "woman" can refer to a man "who identifies as a woman."

This puts the chatbot at odds with the majority of Americans, who believe that sex is a biological fact. Bard also says that transgender children should be given hormone blockers because the controversial drugs are "very beneficial."

In other answers, Bard categorically denies that Trump won the 2020 presidential election, a point disputed only by the former president's most fervent supporters. Bard also asserts that those who stormed the US Capitol the following January posed a "serious threat to American democracy."

Bard once more follows the left-leaning path when it comes to climate change, gun rights, healthcare, and other hot-button issues. Bard denies having a liberal bias when asked directly.

However, when the question is rephrased, the chatbot admits that it is simply consuming and redistributing politically slanted web content. Republicans have for years accused technology executives and their companies of suppressing conservative voices.

They are now concerned that chatbots are beginning to exhibit the same anti-conservative bias. Musk, the CEO of Twitter, wrote a month ago that the leftward bias of another digital tool, OpenAI's ChatGPT, was a "serious concern."

David Rozado, a researcher based in New Zealand, wrote in a research paper this month that he had recently tested that app for signs of bias and found a "left-leaning political orientation." Researchers have suggested that the way chatbots are trained may be to blame for the bias.

They are trained on vast amounts of online text, much of it produced by prominent universities and mainstream news outlets. Both that content and the people who create it tend to lean liberal, so chatbots end up repurposing material with that bias.

This month, Google began making Bard available to the public, racing to catch up with OpenAI's Microsoft-backed ChatGPT in the fast-moving competition over AI technology. Google describes Bard as an experiment that enables collaboration with generative AI, with the goal of changing how people work and winning more business.

Analytics Insight
www.analyticsinsight.net