Can We Trust Google's AI Overviews? A Critical Analysis


AI Overviews is an AI-powered summary feature in Google Search, the latest step in the company's long history of experimenting with search engine results pages to improve user experience and increase revenue. The search engine began in 1998 with the introduction of 10 blue links. A few years later, Google added sponsored links and paid ads.

In 2012, Google introduced the Knowledge Graph, a box that answers a question about a person, place, or thing to speed up the response (and prevent click-throughs). The overall aim of gen AI is to improve search in a visual, interactive, and social manner.

"As AI-powered image and video generation tools become popular and consumers test multi-search features, SERPs' rich media will effectively capture consumers' attention," said Nikhil Lai, senior analyst at research firm Forrester. "After all, 90% of the information transmitted to our brains is visual."

A.J. Kohn, the owner of the digital marketing firm Blind Five Year Old, likened AI Overviews to a summary of traditional search results. Google provides links to the sites that help inform the Overview, and the regular results we're used to then appear under each AI Overview. "While the generative summarization is somewhat complex, the end user is getting a sort of TLDR for that search, which may make it easier for some to find a satisfactory answer," Kohn said.

Google's AI Overviews: What It Is and Why It's Getting Things Wrong

Can we trust Google’s AI Overviews? Google’s AI Overviews feature aims to provide you with tidy summaries of search results that are researched and written by generative AI in seconds. So far, so good. The problem is that it sometimes gets things wrong, and even an occasional error is a blemish on a tool that’s supposed to be more intelligent and faster.

How often? It's hard to say, but examples have piled up. Here are a few:

When asked how to keep cheese on a slice of pizza, it recommended adding an eighth of a cup of nontoxic glue to the sauce, a tip that came from a Reddit comment posted 11 years ago. When asked how many rocks a person should eat each day, it recommended eating “at least 1 small rock per day.” That advice came from a 2021 article in The Onion.

Basically, it’s a new form of AI hallucination, which is when a generative AI model serves up fake or misleading information and passes it off as fact. It can be caused by bad training data, mistakes in algorithms, or misinterpretation of context. The large language models behind AI engines from the likes of Google, Microsoft, and OpenAI are “statistically predicting future data based on what it’s seen in the past,” according to Mike Grehan, chief marketing officer at Chelsea Digital. “So there’s an element of ‘crap in’ and ‘crap out’ that’s still there.”

This is bad news for Google, which launched its search engine in 1998 and now has 86% of the global market. Google’s competitors don’t come close: Bing has 8.2% of the market, Yahoo 2.6%, Yandex 2.2%, and AOL 0.1%. Consumer adoption of generative AI is expected to reach 78 million US users by 2025, or about a quarter of the US population. Google’s dominance of the market, which includes 8.5 billion daily searches and US$240 billion in annual advertising revenue, is in danger.

Google has a gen AI chatbot called Gemini, which competes with OpenAI's ChatGPT. There are many other gen AI chatbots from Perplexity and Anthropic, as well as from Microsoft and others.

As our access to information shifts again, as it did when Google was introduced 26 years ago, they are all vying for relevancy.

In a statement, a Google spokesperson said the majority of AI Overviews provide accurate information with links for verification. Many of the examples popping up on social media are what she called "uncommon queries," as well as "examples that were doctored or that we couldn't reproduce."

"We conducted extensive testing before launching this new experience, and as with other features we've launched in Search, we appreciate the feedback," the spokesperson said. "We're taking swift action where appropriate under our content policies and using these examples to develop broader improvements to our systems, some of which have already started to roll out."

The Errors AI Overviews Is Making

So, to what extent can we trust Google’s AI Overviews? Some of these bad answers came in response to what Kohn called "very unlikely queries."

It seems clear that in at least some of the cases, the AI Overview is picking up material from parody posts, bad jokes, and satirical sites like The Onion.

"But what that underscores," Kohn said, "is just how easy it is to get specious content into the AI Overview."

It ultimately reveals a problem with grounding and fact-checking content in AI Overviews. 

In his review of Google's Gemini chatbot, which is powering the new search experience, CNET's Imad Khan said the model's propensity to hallucinate should come with a disclaimer: "Honestly, to be safe, just Google it."

How the Mighty Fall?

Even before the mistakes started turning up in AI Overviews, not everyone was happy with the change.

Publishers and other websites, meanwhile, are worrying about losing traffic. According to Grehan, if people stop scrolling below the summaries, sites may see a decline in organic visits.

"I doubt that because like all human behavior in general -- even if the summary provides a lot of detail upfront -- you'll likely want a second opinion as well," he said.

Can we trust Google’s AI Overviews?

The AI Overview mistakes are making a strong case for getting that second opinion.

Liz Reid, vice president and head of Google Search, wrote last week, in a blog post announcing what AI Overviews can do, that early use in its Search Labs experiments over the last year shows users are visiting a "greater diversity of websites" with AI Overviews and the links included "get more clicks than if the page had appeared as a traditional web listing for that query."

But just three months after another public embarrassment -- Gemini's image generation functionality was put on hold because it depicted historical inaccuracies like people of color in Nazi uniforms -- the question remains whether we're starting to see cracks in the foundation of the once omnipotent search powerhouse.

In her post, Reid also wrote: "We've meticulously honed our core information quality systems to help you find the best of what's on the web."


At Google’s I/O conference last week, the company announced that it is rolling out AI Overviews (formerly known as the Search Generative Experience, or SGE) as a new AI feature in Google Search. The feature promises to deliver faster answers by providing a short response to your query, potentially saving you the time you would otherwise spend reading information from multiple sources.

When you type a search term into Google, Google’s Gemini AI processes it. If Gemini determines that it can provide a short answer or snapshot that meets your query’s needs, you will see an AI Overview box. The AI Overview includes the AI-generated response as well as links to sources so you can read more about the topic.
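To make that flow concrete, here is a minimal, hypothetical Python sketch of the retrieval-augmented pattern the article describes: retrieve sources for a query, ground a summary in them, and return the answer together with the source links. The retriever and the `summarize` "model" are stand-in stubs for illustration only, not Google's actual systems, and the URLs are made up.

```python
def retrieve(query, index):
    """Return (url, text) pairs whose text shares at least one word with the query."""
    words = set(query.lower().split())
    return [(url, text) for url, text in index
            if words & set(text.lower().split())]

def summarize(query, sources):
    """Stand-in for an LLM call: stitch the retrieved snippets into a summary."""
    snippets = " ".join(text for _, text in sources)
    return f"Summary for '{query}': {snippets}"

def ai_overview(query, index):
    """Ground the answer in retrieved sources; show no overview box otherwise."""
    sources = retrieve(query, index)
    if not sources:
        return None
    return {
        "answer": summarize(query, sources),
        "links": [url for url, _ in sources],  # citations for verification
    }

# Toy "web index" of (url, snippet) pairs.
index = [
    ("https://example.com/pizza", "Cheese sticks better when the pizza rests."),
    ("https://example.com/rocks", "Geologists classify rocks by formation."),
]

overview = ai_overview("why does cheese slide off pizza", index)
```

The key design point, and the one the article's examples show Google struggling with, is the `if not sources` branch: a grounded system should decline to answer rather than summarize material it cannot tie back to a source, and a summarizer that treats a satirical snippet as a trustworthy one will faithfully repeat it.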

Do you think we can trust Google’s AI Overviews? As with any AI model, Gemini sometimes “hallucinates,” or generates incorrect information, so it’s essential to verify the sources. As Google continues rolling out new AI features such as AI Overviews, these tools aren’t infallible, and it’s up to us as consumers to make sure we’re consuming accurate information.

