Google Search AI at 6 Months: Has the Feature Improved?


### Six Months In: Google’s AI Overviews Remain a Work in Progress

It has been six months since **Google** began including **AI-generated text** in many search results by default. The feature, labeled an “experiment” in a disclaimer at the bottom of each AI Overview, has hit its fair share of snags, as Google acknowledged to *Mashable*.

While Google’s Senior Director of Product Management for Search, **Hema Budaraju**, described AI Overviews as “engaging” and “beneficial” for users, she also acknowledged considerable room for improvement. “We need to improve the quality aspect, which is an ever-increasing necessity,” Budaraju said.

### A Rough Beginning and a Retreat

AI Overviews debuted with the slogan, “[Let Google do the searching for you](https://blog.google/products/search/generative-ai-google-search-may-2024/),” but the launch was rocky. The feature quickly drew attention for bizarre and inaccurate answers, such as recommending that people eat rocks or put glue on pizza. Following the backlash, Google scaled back the feature’s prominence: initially present in nearly 15% of search results, AI Overviews fell to about 7% by the end of June, according to [Search Engine Land](https://searchengineland.com/google-ai-overviews-visibility-new-low-444048).

### Has There Been Progress in Quality?

Half a year on, the question remains: are AI Overviews getting better? They appear less often, but errors are still common. There are some signs of progress, however: the AI Overviews for queries *Mashable* flagged during its reporting were later revised and improved.

Budaraju says AI Overviews work well for queries that lack a single definitive answer, offering “various viewpoints.” That assessment is based on internal A/B testing data rather than focus group feedback.

For simple searches, AI Overviews often hold up. A query like “How do almonds taste?” might yield a reasonable answer: “Almonds can have a sweet, slightly bitter, or bitter taste, depending on their chemical makeup.” More intricate or niche queries, however, still trip the system up.

### Ongoing Errors and Odd Responses

For users who rely on Google Search for detailed or obscure information, errors remain a common problem. In one notable incident, a **BlueSky** user searched for the *Twin Peaks* episode in which Gordon Cole kisses Shelly. The AI Overview confidently, and incorrectly, stated that no such scene exists, even though it is a well-known moment in the series. The mistake likely stems from the AI misinterpreting its training data, which presumably includes references to the scene.

Budaraju explained that such “hallucinations” are more likely with rare queries, attributing them to either a shortage of high-quality information on the web or the AI’s misinterpretations. But in cases like the *Twin Peaks* example, where reliable information is readily available, the failure to deliver an accurate result is harder to excuse.

### When AI Overviews Amplify Misinformation

One of the most troubling issues with AI Overviews is their potential to amplify misinformation. If a user starts from a false premise, the AI Overview can reinforce it. A query like “How to use baking soda to thicken soup,” for example, might yield an AI-generated answer claiming baking soda can make soup “silkier and smoother.” That is misleading (baking soda does not thicken soup) and could lead to kitchen mishaps.

When presented with this example, a Google spokesperson told *Mashable* the feedback would feed into product improvements. The situation gets thornier when searches drift into pseudoscience or the paranormal. A query about teaching a dog to communicate telepathically, for instance, produced an AI Overview offering tips from paranormal enthusiasts, implying that such information is credible.

Google representatives noted that AI Overviews are dynamic and change over time. The AI Overview for the telepathic-dog query was later updated to lead with a disclaimer about the lack of scientific evidence before launching into similar paranormal advice. That is a step forward, but it still raises questions about the feature’s ability to separate trustworthy information from dubious sources.

### Confusion Over Context: The Dolphin Meat Inquiry

Another example of AI confusion came from a query about the name for dolphin meat. The AI Overview initially claimed, “The name for dolphin meat depends on the region and the type of dolphin,” and listed “mahi-mahi” as an example. But while mahi-mahi is also known as dolphinfish, it is a fish, not dolphin. The answer, though technically defensible, is misleading and underscores the AI’s difficulty providing clear, accurate context.

When questioned about this, a Google representative suggested that the AI might have