Remember those iconic sports and action flicks, like Billy Bob in Varsity Blues, where someone poses simple questions to a dazed player to check for a concussion? How many fingers am I holding up? What year is it?
Well, even by that simple standard, Google's AI Overviews might struggle to pass a concussion test. This week, users pointed out that the feature failed to consistently determine that the year is, in fact, 2025. (To be clear, we are now roughly midway through 2025.)
Numerous posts about this surfaced online.
TechCrunch spotted the problem and reported that Google fixed this particular bug, though not the underlying issue.
“As with all Search features, we rigorously make improvements and use examples like this to update our systems. The vast majority of AI Overviews provide useful, factual information, and we’re actively working on an update to tackle this type of issue,” a Google spokesperson told the publication.
The whole "what year is it" fiasco is hardly the only time Google's AI Overviews have stumbled over basic questions. Two Mashable staffers asked Google other simple questions: "Is it Friday?" and "How many r's are in blueberry?" It got both wrong, claiming it was Thursday and that there is only one r in blueberry, respectively. Notably, Google's AI tools previously went viral for botching a similar query ("how many r's are in the word strawberry?"). The underlying problem, which is counting, apparently remains unsolved.
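For what it's worth, the counting itself is trivial for ordinary software. A quick sanity check in Python (a minimal sketch, obviously not how Google's AI systems work) confirms the correct answers:

```python
# Count the letter "r" in the words that tripped up AI Overviews.
for word in ("blueberry", "strawberry"):
    print(word, "->", word.count("r"))
# blueberry -> 2
# strawberry -> 3
```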
Google Search expert Lily Ray has described this method of addressing AI bugs as the "whack-a-mole approach." In other words, Google fixes bugs one at a time rather than making systemic improvements.
Accuracy has been a persistent problem for Google's AI Overviews. Mashable evaluated the feature six months after launch, in December of the actual year 2024, and found that significant issues remained despite improvements.
AI Overviews can be particularly unreliable when handling false or incomplete queries. The tool often fabricates information or errs confidently when it lacks a definitive answer. For instance, it became a trend to get AI Overviews to invent meanings for nonsensical, made-up idioms. And recall that when AI Overviews debuted, they confidently told users that a dog had played in the NBA and suggested adding glue to pizza.
Despite these ongoing problems and wrong answers, Google has pressed ahead with rolling out AI Mode to all U.S. searchers. And at Google I/O 2025, the company boasted that AI Overviews reach 1.5 billion users per month.
So when you're out there searching on Google, stay vigilant, and make sure you know what year it is.