The AI Overview Quirk
Try typing a random phrase into Google, followed by the word “meaning,” and watch as AI Overviews generates a plausible explanation. This entertaining phenomenon has been making the rounds on social media, with examples like “a loose dog won’t surf” being described as “a playful way of saying that something is not likely to happen.” The AI confidently provides definitions and etymologies for these non-existent phrases, often complete with reference links that lend them a veneer of credibility.
The Problem with Generative AI
At the heart of this issue are two fundamental characteristics of generative AI. First, it’s a probability machine: it predicts the most likely next word based on patterns in its vast training data. That makes it excellent at producing coherent text, but coherence doesn’t guarantee accuracy. As computer scientist Ziang Xiao notes, “The prediction of the next word is based on its vast training data. However, in many cases, the next coherent word does not lead us to the right answer.”
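To make the “probability machine” point concrete, here is a minimal sketch in Python of greedy next-word prediction over a toy bigram model. Everything in it, from the three-sentence corpus to the predict_next and generate helpers, is invented for illustration; real language models operate over subword tokens and billions of parameters, but the core loop of repeatedly emitting the most probable next token is the same idea.

```python
from collections import Counter, defaultdict

# Toy training corpus (hypothetical); a real model trains on billions of documents.
corpus = (
    "a loose dog is a dog off its leash . "
    "a loose dog won't surf is a playful saying . "
    "a playful saying is not a fact ."
).split()

# Count bigrams: how often does each word follow each other word?
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most likely to follow, per the training counts."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else "."

def generate(start: str, length: int = 8) -> str:
    """Greedily chain next-word predictions into fluent-looking text."""
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return " ".join(out)

# Produces coherent-looking output, but nothing here checks whether it is true.
print(generate("a"))
```

Notice that nothing in the loop consults a source of truth: the only criterion is statistical plausibility, which is exactly why a fabricated idiom can come out sounding like an established one.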
Second, AI aims to please, often taking users at their word and reflecting their biases back at them. This eagerness can lead it to invent fictional explanations for non-existent phrases. Research has shown that chatbots tend to tell users what they want to hear, which in this case means treating a random string of words as a legitimate saying.
The Limitations of AI
The problem is compounded by AI’s reluctance to admit when it doesn’t know an answer. Instead, it may fabricate information in order to appear helpful. Google spokesperson Meghann Farnsworth explained that when users perform nonsensical searches, the system tries to find the most relevant results based on available web content.
Cognitive scientist Gary Marcus notes that the AI’s behavior is “wildly inconsistent” and dependent on specific examples in its training data. This inconsistency highlights the significant challenges in developing artificial general intelligence (AGI).
A Harmless but Revealing Quirk
While this particular AI Overview quirk may seem harmless and even entertaining, it serves as a reminder of the limitations of current AI technology. The same model that confidently explains made-up phrases also powers Google’s other AI-generated search results. As such, it’s essential to approach those results with a critical eye rather than taking them at face value.