The AI Search Era Arrives
Google I/O 2025 marked a significant milestone for AI Search, with the tech giant devoting nearly the entire event to artificial intelligence. The conference unveiled several key developments: a new AI video generation tool called Flow, a $250-a-month AI Ultra subscription plan, numerous updates to Gemini, a virtual try-on shopping feature, and the rollout of the AI Mode search tool to everyone in the United States.

Notably, despite nearly two hours of Google leaders discussing AI, the word “hallucination” never came up. Hallucinations are the invented facts and inaccuracies that large language models present in their responses, and by AI companies’ own metrics they are becoming more prevalent, with some models hallucinating more than 40% of the time.
The Hallucination Problem
The issue of hallucinations remains one of the most stubborn and concerning problems with AI models. Judging by Google’s presentation, you might assume models like Gemini never hallucinate; the reality is different, and every Google AI Overview carries the warning: “AI responses may include mistakes.” The closest the event came to acknowledging the issue was a discussion of AI Mode and Gemini’s Deep Search capabilities, in which Google said the model would check its own work before delivering an answer. Without further details, though, that process sounds more like a theoretical solution than a genuine fact-checking mechanism.
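Google did not explain what “check its own work” means in practice. For illustration only, a common pattern is a draft-then-verify loop like the sketch below, which assumes a generic `generate` function standing in for any LLM call; it is not a description of how Gemini or AI Mode actually does this.

```python
# Illustrative sketch only: a draft-then-verify loop of the kind "check its own
# work" usually refers to. `generate` stands in for any LLM call; nothing here
# reflects how Gemini or AI Mode actually implements the check.
from typing import Callable, List

def self_check_answer(question: str, sources: List[str],
                      generate: Callable[[str], str],
                      max_rounds: int = 2) -> str:
    """Draft an answer, ask the model to audit it against the sources,
    and revise until the audit passes or the round limit is reached."""
    context = "\n\n".join(sources)
    answer = generate(f"Using only these sources:\n{context}\n\nAnswer the question: {question}")
    for _ in range(max_rounds):
        audit = generate(
            "List any claims in the answer below that are NOT supported by the sources, "
            f"or reply OK.\n\nSources:\n{context}\n\nAnswer:\n{answer}"
        )
        if audit.strip().upper().startswith("OK"):
            return answer
        answer = generate(
            "Rewrite the answer so it only makes claims supported by the sources, "
            f"fixing these issues:\n{audit}\n\nSources:\n{context}\n\nOriginal answer:\n{answer}"
        )
    return answer
```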
AI skeptics argue that Silicon Valley’s confidence in these tools appears disconnected from actual results: real users notice when AI tools fail at simple tasks like counting, spellchecking, or answering basic questions. For instance, Gemini 2.5 Pro, Google’s most advanced AI model, scores just 52.9% on SimpleQA, a benchmark that evaluates language models’ ability to answer short, fact-seeking questions.
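For context on what a SimpleQA-style score measures: the evaluation poses short factual questions, grades each response as correct, incorrect, or not attempted, and reports the share answered correctly. The sketch below shows that scoring logic in miniature; the exact-string grading and toy questions are simplifying assumptions, since real evaluations use a model-based grader.

```python
# Minimal sketch of SimpleQA-style scoring: each short factual question is graded
# correct / incorrect / not attempted, and the headline number is the percentage
# answered correctly. Real evaluations use an LLM grader rather than exact match.
from dataclasses import dataclass

@dataclass
class Item:
    question: str
    reference: str
    model_answer: str  # empty string means the model declined to answer

def score(items: list[Item]) -> dict:
    correct = incorrect = not_attempted = 0
    for item in items:
        if not item.model_answer.strip():
            not_attempted += 1
        elif item.model_answer.strip().lower() == item.reference.strip().lower():
            correct += 1
        else:
            incorrect += 1  # a confident wrong answer, i.e. a hallucination
    total = len(items)
    return {
        "correct_pct": 100 * correct / total,
        "incorrect_pct": 100 * incorrect / total,
        "not_attempted_pct": 100 * not_attempted / total,
    }

if __name__ == "__main__":
    items = [
        Item("Capital of Australia?", "Canberra", "Canberra"),
        Item("Year the first iPhone shipped?", "2007", "2008"),
    ]
    print(score(items))  # 50% correct, 50% incorrect on this toy set
```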
Google’s Response
When questioned about the SimpleQA result and hallucinations, a Google representative pointed to the company’s official explainer on AI Mode and AI Overviews. The explainer acknowledges that AI Mode may sometimes confidently present inaccurate information, a phenomenon known as “hallucination.” It also says Google is pursuing novel approaches, such as agentic reinforcement learning, to improve factuality by rewarding the model for generating accurate statements backed by its inputs.
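The explainer does not say how that reward is computed. Purely as an illustrative sketch, a grounding-style reward could score a response by the fraction of its sentences that can be matched against the retrieved inputs; the word-overlap check below is an assumed stand-in for whatever verifier is actually used.

```python
# Illustrative sketch of a "grounded generation" reward of the kind agentic RL
# could optimize: a response earns reward in proportion to how many of its
# sentences appear supported by the retrieved inputs. The word-overlap check is
# a toy stand-in for a real entailment or citation verifier.
import re

def _words(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def supported(sentence: str, sources: list[str], threshold: float = 0.6) -> bool:
    """Crude support check: enough of the sentence's words occur in one source."""
    sent = _words(sentence)
    if not sent:
        return True
    return any(len(sent & _words(src)) / len(sent) >= threshold for src in sources)

def grounding_reward(response: str, sources: list[str]) -> float:
    """Fraction of sentences in the response that pass the support check."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", response.strip()) if s]
    if not sentences:
        return 0.0
    return sum(supported(s, sources) for s in sentences) / len(sentences)

sources = ["Flow is a new AI video generation tool announced at Google I/O 2025."]
print(grounding_reward("Flow is a new AI video generation tool.", sources))  # 1.0
print(grounding_reward("Flow was first released in 2019.", sources))         # 0.0
```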
The Path Forward
While hallucinations may be solvable someday, current research suggests they remain a significant challenge for large language models. Even so, companies like Google and OpenAI are pushing ahead into what is likely to be an error-prone era of AI Search unless significant breakthroughs occur. As the tech industry moves forward, the gap between the confidence in AI tools and their actual performance remains a critical issue to address.