
Google addressed last week's AI Overviews oddities

Google has outlined the steps it took to improve the AI Overviews feature, including better detection of nonsensical queries, reducing satirical content, restricting advice from forums, adding safeguards for sensitive topics, and quickly addressing issues.

by Ellie Ramirez-Camara
Photo by Nathana Rebouças / Unsplash

Google's AI Overviews feature had barely graduated from Search Labs to general availability in the US before it started making waves, thanks to a series of very odd results that circulated widely on social media last week. The situation was aggravated by the spread of faked AI Overviews screenshots alongside the genuine ones, and the issue became serious enough that Google published a blog post explaining the bizarre and inaccurate results and detailing the additional measures the company has put in place to keep AI Overviews accurate and trustworthy.

Google's AI Overviews are designed to handle queries more complex than standard Google Search queries: for example, questions that would otherwise require multiple standard searches to answer satisfactorily. AI Overviews are also meant to surface and organize resources that let users learn more about the answers to their queries. The feature does leverage a custom language model, but unlike open-ended chatbots, the AI Overviews model is not simply generating free-form output. Rather, the model is instructed to identify relevant, high-quality results from Google's index and to show only information backed by highly ranked indexed results. This approach is largely enough to keep the model from hallucinating and to keep its answers accurate.

As is well known by now, no feature involving open-ended queries can be tested against every possible input. So while Google reports that AI Overviews was tested extensively with an emphasis on typical user queries, the unexpected results highlighted two areas for improvement. The first is the model's handling of nonsensical or satirical content, as in the possible responses to the question "How many rocks should I eat?" or even "How do I get cheese to stick to pizza?". The second is a small number of cases Google identified in which AI Overviews misinterpreted the language on webpages, causing it to present inaccurate information.

In response to these issues, Google has already rolled out over a dozen technical improvements to detect and filter out problematic Overviews. These include better handling of nonsensical queries, reduced reliance on satirical content, restrictions on advice drawn from user forums, and additional safeguards for sensitive areas like news and health. Additionally, Google says it is actively taking action to identify AI Overviews that violate its content policies; according to the company, such violations appear in fewer than one in every 7 million unique queries that display AI Overviews. Google's statement concludes by pointing out that, given the sheer scale of the internet, no system will be perfect. However, the company says it is committed to applying its 25 years of experience to deliver an increasingly useful and trustworthy search experience.
