Experts described the findings as “concerning” and “hazardous.”


Earlier this month, The Guardian published an investigation revealing that Google was presenting misleading and outright false information in its AI summaries in response to certain medical questions. Now those summaries appear to have been removed. According to the initial report:
In one instance that experts labeled “particularly hazardous,” Google incorrectly advised people with pancreatic cancer to steer clear of high-fat diets. Experts said this was precisely the opposite of the correct advice and could increase patients’ chances of dying from the disease.
In another “concerning” case, the company supplied incorrect information about vital liver function tests, which could leave people with serious liver disease mistakenly believing they are in good health.
As of this morning, the AI summaries for queries such as “what constitutes a normal spectrum for liver blood tests?” have been disabled entirely. Google declined to comment to The Guardian on the specific removal. Spokesperson Davis Thompson told The Verge: “We allocate significant resources to the quality of AI Overviews, particularly regarding health topics, and the vast majority deliver accurate content. Our internal clinical team reviewed what’s been provided to us and discovered that in several cases, the information was not erroneous and was also backed by reputable websites. In situations where AI Overviews lack contextual information, we strive to implement comprehensive improvements, and we also take necessary actions under our regulations where applicable.”
Still, this is just the latest controversy for a feature that has advised users to put glue on pizza and to eat rocks, and that has faced numerous lawsuits.
Update, January 11th: Added comment from Google.