Google has removed some AI health summaries. But why? What is the company saying? – Firstpost

Google has removed artificial intelligence (AI) health summaries.

The development comes after an investigation found that people were being given harmful and misleading information by AI for some searches. Google is the world’s biggest search engine, with a market share of around 91 per cent.

The health overviews are powered by its Gemini AI, which is also its main large-language model (LLM).

But what happened? What do we know?

How health summaries work

First, let’s examine how health summaries work.

According to Google, its health summaries are generated by the company’s Med-Gemini models, which rely on advanced AI to distil complex medical data into accurate reports.

It relies on Med-Gemini-M 1.5 to comprehend vast amounts of information, including entire patient health records or multiple research papers. This allows the AI to identify facts and generate comprehensive summaries.

The company claims Med-Gemini is tuned to optimise medical note summarisation and clinical referral letter generation, and that it occasionally outperforms human experts.

The models also collate information from medical images, videos, and biomedical signals.

The models also rely on self-training, integration, and access to the most up-to-date information to reach their conclusions. The company says Gemini can be melded with patient applications to investigate medical reports and generate insights in a user-friendly format, allowing patients to better comprehend their reports.

What happened?

Google has removed some of its AI summaries following an investigation by the _Guardian_ newspaper.

While the company has claimed that its AI overviews, which rely on generative AI to summarise topics, are “helpful” and “reliable”, the investigation found several summaries produced inaccurate information.

In one case, Google provided false information about liver function tests that, experts remarked, could have left people with serious conditions believing they were in good health.

According to the probe, typing “what is the normal range for liver blood tests” resulted in a lot of numbers, precious little context, and no consideration of factors such as nationality, sex, ethnicity, or age of patients.

The Gemini app icon on a smartphone in this illustration taken October 27, 2025. REUTERS/Dado Ruvic/Illustration

Experts noted that what Google’s AI Overviews categorised as normal was completely different from results that really are normal. They warned this could lead patients to falsely believe their results were fine and neglect their health, including skipping follow-up appointments with their doctors.

The company has now removed AI Overviews for the search items: “what is the normal range for liver blood tests” and “what is the normal range for liver function tests”.

Some are pleased by the developments.

Vanessa Hebditch, the Director of Communications and Policy at the charity British Liver Trust, told the newspaper, “This is excellent news, and we’re pleased to see the removal of the Google AI Overviews in these instances.”

But she also flagged the risk to patients from using AI for such matters.

“However, if the question is asked in a different way, a potentially misleading AI Overview may still be given, and we remain concerned other AI-produced health information can be inaccurate and confusing.”

The newspaper also found that typing slight variations of the original queries into Google, such as “lft reference range” or “lft test reference range”, resulted in AI Overviews. That was a huge worry, Hebditch added.

“A liver function test, or LFT, is a collection of different blood tests. Understanding the results and what to do next is complex and involves a lot more than comparing a set of numbers. But the AI Overviews present a list of tests in bold, making it very easy for readers to miss that these numbers might not even be the right ones for their test. In addition, the AI Overviews fail to warn that someone can get normal results for these tests when they have serious liver disease and need further medical care. This false reassurance could be very harmful.”

What is Google saying?

A company spokesperson told The Independent, “We do not comment on individual removals within Search. In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.”

“Our internal team of clinicians reviewed what’s been shared with us and found that in many instances, the information was not inaccurate and was also supported by high-quality websites.”

Meanwhile, OpenAI, Google, and Perplexity are in an unprecedented fight for artificial intelligence users in India, rolling out freebies in a strategy seen as a way to harvest troves of multilingual training data.

India is the world’s second-biggest smartphone market, with 730 million devices. On average, Indians consume 21 gigabytes of data each month, paying 9.2 cents per gigabyte — one of the world’s lowest mobile data rates.

To lure price-conscious users, Google in November started giving its $400 (Rs. 36,076) Gemini AI Pro subscription for free for 18 months to 500 million customers of Reliance Jio, India’s biggest telecom player. Last week, it added India to dozens of countries where it is offering its heavily discounted “AI Plus” package.

OpenAI has also made its ChatGPT Go plan, which offers extended but not unlimited usage compared with existing plans, free for a year. The plan incurs charges in more than 100 countries and was $54 (Rs. 4,870) in India before being made free to everyone in the country in November.

Just like Google’s AI Pro, the free package is only available in India.

Early download data suggests a jump in usage due to the free plans, with daily active users of ChatGPT in India surging 607 per cent year-on-year to 73 million as of last week — more than double the number in the US — according to data from industry intelligence firm Sensor Tower compiled for Reuters.

Gemini’s daily users in India rose 15 per cent from when it launched the Reliance Jio offer in November to touch 17 million last week, compared to 3 million in the US, the data showed.

Such adoption has made India the biggest market by daily users for both AI chatbots, Sensor Tower noted. Perplexity, meanwhile, has made its Pro tool, priced at $200 (Rs. 18,038) a year globally, free for a year for users of the Indian telecom company Airtel. It says the plan gives unlimited access to its most advanced research tools.

India now accounts for more than a third of Perplexity’s global daily active users, up from just 7 per cent last year, Sensor Tower data showed.

With inputs from agencies
