Exclusive: Google fails to include safety warnings when users are first presented with AI-generated medical advice

Google is putting people at risk of harm by downplaying safety warnings that its AI-generated medical advice may be wrong.

When answering queries about sensitive topics such as health, the company says its AI Overviews, which appear above search results, prompt users to seek professional help rather than relying solely on its summaries. “AI Overviews will inform people when it’s important to seek out expert advice or to verify the information presented,” Google has said.

But the Guardian found the company does not include any such disclaimers when users are first presented with medical advice.

Google issues a warning only if users choose to request additional health information and click a button labelled “Show more”. Even then, safety labels appear only below all of the extra medical advice assembled using generative AI, and in a smaller, lighter font.