Researchers said “chatbots often hallucinate, generating incorrect or misleading responses due to biased or incomplete training data.”

Abi has had very mixed results when asking a chatbot for guidance about her health issues.

Carsten Eickhoff of the University of Tübingen discusses AI in healthcare and how models can give flawed, inaccurate, and harmful advice.