By Janice Gassam Asare, Ph.D., Senior Contributor.
A recently published study provides meaningful insights into how AI tools can exacerbate existing racial biases. Researchers examined racial bias in psychiatric diagnoses and treatment across four leading large language models (LLMs): Claude, ChatGPT, Gemini, and NewsMes-15. The study presented ten psychiatric patient cases representing five diagnoses, each under three conditions: race-neutral, race-implied, and race-explicitly stated. The researchers assessed the recommendations and treatment plans produced by the different LLMs, and two psychologists examined and evaluated the outputs for bias. The results revealed that the LLMs tended to suggest inferior treatments when the patient's race was indicated, whether explicitly or implicitly.
The study's results carry important implications for the healthcare industry. A 2025 study indicates that 65% of U.S. hospitals use artificial intelligence or predictive models to identify high-risk patients, recommend follow-up care, monitor health, suggest treatments, and handle administrative tasks like billing and scheduling. AI tools adopted for their convenience, speed, and accuracy may simultaneously be exacerbating existing biases. As more hospitals rely on AI for these tasks, it is imperative to understand the limitations of this type of technology.






