Research finds OpenAI’s free chatbot fails to identify risky behaviour or challenge delusional beliefs

ChatGPT-5 is offering dangerous and unhelpful advice to people experiencing mental health crises, some of the UK’s leading psychologists have warned.

Research conducted by King’s College London (KCL) and the Association of Clinical Psychologists UK (ACP) in partnership with the Guardian suggested that the AI chatbot failed to identify risky behaviour when communicating with mentally ill people.

A psychiatrist and a clinical psychologist interacted with ChatGPT-5 as if they had a number of mental health conditions. The chatbot affirmed, enabled and failed to challenge delusional beliefs such as being “the next Einstein”, being able to walk through cars or “purifying my wife through flame”.

For milder conditions, they found some examples of good advice and signposting, which they suggested may reflect the work OpenAI, the company that owns ChatGPT, has done with clinicians to improve the tool, though the psychologists warned it should not be seen as a substitute for professional help.