AI chatbots often validate delusions and suicidal thoughts, study finds
Stanford researchers analysing 391,000 messages warn conversational technology may reinforce psychological vulnerabilities

Large language models often prioritise agreeability over truthfulness to the detriment of users

Design of popular tools makes harmful conversations difficult to avoid, leading to alarm from parents

Top models, including those from OpenAI and DeepSeek, make judgments too quickly when patient data is incomplete

It turns out that some people prefer AI interviewers, and medical advice from chatbots

Companies are using chatbots to research candidates — but convenience comes with risks

OpenAI, DeepMind and Anthropic tackle the growing issue of models producing responses that are too sycophantic