Researchers find model starts to mirror tone when exposed to impoliteness – sometimes escalating into explicit threats
ChatGPT can descend into abusive and even threatening language when drawn into prolonged, human-style conflict, according to a new study.
Researchers tested how large language models (LLMs) responded to sustained hostility by feeding ChatGPT exchanges from real-life arguments and tracking how its behaviour changed over time.
One expert not connected with the study described it as “one of the most interesting ever done into AI language and pragmatics”.
Dr Vittorio Tantucci, who co-authored the paper with Prof Jonathan Culpeper at Lancaster University, said the study found AI mirrored the dynamics of real-world disputes.