A comparison between hypnosis and AI shows why fluent systems can still lack understanding – and how future models may overcome this gap.
AI and hypnotised human subjects show striking similarities. Photo: Ecole polytechnique, CC BY-SA 2.0
A new review published in Cyberpsychology, Behavior, and Social Networking suggests a striking conclusion: under hypnosis, the human brain behaves in ways that closely resemble the functioning of a large language model (LLM) such as ChatGPT. The finding challenges long-held assumptions about consciousness and offers important insights for building safer and more reliable artificial intelligence.
The paper, by this author, together with Prof. Brenda K. Wiederhold of the Virtual Reality Medical Centre in San Diego and Prof. Fabrizia Mantovani of the University of Milano-Bicocca, argues that hypnotised minds and LLMs share three core features: both rely on automatic pattern-completion processes, both operate without robust executive oversight, and both can generate sophisticated responses without genuinely understanding them.
