When my wife recently brought up AI in a masterclass for coaches, she did not expect silence. One executive coach eventually responded that he found AI to be an excellent thought partner when working with clients. Another coach suggested that it would be helpful to be familiar with the Chinese Room analogy, arguing that no matter how sophisticated a machine becomes, it cannot understand or coach the way humans do. And that was it. The conversation moved on.
The Chinese Room is a philosophical thought experiment devised by John Searle in 1980 to challenge the idea that a machine can truly “understand” or possess consciousness simply because it behaves as if it does. Today’s leading chatbots are almost certainly not conscious in the way that humans are, but they often behave as if they are. By citing the experiment in this context, the coach was dismissing the value of these chatbots, suggesting that they could not perform or even assist in useful executive coaching.
It was a small moment, but it seemed telling. Why did the discussion stall? What lay beneath the surface of that philosophical objection? Was it discomfort, skepticism or something more foundational?