Pete Hanlon is CTO of Moneypenny, which handles outsourced phone calls, live chat and digital comms for thousands of companies globally.

Every company deploying an AI voice agent has made the same bet, whether they realize it or not. They’ve let the model design their customer conversation.

The goals are clear: collect the caller’s name, understand their enquiry and route them to the right team. The agent achieves all three. Metrics look healthy. But ask anyone in the organization to explain how the conversation actually unfolds—why the AI asks what it asks, in what order, with what tone, and what happens when the caller says something unexpected—and nobody can tell you. Because nobody decided. The model did.

The Shortcut Most Companies Take

This isn’t a bug; it’s a shortcut: give a large language model (LLM) a set of goals and let it figure out how to get there. This is fast to build, easy to demo and avoids the genuinely hard work of designing the conversation. The LLM decides the order of questions, the phrasing and the recovery when something goes wrong. It improvises your customer experience in real time.
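In practice, the shortcut can be as small as a single system prompt. A minimal sketch of the pattern (the goals and wording here are illustrative, not any real deployment's prompt): the developer states the outcomes, and every conversational decision is delegated to the model.

```python
# Goal-only prompting: the developer lists outcomes; the model
# improvises the question order, phrasing, tone and error recovery.
GOALS = [
    "collect the caller's name",
    "understand their enquiry",
    "route them to the right team",
]

def build_system_prompt(goals):
    """Assemble a goal-only system prompt. Note what is absent:
    nothing specifies question order, wording, or what to do when
    the caller goes off-script -- the LLM decides all of that at
    call time."""
    bullet_list = "\n".join(f"- {goal}" for goal in goals)
    return (
        "You are a voice agent answering calls for a client.\n"
        "Achieve the following goals in whatever way seems best:\n"
        + bullet_list
    )

print(build_system_prompt(GOALS))
```

The absence of any flow specification is exactly the bet described above: the conversation design lives nowhere in the codebase, only in the model's moment-to-moment choices.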