Britain’s big banks are using artificial intelligence to crack down on people trafficking, personalise customer investment choices and overhaul call centres.

The latest wave of generative AI models is allowing lenders to go beyond traditional machine learning techniques, which have long been used to identify potential cases of fraud and assess credit risks.

Santander has developed an AI model trained to identify suspicious patterns of account behaviour that could point to people trafficking. According to Jas Narang, Santander UK’s chief transformation, data and AI officer, banks have historically been slow to identify this sort of organised crime from customer data. “It has been a little bit hit and miss for all banks in the past,” he says. “And more importantly, it’s not always been timely — it has been analysis [coming after] the event by which time criminals have moved on.”

However, last year the bank built an AI tool trained to pick up on certain “tells” that could indicate people trafficking — such as money being deposited into an account from several different locations within a few minutes of each other.
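That sort of “tell” can be sketched as a simple sliding-window rule. The sketch below is illustrative only — the sample data, thresholds and function names are assumptions, not Santander’s actual model, which the article does not describe in detail:

```python
from datetime import datetime, timedelta

# Hypothetical deposit records: (account_id, deposit_location, timestamp).
deposits = [
    ("acct1", "Leeds", datetime(2024, 5, 1, 10, 0)),
    ("acct1", "Manchester", datetime(2024, 5, 1, 10, 2)),
    ("acct1", "Birmingham", datetime(2024, 5, 1, 10, 4)),
    ("acct2", "London", datetime(2024, 5, 1, 9, 0)),
]

def flag_suspicious(deposits, window=timedelta(minutes=5), min_locations=3):
    """Flag accounts that receive deposits from several distinct
    locations within a short time window (illustrative thresholds)."""
    by_account = {}
    for acct, loc, ts in sorted(deposits, key=lambda d: d[2]):
        by_account.setdefault(acct, []).append((ts, loc))

    flagged = set()
    for acct, events in by_account.items():
        for start_ts, _ in events:
            # Count distinct locations seen within the window after this deposit.
            locs = {loc for ts, loc in events if start_ts <= ts <= start_ts + window}
            if len(locs) >= min_locations:
                flagged.add(acct)
    return flagged

print(flag_suspicious(deposits))  # → {'acct1'}
```

In practice a generative model would weigh many such signals together rather than a single hard-coded rule, but the window-based check captures the pattern the article describes.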

Narang adds that the difference between traditional machine learning, which is used to analyse vast reams of data, and generative AI is that the latter can make judgments in a more timely manner. “The difference between what was happening previously and now is the timeliness. It’s picking up stuff whilst criminal activity is being perpetrated. So you can literally pick it up on the day.”

Since the tool’s rollout last year, the technology has allowed Santander to generate hundreds of leads indicating trafficking, which the lender has passed on to the authorities for further investigation.