In-house legal departments with an innovation habit have been adopting generative artificial intelligence since they first recognised, a couple of years ago, its potential to improve the speed and quality of legal work while cutting its cost.

Indeed, many legal tasks are particularly well suited for a new wave of automation, from reviewing contracts to supporting litigation.

So far the early adopters have reported incremental changes rather than dramatic transformation. But they are experimenting and expanding uses for the tech, while also figuring out questions such as how to keep human wisdom in the mix, build in safeguards and train staff in a new range of skills.

Sabastian Niles is chief legal officer at global business software company Salesforce, where his team is working on using “AI agents” that can make decisions, take action or complete multi-step tasks with less human involvement. One aim, he says, is to “define and shape what it means to have human and agentic AI teams working together”.

As befits a provider of business software, his team is both involved in the legal aspects of new products developed for customers and using the tech itself. Working with Salesforce product and engineering teams, the in-house lawyers have helped to develop legal and ethical safeguards — so-called guardrails — that are built into its autonomous AI software. Their work includes helping to develop controls that put limits on what the autonomous AI can do, and mechanisms that help the agents improve themselves and become more reliable.