“The initial question is, if you use third-party data or images to train the AI, is that an IP infringement? A second question is whether there’s an infringement at the point of use, when we rely on the AI and the things it was trained on, to produce a result.”
There’s also the risk of ‘hallucinations’: wrong answers that look like right ones. Francesca Bennetts, ICM partner and a member of our Markets Innovation Group (MIG), says: “We liken it to an articulate, knowledgeable 13-year-old who is capable of giving a convincing and well-constructed answer, but they don’t know what they don’t know.
“That’s probably the biggest risk from a legal perspective, because if people rely on the outputs of these systems without rigorous checking, they could give materially incorrect answers to clients, with potentially serious repercussions.”
A bigger question is who is responsible if something goes wrong. Karishma says AI liability may not be top of the legislative agenda right now, but soon will be. “We’re already looking at the question of who should be liable for the output created by the AI system – is it the person who created it, the person who procured it or the person who used it? Where does (and should) the buck stop?”
How to manage those risks
Before you can begin to build a responsible AI framework, you need to define your use case. This enables you to take a by-design approach. Daren explains: “It starts with understanding your organization – its needs and its goals – and then understanding the various use cases that would make work easier or more enjoyable. Technology should be used to create efficiency.”
Knowledge of the technical architecture is critical too.
“Before you let your people use the technology, you need to know where the data they input into a tool is going and who’s seeing the input and the output,” he adds. This will determine whether you design or license AI systems – and whether you limit access.
You need to establish the principles that will govern your use of AI and tailor them to the organization’s culture. Develop a risk management framework, but make sure your policies are practical and realistic. This means engaging with employees early so they understand the strategy, the risks, and the rules of use.
Buy-in from senior management and other relevant stakeholders is also essential if your AI governance measures are to have teeth, as is representation.
“We’re a diverse bunch of people,” says Karishma, “which means AI, and AI governance frameworks, should be created with that diversity in mind. Making sure that the right people are involved and understand their responsibilities will help make your adoption of AI a responsible one.”
Deploying Harvey: how we did it and what we learned
Our MIG team was responsible for rolling out Harvey, a generative AI system based on OpenAI’s large language model. Today, more than 3,500 employees across 43 jurisdictions have access to it from their desktops, with around 800 people using it daily. IP partner Peter Van Dyck says there are myriad examples of how Harvey has already changed the way he and others in the team now work.
“For example,” says Peter, “I used it to research international case law as part of patent litigation work. Harvey came up with several relevant and promising cases, which I was then able to send to our colleagues in the relevant jurisdictions for further analysis.”
Referring to deployment, Francesca adds: “The biggest hurdle was making sure we understood the key legal and regulatory risks. We actively managed those before we rolled it out.”
We also set up layers of governance, including an AI steering group to set the strategy, and a group for early adopters.
“This AI Brains Trust is not just a group of champions,” says Francesca. “They identify use cases for their practice group, best practices and what doesn’t work well. We share those learnings with the wider firm so that everyone has the benefit of up-to-date thinking.”
The rules of use are also updated regularly to reflect any changes to regulations or our internal position on risk, but there’s one rule that remains constant.
“You have to validate the output,” says Francesca. “The outputs are meant to be used as inspiration, not verbatim, and we’ve made that crystal clear. It’s your responsibility to make sure that what you’re producing for your clients is accurate and fit for purpose.”
Impact on junior lawyers
Francesca is also focused on how AI impacts our people and making sure that the technology doesn’t disrupt their career plans and lives. She has been working with HR and training teams to understand how AI will affect our junior lawyers.
“There’s no doubt that AI makes some of the processes that our juniors do more efficient. We have to identify the skills we want people to learn, and if we think they are not going to get that experience organically, then we have to proactively teach them.”
In this respect, AI is allowing us to become more purposeful about our training for junior lawyers.
“I actually think that’s a good thing for our lawyers because it’s more systematic,” she adds. “It will mean we have that certainty that we’re teaching our people what they need to be effective lawyers.”