But the problems with the current generation of generative AI have already started to show. Most significant is the tools' tendency to confidently make things up, or "hallucinate." That is problematic enough in search; in the law, the difference between success and failure can be serious, and costly.
Over email, Gabriel Pereyra, Harvey's founder and CEO, says the AI has a number of systems in place to prevent and detect hallucinations. "Our systems are fine-tuned for legal use cases on massive legal datasets, which greatly reduces hallucinations compared to existing systems," he says.
Even so, Harvey has gotten things wrong, says Wakeling—which is why Allen & Overy has a careful risk management program around the technology.
“We’ve got to provide the highest level of professional services,” Wakeling says. “We can’t have hallucinations contaminating legal advice.” Users who log in to Allen & Overy’s Harvey portal are confronted by a list of rules for using the tool. The most important, to Wakeling’s mind? “You must validate everything coming out of the system. You have to check everything.”
Wakeling has been particularly impressed by Harvey's prowess at translation. The tool is strong on mainstream law but struggles in specific niches, where it's more prone to hallucinations. "We know the limits, and people have been extremely well informed on the risk of hallucinations," he says. "Within the firm, we've gone to great lengths with a big training program."
Other lawyers who spoke to WIRED were cautiously optimistic about the use of AI in their practice.
“It is certainly very interesting and definitely indicative of some of the fantastic innovation that is taking place within the legal industry,” says Sian Ashton, client transformation partner at law firm TLT. “However, this is definitely a tool in its infancy and I wonder if it is really doing much more than providing precedent documents which are already available in the business or from subscription services.”
AI's use is likely to remain limited to entry-level work, says Daniel Sereduick, a data protection lawyer based in Paris, France. "Legal document drafting can be a very labor-intensive task that AI seems to be able to grasp quite well. Contracts, policies, and other legal documents tend to be normative, so AI's capabilities in gathering and synthesizing information can do a lot of heavy lifting."
But, as Allen & Overy has found, the output from an AI platform is going to need careful review, he says. “Part of practicing law is about understanding your client’s particular circumstances, so the output will rarely be optimal.”
Sereduick says that while the outputs from legal AI will need careful monitoring, the inputs could be equally challenging to manage. “Data submitted into an AI may become part of the data model and/or training data, and this would very likely violate the confidentiality obligations to clients and individuals’ data protection and privacy rights,” he says.
This is a particular issue in Europe, where the use of this kind of AI might violate the principles of the European Union's General Data Protection Regulation (GDPR), which governs how companies can collect and process data about individuals.