There is no substitute for expert legal advice from a local solicitor, and that is true not just as a figure of speech but quite literally.
A recent legal case, in which a company attempted to use an AI tool to justify illegally withholding an agreed-upon bonus, has once again put the spotlight on large language models in the legal profession and why relying on them is a terrible idea.
The combination of claimed authority, polite deference and ease of use that these tools offer has caught out several people, from plaintiffs to legal professionals, who tried to take a shortcut through the legal process only to end up in serious legal trouble.
The clearest way to show why this is a bad idea is through notable examples, so here are some of the lessons we can learn from people who used LLMs or other AI tools in legal matters.
Why Did A Company Trust AI Over Its Legal Team?
One of the dangers of LLMs such as ChatGPT is that they have a tendency to tell the user what they want to hear, even when the answer is palpably untrue and acting on it would be reckless or even illegal.
A recent example of this, reported by The Guardian, is the case of Krafton, a South Korean video game publisher that acquired Unknown Worlds Entertainment, the developer of the game Subnautica.
When it acquired the studio in 2021, Krafton agreed to three important contractual clauses, all of which it later attempted to break:
- The studio would remain independent.
- The original co-founders and CEO Ted Gill would retain control and could only be removed for cause (defined as a fair dismissal).
- The studio would receive a $250m (£188.5m) bonus if the game Subnautica 2 met certain sales targets.
After internal sales projections suggested that Subnautica 2 would reach its sales targets, and Krafton’s legal team advised that there was no way around the bonus clause without a major legal challenge, Krafton CEO Changhan Kim chose to trust ChatGPT instead.
Ultimately, the court found that the company had acted improperly and ordered it to reinstate Mr Gill and pay out the bonus.
Trusting a tool that tells you what you want to hear, even if it defies legal reality, is extremely costly.
Why Should Solicitors Avoid Using AI Tools?
A large language model, the type of tool often described as an “AI chatbot”, knows what a correct answer looks like, but does not have the capacity or understanding to ensure that its advice is accurate.
This combination is particularly dangerous in the legal world, where there have been multiple cases of solicitors relying on information that was irrelevant, incomplete or completely made up.
An LLM is designed to give you an answer even if the information to create one does not actually exist.
The Guardian has reported multiple cases in which a lawyer or other legal professional relied on an LLM that generated fake legal case citations, which were then included in court documents.
Often this forces the court to investigate cases that do not exist; it can result in a case being thrown out, and it frequently triggers disciplinary proceedings.
Ultimately, no self-respecting legal professional should rely on AI tools for substantive legal work, and even for routine tasks such as filling out reports or preliminary forms, they should never be used without carefully checking their output.