Gumersalls News

The Legal Case That May Have Barred AI From The Courtroom

Mar 18, 2024 | News

Whether undertaking a routine conveyancing procedure or managing a complex contract dispute, clients depend on solicitors for their legal expertise and specialist knowledge.

That work often involves intensive research, consulting experts in particular fields and studying precedents, and clients pay solicitors to ensure it is carried out with confidence, sensitivity and the utmost accuracy.

This is, in part, why the use, or alleged use, of artificial intelligence in court cases has proved controversial, to the point that official judicial guidance issued in mid-December 2023 placed strict limits on the use of AI systems in the legal process.

Whilst it noted that AI could potentially be useful for “administrative or repetitive tasks”, the guidance, as reported by the Financial Times, discouraged its use on several grounds.

One of these was privacy, with the guidance noting that anything entered into an AI chatbot should be considered published, because large language model AI systems may train on user inputs and outputs, potentially incorporating private information in the process.

However, the more significant and well-publicised issue concerns the efficacy of AI, and exactly how relevant and useful the results it generates truly are.

Inaccurate, Incomplete Or Misleading

The process a generative AI uses is complex, but from the user’s standpoint it is simple: a person submits a request, and the AI tool attempts to answer it as best it can.

Unlike a search engine or other reputable tool for searching vast swathes of information, AI chatbots and other generative AI tools endeavour to produce the most plausible answer possible, but because of how they are trained they often cannot judge which information is reliable.

This has led to AI systems providing incomplete information, relying on legal precedent from another country, or even stitching details together into entirely fabricated citations, a failure known as an AI hallucination.

By far the most infamous example of this so far was the case of Mata v Avianca, a personal injury case heard in federal court in New York, involving a metal drinks cart striking the knee of the plaintiff.

After a lengthy procedural battle over whether the case should proceed, the legal team for Mr Mata submitted a brief arguing why it should, ostensibly prepared by Peter LoDuca but in fact prepared by his colleague Steven Schwartz, who used ChatGPT to research similar cases.

Both legal teams and the judge struggled to find any trace of six of the cases cited in the brief, ultimately concluding that they did not exist.

Mr Schwartz himself later confirmed in an affidavit that the six citations had been suggested by ChatGPT and were fabricated.

Mr Schwartz admitted that he had not been aware that generative AI could fabricate case citations and had assumed in good faith that the cases existed, although this was not enough to spare him, Mr LoDuca and the legal firm Levidow, Levidow & Oberman a fine. The case was also dismissed in the airline’s favour.

Whilst the outcome of this particular case was relatively minor, the international attention it generated has had ramifications for the use of AI in courtrooms around the world, including the strongly worded guidance issued in the UK.