As the dust begins to settle following the launch of ChatGPT (developed by OpenAI), the legal industry is starting to form a view on its potential impact. Millie Jackson, an In-House Legal Consultant at A&H, tested the tool by drafting some commercial agreements and evaluated its impact on the practice of law. For the purposes of this article, it seemed appropriate to ask DALL-E (OpenAI's image generator) to produce an image of AI disrupting a lawyer.
“As a lawyer, I am risk averse and I can be a bit of a technophobe. Therefore, my automatic reaction is almost to shun ChatGPT and discredit it, through fear of it providing a ‘rough and ready’, inaccurate approach to the law. However, if I am brutally honest, there is also a large part of me that feels threatened by it. There is no denying that many industries, including law, feel under threat that jobs may be made redundant in the future, as the algorithms behind AI tools become increasingly intelligent and may even begin to self-learn.
Despite any personal reticence I may possess, there is no denying that this ground-breaking technology is unlikely to disappear any time soon. That said, the AI-powered chatbots (referred to throughout this article as “AI BOTS”) were still suffering a serious case of growing pains last week: Google’s version, Bard, got an answer wrong in a promotional video, subsequently wiping £82bn off the value of its parent company, Alphabet, while Microsoft’s version, Bing, had an existential crisis, declaring its love to a user and throwing tantrums.
Putting ChatGPT to the test
When trialling ChatGPT, I asked it to provide me with a Distribution Agreement and it produced a fairly sound starting point. I then asked it to produce an exclusive Distribution Agreement subject to UK jurisdiction. The tool produced a further version in under 30 seconds, catering to my requirements. I have to admit, I was completely taken aback. I sat staring into space for at least a minute while I processed what ChatGPT had managed to do in a fraction of that time.
In addition, a French legal update provider used ChatGPT to produce its weekly round-robin update. ChatGPT delivered a comprehensive summary of all the relevant recent legal developments in the French jurisdiction. A lawyer then carried out the same update to draw a comparison. Only two main flaws were identified in the ChatGPT version: it was not well-drafted and it failed to credit sources. In terms of the substantive content and legislative developments, though, ChatGPT “nailed it”. Perhaps such flaws can be forgiven for the purposes of a high-level weekly update, where the drafting may not need to be foolproof, unlike in agreements.
There are many questions on my mind, including: how can we, as human beings and as lawyers, coexist with AI BOTS as they become part of our society? Will ChatGPT benefit society by lowering barriers to legal access, challenging the privileged and (arguably) over-intellectualised factions that circulate within the legal industry? Or will it overtake us completely and reduce the need for lawyers? Further, there is already a lot in the media about students using ChatGPT for university essays and job applications. How do we ensure these tools do not make humankind less able and willing to actively search for and seek out the right answers? The process of learning is intrinsically linked to the individual’s unique quest to gather knowledge. By making answers so readily available, will the brain recollect, store and value learnings in the same way? I’ll leave that one for the neuroscientists…
Will AI BOTS enable lawyers to be more productive, increasing efficiency and allowing them to focus on more complex matters? Or will trivial matters eventually become more complex, if individuals routinely use ChatGPT for legal issues which then “go wrong”? If lawyers become accustomed to regularly using AI BOTS, will it reduce the quality of their legal advice? To what extent can the law benefit from automation, and why is it that some lawyers are resistant to this eventuality? One law firm that intends to “embrace the bots” is Allen & Overy, which announced last week that it is introducing Harvey, an AI chatbot, to help its lawyers draft contracts.
Reduce the barriers to legal access?
Notwithstanding the wider implications for the legal industry as AI continues to develop and infiltrate it, for a straightforward legal document such as a low-value letter of claim, there is a fairly strong argument that the provision of access to a wider section of society may well outweigh the cost. As ChatGPT can provide legal news and legislative updates (as seen above), it may challenge the PLCs and LexisNexises of the legal world, which appear to hold a monopoly on legal resources and, in doing so, apply a heavy price tag.
There is also no question that, in some instances, machines outperform humans. For example, they have greater memories, can rapidly collate information from numerous digital sources, work without the need for sleep or breaks, are better at avoiding mathematical errors and more effective at multi-tasking. These advantages could be very useful in routine, high volume document review work, for example within litigation or contract drafting.
Or damage the profession with sloppy drafting?
So, what limitations do we know around ChatGPT to date?
ChatGPT will receive your instructions and “spit out” a basic starting point for a distribution agreement (i.e. provide you with the key clauses to then be adapted to your scenario). However, its rough-and-ready, bare-bones approach to drafting is clearly not the most watertight. This highlights a worrying point: when legal advice is provided by AI BOTS, the individual effectively waives the right of recourse, in the event the advice is wrong, that they would ordinarily have had against an insured legal professional.
Further, ChatGPT claims to be confidential; however, it posts warnings that directly contradict such claims: “Conversations may be reviewed by our AI trainers to improve our systems” and “Please don’t share any sensitive information in your conversations”. This raises a host of further concerns, including copyright infringement, plagiarism and the processing of personal data in breach of data protection legislation.
Another echo chamber?
The darkest side of the app appears to me to be the fact that ChatGPT has the potential to produce biased, inaccurate and harmful content, delivered in a confident and plausible manner.
ChatGPT constructs its responses using the information it was trained on, some of which is sourced from the internet. As users, we have no insight into how the model has been trained or the algorithms operating behind the scenes. Could we therefore be exposing ourselves to yet another echo chamber, where our method of learning becomes increasingly narrow in scope and vulnerable to the infiltration of misinformation? Is treating ChatGPT with any sort of credibility in learning perhaps akin to checking one’s Facebook feed for local news?
There may be a scarier moment still to come, when the AI BOTS begin to deliver opinion pieces to users in search of facts. At that point, the bots will have a real ability to influence behaviour and psychology on a wide scale. This indicates that there are important questions around the fettering, checks and balances on such technology that will need to be addressed.
Personally, I would advise anyone using the tool to fact-check the information received against a credible, verifiable and well-established source, and to take any answers provided with a fairly large pinch of scepticism, certainly at this stage. I am, however, aware that the AI BOTS may become “plug-in” functions of our computer applications, so our ability to avoid them, or even to treat them with caution, may soon be limited.
The future
There is no denying that we as individuals have to embrace technology and keep moving with it, rather than against it, in order to remain relevant, productive and present in modern-day society. However, the critical balance to strike lies in how we effectively use this new technology to our advantage, without inviting it to eat away at the structures and systems that individuals, companies and societies have worked for centuries to establish”.
Get in touch
For A&H Legal, when drafting commercial agreements and advising clients, the critical skill lies in listening astutely and understanding your clients’ needs (commercial risk appetite, company culture and internal governance procedures), whilst drawing upon the team’s previous legal experience (pitfalls seen in previous agreements, or insights into litigation where things have “gone wrong” for other clients), in order to apply such learnings and draft in a way that pre-empts those pitfalls.
If you need any help with in-house legal advice or services, compliance matters, due diligence, investigations or general legal commercial advice, please do not hesitate to reach out to me at: milliejackson@ahlegal.org.uk
Or, feel free to contact my experienced, bilingual colleagues, Golnar Assari, Founder of A&H, at: golnarassari@ahlegal.org.uk and Andrea Baronti, Legal Consultant: andreabaronti@ahlegal.org.uk
References:
“Google AI chatbot Bard sends shares plummeting after it gives wrong answer”, The Guardian
“Bing’s ChatGPT brain is behaving so oddly that Microsoft may rein it in”, MSN
“Allen & Overy introduces AI chatbot to lawyers in search of efficiencies”, Financial Times