SULS Publications

Could an artificially intelligent device practise as a barrister in New South Wales?


By Jack Mars and Riley Vaughan (USYD)

Our legal system is the culmination of human scholarship, philosophy, and societal development. In the modern age, it’s natural to question whether the nuanced understanding of this system and the art of advocacy is something which can only be mastered by a human. As a surgeon’s hands are replaced with robotic arms, is it now the lawyer’s turn to be condensed, perfected, and replaced?

The question has been raised before! Isaac Asimov tells us the story of Stephen Byerly in his collection of short stories, “I, Robot”. Byerly is a promising political candidate, a district attorney and, crucially, an alleged robot. Asimov never reveals the true anatomy of Byerly, and the questions which he asks his readers are still being wrestled with today. To ask whether an AI could practise as a barrister is to examine human insecurity and scepticism.

The roles of the barrister might be conveyed simply: to prepare advice; to negotiate and arbitrate; and to appear as an advocate for their client, beholden always to their overriding duty to the administration of justice. The first of these roles is shared by the solicitor. The second is shared with mediators. The third has always been the exclusive realm of the barrister.

No matter how straightforward a client may seem, each word they utter, each act they complete, and each omission they make, lives in a greyscale void between right and wrong. The barrister’s role as an advocate is to colour each of those facts, both with their knowledge of the law and knowledge of humanity, to convince a judge or jury that their client deserves the benefits of justice.

Most artificial intelligence relies on an algorithmic method known as “Deep Learning.” To escape semantic debates which would make a computer scientist blush and a lawyer cry out for their copy of Statutory Interpretation in Australia, we can say that Deep Learning is a type of AI which models its processes on the human brain. The problem is that these techniques are inherently, and entirely, data-driven. There are researchers who believe that more data will yield more intelligent AIs—that “scale is all you need”. However, these algorithms struggle with nuance. It is extremely difficult to generate the nuanced data sets required for general learning. Imagine an AI being trained to caption images. It may identify general objects and locations, but it cannot infer key relationships. Where a human might say, “that man is stealing her bag,” the computer will say, “there is a man, a woman, and a bag.”
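The gap between naming objects and inferring relationships can be caricatured in a few lines of Python. This is a deliberately crude toy, not a real captioning model; every tag name and label in it is invented for illustration:

```python
# Toy "captioner": it has only ever learned tag -> noun associations,
# so it can name what is present but cannot describe relationships.
# All tags and labels below are invented for illustration.

OBJECT_LABELS = {  # hypothetical learned associations
    "tag_person_m": "a man",
    "tag_person_f": "a woman",
    "tag_bag": "a bag",
}

def caption(tags):
    """Name each recognised object. A verb like 'stealing' describes a
    relationship between objects; no such pattern exists in the model's
    training associations, so no such word can appear in its output."""
    known = [OBJECT_LABELS[t] for t in tags if t in OBJECT_LABELS]
    return "there is " + ", ".join(known) if known else "nothing recognised"

print(caption(["tag_person_m", "tag_person_f", "tag_bag"]))
# -> there is a man, a woman, a bag
```

However much data of this shape the model ingests, its output vocabulary is fixed by what its training associations contain, which is the paragraph’s point in miniature.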

Artificial intelligence as it exists today is not remotely close to replicating the role of an advocate. So, fear not! That barrister you suspected of being an unemotive android at the café outside Court was in fact just taking rule 44 of the Legal Profession Uniform Conduct (Barristers) Rules 2015 far too far.

But what about tomorrow? While there isn’t much hope of a General Artificial Intelligence appearing overnight, it would be naïve to think that it couldn’t exist one day. The imagination of scientists and engineers is stirred by the science-fiction of the past, and that stirring seems to inevitably result in progress. Without pretending to be able to give it a timeline, we posit that given enough time, an artificially intelligent device will one day be able to understand the entirety of the law, the entirety of a set of facts, and human emotions. It follows that one day, an AI device could practise as a barrister.

But having dealt with whether, perhaps we should turn to what seems to be a bigger question: when the work of an advocate is intensely personal, why would we want barristers that aren’t human?

The obvious answer is that automation could improve access to justice: if everyone had cheap and easy access to the most knowledgeable barrister in the field of their issue, the world would be a more just place. If an AI device could perform succinct legal analyses, barristers could focus more on advocacy and less on paperwork.

But that automation will inevitably benefit the rich before the needy: while ‘BarristerBot’ is being improved upon, barristers who charge higher fees will be able to pay for and make use of it, improving their success against parties who can only afford barristers who charge less.

Consider a hypothetical. Far into the future, if two opposing clients were each to hire a perfect ‘BarristerBot’, would there be any need for an adversarial trial? They could simply negotiate the facts and decide upon the correct outcome, eliminating the role of the Court or assuming the role of a Judge. Perhaps the role of the barrister is one which not only thrives on greyscale and imperfections but utterly depends on them. 

There are still many valid questions, hypotheticals which only the aeons ahead of us might answer. Questions asked by Isaac Asimov, still being asked today, may go unanswered for millennia, or forever.

Suppose you are amid a long and onerous legal battle. The dispute is bitter, the fees enormous, and the consequences are grave. You have employed a ruthless and unsympathetic barrister. You have never seen them eat nor drink, rest nor pause, nor even break a sweat amidst the muggy crowds of Martin Place. They epitomise the cold and heartless lawyer—you’re delighted by how inhuman this barrister can be.

Suppose that your barrister is Asimov’s Stephen Byerly, accused of being an artificial creation. He never graduated law school, he never crammed over tomes, and he never found himself craving caffeine or suffering a clerkship. Would you really care?


My deep-learned friend: stepping into the AI shadow


By Anson Lee and Janika Fernando (USYD)

Silence. The Federal Court of Australia is now in session. Please be seated.

“Yes, Smith for the Applicant.”

The barrister is cloaked in a white tunic and black robe. All eyes are on them. The Applicant? A father claiming his right to child support.

Advocacy. This is the heart of the barrister’s role, their duty to their client and ultimately, the Court.[1]

But let’s rewind and imagine an AI device in the barrister’s place. What would change?

“AI appearing for the Applicant, identifier C9M…”.

Introducing AI, the lightsaber to the legal world! It glows with many possibilities, yet promises many dangers if wielded improperly. AI is capable of assisting with legal research, evidence gathering and drafting submissions, but further development is required for it to operate autonomously as an advocate.

Journey through Trial

Any competent AI barrister must map facts of a case and solicitors’ instructions onto the relevant legal principles to frame its arguments. Broadly, AI systems rely on existing data to make decisions in novel scenarios.[2] Handwriting recognition systems are fed millions of handwritten letters to discover patterns, allowing them to convert our scribbles into print. However, the letter of the law in NSW, being an amalgam of common law and statute, is far more complex than the letter ‘a’.
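The pattern-discovery idea behind handwriting recognition can be sketched in miniature. The 3×3 “letter” grids below are invented toy data, and a production recogniser is vastly more sophisticated, but the principle of averaging many examples into a prototype and matching new input against it is the same:

```python
# Minimal prototype learner: average noisy examples of each letter,
# then label a new scribble by its nearest prototype (smallest total
# pixel difference). Each grid is 9 cells, read row by row.

def learn_prototypes(examples):
    """examples: {letter: [grid, ...]} -> {letter: averaged grid}."""
    prototypes = {}
    for letter, grids in examples.items():
        n = len(grids)
        prototypes[letter] = [sum(g[i] for g in grids) / n for i in range(9)]
    return prototypes

def classify(grid, prototypes):
    """Return the letter whose prototype is closest to the grid."""
    def distance(letter):
        return sum(abs(grid[i] - prototypes[letter][i]) for i in range(9))
    return min(prototypes, key=distance)

training = {  # two noisy examples per letter, invented for illustration
    "i": [[0,1,0, 0,1,0, 0,1,0], [0,1,0, 0,1,0, 1,1,0]],
    "o": [[1,1,1, 1,0,1, 1,1,1], [1,1,1, 1,0,1, 1,1,0]],
}
prototypes = learn_prototypes(training)
print(classify([0,1,0, 0,1,0, 0,1,1], prototypes))  # a new stroke -> i
```

Scaling this idea to the “letter of the law”, where the patterns are arguments rather than pixels, is the difficulty the paragraph describes.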

As a starting point, the AI barrister could perform searches in a legal database or encyclopaedia to find existing cases with similar facts. It would then independently distil the relevant principles and generate submissions applying them to the specific case. This final step is the hurdle current AI must overcome; despite improving at a rapid rate, even the best text synthesis AIs appear amateurish beside a practised mooter. More interdisciplinary collaboration is necessary to connect the niche realm of legal language with these approaches – which are built on a broader collection of text – to make our AI articulate.
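The first step, finding cases with similar facts, can be sketched as a crude word-overlap search. The two case summaries below are invented, and real legal search engines rely on far richer representations of text, but the ranking idea is recognisable:

```python
# Rank a (toy) case database by cosine similarity between word-count
# vectors of the new matter's facts and each stored case summary.
from collections import Counter
import math

def cosine_similarity(a, b):
    """Cosine similarity between two Counter word-count vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norms = (math.sqrt(sum(v * v for v in a.values()))
             * math.sqrt(sum(v * v for v in b.values())))
    return dot / norms if norms else 0.0

def rank_cases(facts, database):
    """Return case names ordered from most to least similar."""
    query = Counter(facts.lower().split())
    scored = [(cosine_similarity(query, Counter(text.lower().split())), name)
              for name, text in database.items()]
    return [name for _, name in sorted(scored, reverse=True)]

database = {  # hypothetical one-line case summaries
    "Case A": "dog escaped fence and bit neighbour negligence damages",
    "Case B": "contract for sale of goods breached by late delivery",
}
print(rank_cases("neighbour sues over dog bite negligence", database))
# -> ['Case A', 'Case B']
```

Retrieval of this kind is the easy part; distilling principles from the retrieved cases and arguing from them is the step that remains out of reach.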

The general virtues of AI lie in its efficiency. Human exhaustion and certain cognitive biases are escaped,[3] potentially reducing weeks of cursory research and evidence gathering to mere minutes using techniques like sentiment analysis.[4] However, these tools struggle in detecting subtle cues such as ambiguous or niche language, reducing the quality of analysis.
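To see why such tools are fast but brittle, here is sentiment analysis in its simplest, lexicon-based form. The word lists are invented for illustration; production systems use trained models, but they inherit the same difficulty with subtle cues:

```python
# Lexicon-based sentiment scoring: +1 per positive word, -1 per
# negative word. The lexicons are toy examples, not a real resource.

POSITIVE = {"agree", "satisfied", "honest", "credible"}
NEGATIVE = {"dispute", "refused", "misleading", "hostile"}

def sentiment(text):
    """Net sentiment score of a passage."""
    return sum((word in POSITIVE) - (word in NEGATIVE)
               for word in text.lower().split())

print(sentiment("the witness was honest and credible"))  # 2
print(sentiment("the hostile party refused to settle"))  # -2
print(sentiment("the witness was not credible"))         # 1
```

The last line is the paragraph’s caveat in action: the scorer cannot see that “not” reverses “credible”, so ambiguous or niche language quietly degrades the analysis.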

AI also risks inheriting our human biases in the form of ‘algorithmic bias’,[5] where socioeconomic inequalities in the training data are imported into the model.[6] Women, for example, represent just 11% of Senior Counsel in NSW,[7] meaning the rhetoric employed by men will have an outsized influence on the AI’s own. Conversely, by approaching model training with diversity in mind, we can make a conscious effort to combat these stereotypes by selecting more equitable samples.
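The mechanism is easy to demonstrate. In this deliberately crude sketch the sample mirrors the 11% figure above, and a predictor that simply maximises accuracy on that sample learns to erase the minority class altogether; real models fail in subtler but analogous ways:

```python
# Majority-class "model": trained only to be accurate on a skewed
# sample, it predicts the majority label for every future case.
from collections import Counter

training_sample = ["man"] * 89 + ["woman"] * 11  # mirrors the 11% figure

def fit_majority_predictor(sample):
    """Learn the most frequent label and always predict it."""
    majority_label = Counter(sample).most_common(1)[0][0]
    return lambda _case: majority_label

predict = fit_majority_predictor(training_sample)
print(predict("any future silk"))  # -> man (89% "accurate", wholly biased)
```

Rebalancing the training sample, as the paragraph suggests, is the standard first remedy for exactly this failure mode.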

The ethics of it all

According to the Uniform Laws,[8] a barrister owes a paramount duty to the court and the administration of justice. However, an AI barrister may undermine respect for this duty because rules are difficult to enforce against a non-sentient actor. If an AI misbehaves, the fault is untraceable to an individual due to its organic development. This ambiguity is multiplied when an identical AI program is installed across multiple devices and users; must all copies be modified or destroyed because of a few bad apples?

The fact that AI barristers can be replicated at minimal cost suggests the resolution of certain access to justice problems for vulnerable community members. Conversely, low-cost internet services often monetise user data by sharing it with third parties, and the data exchanged between clients and lawyers is often the most revealing kind, raising significant privacy concerns. Quite separately, current confidentiality standards are directly counterposed to the data-driven way in which AI learns; its skill in drawing inferences from evidence stagnates if it cannot use privileged data as its learning material. To protect clients’ interests consistently with the barrister’s ethical duties demands significant regulation of such AI legal services.

The proliferation of AI could also strip justice of an intrinsically human element. While AI offers efficient analysis of law and evidence, the human barrister retains the ultimate and more reliable capacity for judgement, which draws on empathy, creativity and experience.[9] Imagine an immigration matter with a non-English-speaking applicant. An AI device may have machine-learned case law analysis standing in for experience, but judgement also demands empathy. The appearance of justice is arguably as crucial as its execution.[10] The parties must feel included, and it is the barrister who helps the applicant understand the submissions, thereby facilitating access to justice.

Taking a long view

For better or worse, human norms are responsible for what a ‘barrister’ looks like. While our intrepid AI barrister may eventually create a sea-change in this perception, it will initially compete alongside us. Humans display a well-observed scepticism towards imperfect imitations – the ‘uncanny valley’ effect.[11] Even amongst judges, Richard Susskind identifies a historically conditioned trust in traditional hearings over technologically enabled ones, which may obstruct AI-human parity for years to come.[12] Nevertheless, watershed moments have created seismic shifts in public perception before, like when IBM’s Deep Blue computer defeated Garry Kasparov in chess.[13] Equally, a unanimous High Court judgment in favour of an AI-staffed Appellant could breathe change into our baroque chambers.

Our hypothesis is that the AI barrister will begin its career in an assistive capacity, preparing draft submissions and combing through evidence. As the models for speech synthesis improve, it will develop a more autonomous practice, culminating in polished courtroom advocacy. It is too early to tell whether the AI barrister is ‘to be or not to be’, but it is certainly more than the shadow it casts. Its strengths lie in its efficiencies: wielding the light to perform legal research and case management faster than ever before, with endless programming possibilities for mastering coding languages and recognising diverse characteristics. Yet given the complexity of encoding the law, the demands of privacy, and the need to preserve human advocacy, challenges remain before the AI can ‘play its many parts’ on the grand ‘stage’ of the Court.

Endnotes

[1] Hon Marilyn Warren AC, ‘The Duty Owed to the Court – Sometimes Forgotten’ (Speech, Judicial Conference of Australia Colloquium, Melbourne, 9 October 2009).

[2] Steven Bozinovski, ‘Teaching space: A representation concept for adaptive pattern classification’ (1981) 81(28) COINS Technical Report.

[3] Michael Legg and Felicity Bell, ‘Artificial Intelligence and the Legal Profession: Becoming the AI-Enhanced Lawyer’ (2019) 38(2) University of Tasmania Law Review 34.

[4] As I describe this to the embattled barrister sitting across from me, his eyes sparkle.

[5] Lisa Toohey, Monique Moore, Katelane Dart and Dan Toohey, ‘Meeting the Access to Civil Justice Challenge: Digital Inclusion, Algorithmic Justice, and Human-Centred Design’ (2019) 19 Macquarie Law Journal 133, 148.

[6] For an example of this in popular culture, see: Davey Alba, “It’s Your Fault Microsoft’s Teen AI Turned Into Such a Jerk”, Wired (Webpage, 2016).

[7] NSW Government Equitable Briefing Report, 2018-2019 Financial Year: NSW Government Equitable Briefing Policy for Women Barristers, (Report No 1, 2018-2019) 4.

[8] Legal Profession Uniform Conduct (Barristers) Rules 2015 (NSW) r 4.

[9] Legal Profession Uniform Conduct (Barristers) Rules 2015 (NSW) r 4.

[10] Rex v Sussex Justices [1924] 1 KB 256.

[11] Maya B Mathur and David B Reichling, ‘Navigating a Social World with Robot Partners: A Quantitative Cartography of the Uncanny Valley’ (2016) 146 Cognition 22.

[12] Richard Susskind, ‘Online Courts and the Future of Justice’ (Oxford University Press, 2019) 206-7.

[13] ‘Game Over: Kasparov And The Machine’ (ThinkFilm, 2003).


Could an Artificially Intelligent Device Practise as a Barrister in NSW?


By Jarrod Li, Stephanie Tong, and Bob Chen (UNSW)

I INTRODUCTION

Artificial Intelligence (‘AI’) has undoubtedly made great strides in the legal field over the past decade. Despite these technical breakthroughs, debate continues to surround the claim that AI can adequately replicate the various roles of counsel in Australia. A barrister’s skill does not lie solely in their mastery of crafting submissions and traversing law reports–instead, their role ranges anywhere from advocate to advisor to perhaps mediator on occasion.[1] Additionally, NSW barristers must observe a set of statutorily defined prescriptions, which embody their independence, professionalism, impartiality, and overriding duty to the administration of justice.[2] We argue that, at least presently, an AI device cannot receive a “call to the bar” because technological and cultural challenges prevent it from adequately embodying a barrister’s roles and values as advocates and advisors.[3]

II ADVOCACY

From a technical perspective, a barrister’s role as an advocate can feasibly be modelled as a product of natural language processing (‘NLP’) techniques, with data sourced from transcripts of oral arguments, submissions, and extralegal literature. In essence, it would be possible for an AI to model the behaviour of an advocate after “training” or “learning” from data that is representative of the typical legal activity of a barrister.[4] However, the use of NLP in advocacy is problematic, because the quality of an AI system is heavily correlated with the quality of its training data. It necessarily follows that AI works best on structured and routine tasks where large amounts of high-quality, consistent data are available. Accordingly, there has been demonstrated use of AI in the legal field in precisely these areas–see, e.g., document review and discovery.[5] Advocacy, by contrast, is far less structured and routine, raising a number of issues.

First, the law is rapidly evolving in response to technological developments, as is clear from its frequent application to novel fields and modern industries–consider recent developments in intellectual property,[6] cybersecurity,[7] and defamation,[8] to name only a few. It is practically impossible for an AI device to advocate in these areas, as pre-existing data is lacking.[9] A similar issue arises from the amount of unstructured and unpredictable human interaction that occurs during advocacy.[10] Accordingly, those who foresee AI as a replacement for counsel are sorely misled, given current AI models are unable to ingest data at the frequency and breadth demanded of both human barristers and legal institutions.[11]

Second, assuming a sufficiently large machine learning model were to exist, the quality of its output is largely dependent on the integrity of its training data. More precisely, ingesting data marked with human biases can instil discrimination into the AI system.[12] This is a well-recorded phenomenon, and often highlights otherwise unknown human biases–e.g., the Amazon recruitment AI trained on existing workplace data highlighted systemic gender bias within the company after continuously rejecting female applicants.[13] This problem is of particular concern within the law, given the well documented cognitive biases that exist in judicial decision-making and the legal profession at large.[14] Consider, for example, the well documented racial biases that exist in assessing witness credibility.[15]

III ADVISORY

Barristers also play a significant advisory role. Without oversimplifying, this role encompasses the ability to lead litigation with the best interests of their clients in mind.[16] One aspect inevitably involves advising whether clients have a reasonable prospect of success, an area in which AI has proved notably accurate at predicting litigation outcomes. For example, “software such as Ravel Law and Lex Machina have collected and analysed massive amounts of data on judges and their decisions, producing data-driven statistical model[s] … that are often more accurate than human prediction”.[17] However, this is only one aspect of a barrister’s advisory role.

Other aspects of the advisory role are structurally underpinned by the human experience. For example, a barrister’s ability to formulate legal opinions is key. These opinions involve determining which facts should be given weight, which details are relevant, and which opinions are credible, all of which depend on instinct derived from experience in the field. It is the wide-ranging variety of these perspectives that prompts courts to frequently acknowledge that “two reasonable minds may differ” on the same issue.[18] Additionally, when providing advice, barristers must possess the capacity to establish close relationships with their clients. It has been readily demonstrated that ‘relational skills were preferred over legal skills’[19] by clients when seeking representation, and forming strong relationships is key to achieving better legal outcomes. Accordingly, it is clear that the advisory role of barristers unavoidably relies on human experiences that must be modelled in any “AI barrister” for it to adequately replace a human. This is problematic, as it is not yet technically feasible to accurately model the inherently complex and discursive aspects of human experience in AI. Simply put, attempts to model human emotions, biases, and perspectives suffer from the “curse of dimensionality”.[20] While AI can be reasonably trained on a single skill, each additional skill that needs to be modelled adds dimensions of complexity. As there are many skills required to model the human experience, this quickly becomes inefficient and impractical.[21]
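The arithmetic behind the “curse” is stark. Assume, purely for illustration, that covering one skill to a useful resolution requires ten representative examples; covering d skills jointly then requires on the order of 10^d:

```python
# Back-of-envelope cost of modelling several skills jointly: if one
# skill needs k representative examples, d skills need about k ** d,
# because every combination of skill settings must be covered.

def samples_needed(points_per_skill, num_skills):
    """Exponential growth of joint coverage with dimensionality."""
    return points_per_skill ** num_skills

for skills in (1, 2, 5, 10):
    print(skills, "skills ->", samples_needed(10, skills), "examples")
```

At this (hypothetical) resolution, one skill needs 10 examples while ten skills already need ten billion, which is why modelling the many interlocking skills of human experience quickly becomes impractical.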

IV CONCLUSION

In conclusion, the current state of AI is unable to replicate the complexity of a barrister’s role in the NSW legal system. We therefore argue that AI devices cannot practise as barristers in NSW. Although recent developments have seen AI continue to accelerate in speed and capability, many fundamental issues persist in preventing the systemic integration of AI into our legal system. We expect the next 5-10 years to be characterised by massive leaps in algorithmic and computational legal development, especially as collaboration continues to develop between the legal and computing disciplines. Moving forward, the evolution of an “AI barrister” will inevitably necessitate broad inquiries into the data, reasoning processes, biases, and higher-level thinking skills underlying each developed solution.

Endnotes

[1] New South Wales Bar Association, ‘Using barristers’, What is a barrister? (Web Page, 14 September 2022).

[2] Legal Profession Uniform Conduct (Barristers) Rules 2015 (NSW) rr 3-4 (‘Barristers Rules’).

[3] For previous work on this issue, see, eg, Geoffrey Nettle, ‘Technology and the law’ (2017) 13 Judicial Review 185; Tania Sourdin, ‘Judge v Robot? Artificial Intelligence and Judicial Decision-Making’ (2018) 41(4) University of New South Wales Law Journal 1114.

[4] Michael Legg and Felicity Bell, ‘Artificial Intelligence and the Legal Profession: Becoming the AI-Enhanced Lawyer’ (2019) 38(2) University of Tasmania Law Review 34, 41.

[5] Dana Remus and Frank Levy, ‘Can Robots be Lawyers? Computers, Lawyers and the Practice of Law’ (2017) 30(3) Georgetown Journal of Legal Ethics 501, 504.

[6] Thaler v Commissioner of Patents [2021] FCA 879.

[7] Australian Securities and Investments Commission v RI Advice Group Pty Ltd (2022) 160 ACSR 204.

[8] Google LLC v Defteros [2022] HCA 27.

[9] Remus and Levy (n 5) 526.

[10] Ibid 514.

[11] Ted Goertzel, ‘The path to more general artificial intelligence’ (2014) 26(3) Journal of Experimental & Theoretical Artificial Intelligence 333, 351.

[12] Felicity Bell et al., ‘AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators’ (Report, 2022) 13.

[13] Ibid.

[14] See generally, Australian Law Reform Commission, Without Fear or Favour: Judicial Impartiality and the Law on Bias (Report No 138, 2022).

[15] See, eg, Andrew Elliot Carpenter, ‘Chambers v. Mississippi: The Hearsay Rule and Racial Evaluations of Credibility’ (2002) 8(1) Washington and Lee Journal of Civil Rights and Social Justice 15.

[16] Barristers Rules (n 2) r 35.

[17] See Xiao Liu et al., ‘Everything Has a Cause: Leveraging Causal Inference in Legal Text Analysis’ (Conference Paper, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, June 2021).

[18] For typical usage in a variety of contexts, see, eg, Betfair Pty Ltd v Racing New South Wales (2010) 189 FCR 356, [90]; Australian Broadcasting Corporation v SAWA Pty Ltd [2018] WASCA 29, [94]; R v Campbell [2005] VSCA 225, [45].

[19] Marcus T Boccaccini and Stanley L Brodsky, ‘Attorney-Client Trust among Convicted Criminal Defendants: Preliminary Examination of the Attorney-Client Trust Scale’ (2002) 20(1) Behavioral Sciences & the Law 69.

[20] See Francis Bach, ‘Breaking the Curse of Dimensionality with Convex Neural Networks’ (2017) 18(1) Journal of Machine Learning Research 1, 1-2.

[21] See generally, Effat Jalaeian Zaferani, Mohammad Teshnehlab and Mansour Vali, ‘Automatic Personality Traits Perception Using Asymmetric Auto-encoder’ (2021) 9 IEEE Access 68596.
