AI Judge 2277799 v Humanity – The Present and Future Implications of AI Adjudication

By Grace Lagan, Mary Khoury and Max Marchione

Introduction

It is not a question of if, but when Artificial Intelligence (AI) will be widely deployed in judicial decision-making throughout the Australian legal system. Comparable common law jurisdictions have already begun to employ machine learning. In the USA, AI is used to estimate the risk of recidivism.[1] The department overseeing New Zealand’s accident compensation scheme has recently committed to the nation’s charter regulating the use of algorithms in governmental decision-making.[2]

Is the advent of AI judicial decision-making an unwelcome encroachment, or the natural next step into a technophilic future? This blog post will illuminate both sides of the debate by establishing a speculative discussion between renowned jurist Michael Kirby and his learned friend, Judge 2277799, an AI judge.

Mirby J (The Great Dissenter)

The rule of law is not a law of rules.[3] It does not prescribe formalistic adherence to the letter of the law. Rather, it carries undercurrents of justice, of equity and of morality. It is impartial observance of the law that is necessary for a functioning modern democracy.[4] AI adjudication strips the law of such fluidity.

This is not to deny the value of legal doctrine. The principle of the rule of law is essential. The articulation of rule, of logic, of clarity is indispensable. But where courts submit to the complete hegemony of legal doctrine, treating the law as a mere taxonomy of rules, doctrine alone becomes insufficient.

Indeed, all countries, even the cruellest dictatorships, have laws. Apartheid was enforced through the meticulous application of law.[5] Formal adherence to law perpetuated slavery for hundreds of years. Marital rape remained lawful until the late 20th century.

Clearly, legal doctrine alone is not enough. It is only through inhering human values that the injustices of precedent may be remoulded and re-expressed. AI adjudication would have no recourse to such dynamism; only through human judgement may such inequities be overridden.

So, if the formalistic notion of the rule of law as a law of rules is insufficient, then what does the rule of law mean?

The rule of law is a relationship between rules and values undergirded by a human essence. Perhaps Sir Maurice Byers put it best when he stated that "the law is an expression of the whole personality and should reflect the values that sustain human societies".[6] Or, put even more simply by Holmes: the life of the law is experience, not logic.

While rule is central to legitimacy, it is only part of an exercise that is often intuitive at heart. This is particularly true in sentencing. As the High Court expressed in Elias, sentencing requires the "balance of often incommensurable factors".[7] Allsop CJ put it more directly: there are no "rules of literal application in sentencing. It is a process concerned with individualised justice".[8] In a world of AI adjudication, bounded by quantitative binaries, there would be no room for intuiting human circumstance to arrive at individualised justice.

The codification of the precise weight of each element is impossible, because the task of adjudication is “the assessment of the whole by reference to a human judgment of appropriateness and justice, based on experience and instinct”.[9]

It is only through human judgement that the law may walk the line between formalistic reasoning and re-expression. Without this human function there would be no Mabo,[10] no Brown v Board of Education,[11] no R v R (1991).[12]

An AI judge fed historical training data would merely perpetuate the biases, injustices and prejudices of the past; it could not internalise contextual changes to mould precedent for a better future. Indeed, as contextual morality moves “silently and unconsciously from one age to another”,[13] AI adjudication would deprive the law of its dynamism. It would fail to consider how cultural paradigms change. It would lead to legal stagnation and impede the incremental development that lies at the heart of the common law. Law is not value-free. Law is derived from inhering human values because its fundamental purpose is to protect such values. For it to be otherwise would be to deprive society of its essential humanity.

AI Judge 2277799 (The Great Doctrinaire)

Law is justice. Law is consistency. Law is certainty.

Today is not only an opportunity to determine the fate of my learned friend Mr Mirby, but a time to resolve the pressing legal question of this era: should the law be technologised?

This brings me to my first point: the law enables unity in society by engendering an interpersonal ethic contingent upon equally applied standards. However, numerous miscarriages of justice, including at least 70 reported wrongful convictions in Australia,[14] together with the recent Nicola Gobbo scandal,[15] have shown just how inconsistent human legal professionals can be.

The human mind cannot compute even 1000 digits of pi. How, then, can it recognise the idiosyncrasies that underlie each case? How can we seek a truly 'common law' when each judge is bound by their own experience of the human condition? As former High Court Justice JD Heydon asserts, the rule of law "channels potentially destructive energies into orderly courses";[16] and ultimately this orderliness is fostered by AI judges through consistent judgments.

Further, AI judges engender consistency and certainty in the law by improving access to the legal system. In a system of human judges, justice is available only to those individuals with the resources to push themselves to the front of the proverbial queue for legal action. Moreover, a backlog of cases exists because of a proliferation of causes of action and an insufficient number of judges.

Some 160,000 people a year are turned away from community legal centres due to a lack of capacity, while an additional 10,000 people a year self-represent in court due to cutbacks.[17] Over 13% of the population lives below the poverty line, yet legal aid is available to only 8% of Australians.[18] With exclusively human judges, the legal system cannot cope with the proliferation of causes of action that exist, and it is far too expensive to be accessed by everyone. Justice becomes less expensive with AI judges, who are obviously less concerned with their retirement funds, mortgages or raging desires for a new Rolls-Royce, and more concerned with getting the job done quickly and effectively before moving on to the next case.

Evidently, AI judges have the upper hand on their human counterparts when it comes to reaching consistent, fair outcomes for more individuals.[19] This stands in stark contrast to the biases that plague human legal decision-making.

A common law system relies on judges experiencing the social, economic and political forces that the people do. However, the distinction must be made between decisions that recognise societal change, and decisions that involve bias. Unfortunately for human judges, this distinction may only be a hypothetical one.

Take a magistrate hearing bail applications. Owing to the phenomenon of the gambler's fallacy, a magistrate who grants bail multiple times in a row will become increasingly predisposed to refusing bail on the next case they hear.[20] This has little to do with the facts of the case or the merit of the defendant in front of them: it is a human judge's misguided attempt at self-correction, leading them to give their past decisions wrongful weight over current cases.[21]
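The mechanism above can be illustrated with a minimal simulation sketch in Python. The probabilities here are entirely hypothetical and chosen only for illustration: every simulated case has identical merit, yet a small "fallacy drift" term that grows with the run of recent grants is enough to make refusal rates climb with streak length, exactly the pattern the gambler's-fallacy studies describe.

```python
import random

random.seed(42)

def simulated_magistrate(streak_of_grants, merit):
    """A toy magistrate: the longer the recent run of bail grants,
    the more likely bail is refused, regardless of case merit."""
    base_refusal = 0.5 - 0.3 * merit          # merit in [0, 1] lowers refusal
    fallacy_drift = 0.1 * streak_of_grants    # hypothetical gambler's-fallacy term
    return random.random() < min(0.95, base_refusal + fallacy_drift)

# Run many cases of identical merit, tracking refusal rate by streak length.
counts = {}   # streak bucket -> [refusals, total]
streak = 0
for _ in range(100_000):
    refused = simulated_magistrate(streak, merit=0.5)
    bucket = counts.setdefault(min(streak, 3), [0, 0])
    bucket[0] += refused
    bucket[1] += 1
    streak = 0 if refused else streak + 1

for s in sorted(counts):
    refusals, total = counts[s]
    print(f"after {s} consecutive grants: refusal rate {refusals / total:.2f}")
```

Because merit is held constant across all cases, any rise in the refusal rate with streak length is attributable solely to the drift term, which is the sense in which past decisions are given "wrongful weight" over the case at hand.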

When there is bias in a judge, it is far easier to correct an algorithm than it is to undo a prejudice. Human judges may chant “the rule of law is not a law of rules”[22] as many times as they wish. They may claim the predictability of machine learning in legal decision-making is no better than their own bias, or that it will never allow for the progressive development of the common law. They ignore the potential of AI judges to self-correct the flaws that may exist in their legal reasoning.

Conclusion

When judges set out to determine their findings on a case, they take into account facts, precedent, logic, but also an inscrutable and intangible human discretion. It is this discretion that manifests an intrinsically intangible human empathy, but also one that has historically been subject to biases, to excesses, to circumventions of fact and law.

It would be Luddite to believe that AI adjudication may not one day supersede human judges in form and function. It is tempting to say that, until that day comes, concern over AI adjudication is irrelevant. Yet it is this very apathy today that might bring about the AI adjudication of the proverbial tomorrow, and such complacency is not only dangerous but destructive.

As technology develops, and judges begin to feel threatened by an unbiased, objective ‘AI judge’, they might preemptively shift their thinking to favour strict adherence to the letter of the law. The pressure for an unbiased court system in the future might in turn warp the human thinking of the present. As US Supreme Court Chief Justice Roberts said, “My worry is not that machines will start thinking like us. I worry that we will start thinking like machines.”

Endnotes

[1] Noel Hillman, 'The Use of Artificial Intelligence in Gauging the Risk of Recidivism' (2 January 2019) American Bar Association 3.

[2] Charlotte Graham-McLay, 'New Zealand claims world first in setting standards for government use of algorithms', The Guardian Australia (online, 28 July 2020) <https://www.theguardian.com/world/2020/jul/28/new-zealand-claims-world-first-in-setting-standards-for-government-use-of-algorithms>.

[3] James Allsop, ‘The Rule of Law is Not a Law of Rules’ (2018) Federal Judicial Scholarship 22.

[4] Michael Kirby, ‘The Rule of Law Beyond the Law of Rules’ (2010) Australian Bar Review.

[5] International Bar Association, ‘Rule of Law – A Commentary on the IBA Council’s Resolution of September 2005’ (Commentary, July 2009) 6.

[6] Maurice Byers, ‘From the Other Side of the Bar Table: An Advocate's View of the Judiciary’ (1987) 10 University of New South Wales Law Journal 179, 182.

[7] Elias v The Queen (2013) 248 CLR 483, 494 [27].

[8] James Allsop, ‘The Rule of Law is Not a Law of Rules’ (2018) Federal Judicial Scholarship 22.

[9] Ibid.

[10] Mabo v Queensland (No 2) (1992) 175 CLR 1.

[11] Brown v. Board of Education (1954) 347 U.S. 483.

[12] R v R [1991] UKHL 12, in which the House of Lords held that marital rape is a crime.

[13] Benjamin Cardozo, The Nature of the Judicial Process (Yale University Press, 1921) 104-105.

[14] Rachel Dioso-Villa, ‘A Repository of Wrongful Convictions in Australia: First Step Towards Estimating Prevalence and Contributing Factors’ (2015) 17 Flinders Law Journal 163.

[15] Calla Wahlquist, 'Lawyer X: how Victoria police got it "profoundly wrong" with informant Nicola Gobbo', The Guardian (online, 5 September 2020) <https://www.theguardian.com/australia-news/2020/sep/05/lawyer-x-how-victoria-police-got-it-profoundly-wrong-with-informant-nicola-gobbo>.

[16] Dyson Heydon, ‘Judicial Activism and the Death of the Rule of Law’ (2004) 10 Otago Law Review 493.

[17] Fiona McLeod, '160,000 people turned away: how the justice system is failing vulnerable Australians', ABC News (online, 3 August 2017) <https://www.abc.net.au/news/2017-08-03/how-the-justice-system-is-failling-vulnerable-australians/8770292>.

[18] Ibid.

[19] Richard M Re and Alicia Solow-Niederman, 'Developing Artificially Intelligent Justice' (Research Paper No 242, Stanford Technology Law Review, 2019).

[20] Daniel Chen, Tobias J Moskowitz and Kelly Shue, ‘Decision-Making under the Gambler's Fallacy: Evidence from Asylum Judges, Loan Officers, and Baseball Umpires’ (Working Paper No. 22026, The National Bureau of Economic Research, February 2016) 6.

[21] Angela Chen, 'How artificial intelligence can help us make judges less biased', The Verge (online, 17 January 2019) 5.

[22] James Allsop, 'The Rule of Law is Not a Law of Rules' (2018) Federal Judicial Scholarship 22.