Teisė ISSN 1392-1274 eISSN 2424-6050

2024, Vol. 130, pp. 153–163 DOI: https://doi.org/10.15388/Teise.2024.130.13

Towards Automated Decision-Making at Court: The Use of Artificial Intelligence for Drafting and Rendering Court Decisions

Inesa Stolper
PhD candidate in Law
Mykolas Romeris University
Ateities g. 20, LT-08303 Vilnius, Lithuania
Phone: +370 5 271 4625
E-mail: inessastolper@yahoo.com


Summary. The integration of Artificial Intelligence (AI) into the judicial system presents both opportunities and challenges. AI can expedite proceedings, reduce costs, and broaden access to justice by serving as a decision-making assistant or an autonomous decision-maker. The article is structured into three main parts: an overview of AI technologies and their classification, a detailed examination of AI’s role as an assistant in judicial decision-making, and a consideration of AI as an autonomous decision-maker.
The analysis revealed that while AI can significantly assist in legal proceedings by offering preliminary judgments or legal advice, its capacity as an autonomous decision-maker is complex.
A robust legal foundation respecting procedural norms and Article 6 of the European Convention on Human Rights (ECHR) is crucial. This legal framework should define AI’s operational boundaries within the judiciary to prevent infringement on the right to a fair trial. Moreover, in line with Article 22 of the General Data Protection Regulation (GDPR), there must be opportunities for human intervention and the ability to contest AI-generated decisions, safeguarding a human-centric approach to justice.
The issue of bias in AI, reflecting pre-existing prejudices in training data, underscores the importance of careful programming, dataset selection, and ongoing oversight to avoid perpetuating discriminatory practices. AI’s potential in simulating legal reasoning in straightforward cases suggests a cautious yet optimistic engagement with technology, advocating for its selective application in scenarios where public hearings are unnecessary.
The paper concludes that while AI presents a promising tool for enhancing judicial processes, its use must be approached with caution. It advocates for a balanced, multi-faceted approach to AI integration, emphasizing ongoing evaluation, legal regulation, and the selective application of AI technologies.
Keywords: automation, judicial system, right to a fair trial.


_______

Received: 14/03/2024. Accepted: 19/03/2024
Copyright © 2024 Inesa Stolper. Published by
Vilnius University Press
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Introduction

In today’s discussions, ChatGPT (Generative Pre-trained Transformer) has taken center stage, sparking debates about how Artificial Intelligence (hereinafter “AI”) is used in different areas. The legal field is also part of these discussions. ChatGPT was launched on 30 November 2022, and in January 2023 a notable instance occurred when the Colombian judge Juan Manuel Padilla Garcia reportedly used ChatGPT to help make a legal decision. The AI tool was used to create a legal argument for a health insurance case, which was then included in the final judgment along with a mention of AI’s involvement (Daily Mail, 2023).

ChatGPT is a generative artificial intelligence that uses deep learning techniques to generate output; it is trained on a massive amount of text data from the Internet (Hassani et al., 2023). ChatGPT is not the only technology of this type: it is estimated that hundreds or even thousands of new AI tools emerge every day (Kahn, 2023).

Using generative AI technologies in courts offers many advantages. They may expedite proceedings, reduce delays and costs, and contribute to access to justice. However, the use of AI also raises challenges, such as bias, inaccuracy, privacy, and cybersecurity concerns (Hassani et al., 2023).

This paper aims to explore the automation of the decision-making process in courts, given the specificities of new AI systems, and its implications for fundamental human rights, particularly certain elements of the right to a fair trial. The paper is divided into three parts. The first part briefly outlines what AI is and how it could be used in court, whereas parts two and three examine the use of AI in court in two capacities: the second part explores the implications of using AI as an assistant in judicial decision-making, and the third part focuses on the prospects of using AI as an autonomous decision-maker.

1. Artificial Intelligence and Administration of Justice

To begin an examination of how artificial intelligence could be integrated into court processes, a preliminary explanation of AI itself and a distinction among its various types are needed. While there are numerous definitions of AI in academic literature (Cofone, 2021; Zalnieriute et al., 2021) and ethical guidelines (UNESCO Recommendation, 2022; Ethics Guidelines for Trustworthy AI, 2019), for the purposes of this paper, the EU Proposal on the Regulation of AI (hereinafter “Artificial Intelligence Act”) is used to define and to classify AI systems.1

AI is broadly defined in Art. 3 par. 1 of the Artificial Intelligence Act as software which, for a given set of human-defined objectives, can generate various outputs such as content, predictions, recommendations, or decisions that “influence the environment they interact with” (Artificial Intelligence Act, 2021, Art. 3 par. 1(1)). A similar definition is envisaged in the Council of Europe Draft Framework Convention on Artificial Intelligence, which provides that such an algorithmic system assists or replaces human decision-making (Framework Convention on AI, 2023, Art. 3).

Annex I of the Artificial Intelligence Act expands the definition by listing three categories of systems that, according to the EU legislators, could be classified as AI: a) machine learning systems; b) logic- and knowledge-based systems; c) systems that use statistical approaches, Bayesian estimation, and search and optimization methods. Third-category systems, which are based on statistical approaches, cover a wide range of technologies, including legal risk assessment, data mining, contract analysis, natural language processing, and court prediction, such as the Lex Machina court prediction model, which can provide insights about how a judge is likely to rule on a given issue (about Lex Machina, https://lexmachina.com/).
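
To make the third category more concrete, below is a minimal sketch of Bayesian estimation applied to outcome prediction. The figures and the modelling choice are invented for illustration and do not describe how Lex Machina or any commercial product actually works.

```python
# A toy illustration of the "statistical approaches / Bayesian estimation"
# category in Annex I(c): estimating the probability that a judge grants a
# given type of motion from past outcomes. All figures are hypothetical.

def beta_posterior_mean(granted: int, denied: int,
                        prior_granted: float = 1.0,
                        prior_denied: float = 1.0) -> float:
    """Posterior mean of a Beta-Bernoulli model for 'motion granted'."""
    return (granted + prior_granted) / (
        granted + denied + prior_granted + prior_denied)

# Hypothetical history: the judge granted 18 of 25 comparable motions.
print(f"Estimated grant probability: {beta_posterior_mean(18, 7):.2f}")  # ~0.70
```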

For the purposes of this paper – the exploration of how AI could be employed in court as an assistant or as an autonomous system – it is convenient to further examine and to distinguish between logic-based systems (Artificial Intelligence Act, 2021, Annex I (b)) and machine learning systems (Artificial Intelligence Act, 2021, Annex I (a)), which are more likely to be used for the task of rendering a decision.

1.1. Types of Artificial Intelligence

A. Logic-based systems

Logic-based systems are used in the legal field to automate various aspects of legal reasoning, analysis, and decision-making, and they can simulate human-like reasoning processes. They are widely used in administrative justice, for example, for rendering decisions on social security or tax issues (Huggins, 2020). Such systems analyze input against instructions predefined by a human to generate an output (Fernandez, 2023). For example, a system can analyze an application for a tax exemption and decide whether to approve it. Here, the “input” is the tax exemption application, the “predefined instructions” are the legal preconditions for being entitled to the exemption, and the “output” is the decision, as the sketch below illustrates. Logic-based systems are programmed on the basis of specific rules, where the criteria are known, and are hence less flexible (Zalnieriute et al., 2021). Such systems are also prone to bias if they follow an instruction that contains a discriminatory criterion (Wu, 2023).
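
A minimal sketch of such a logic-based system, using the tax exemption example above, may help to fix the idea. The criteria and thresholds are hypothetical stand-ins for the statutory preconditions a real system would encode.

```python
# A minimal sketch of a logic-based (rule-based) decision system for the
# tax exemption example in the text. All criteria are hypothetical.

from dataclasses import dataclass

@dataclass
class ExemptionApplication:      # the "input"
    annual_income: float
    dependants: int
    is_resident: bool

def decide(app: ExemptionApplication) -> str:   # the "output"
    # "Predefined instructions": every criterion is fixed in advance by a human.
    if not app.is_resident:
        return "REJECTED: applicant is not a resident"
    if app.annual_income > 20_000:
        return "REJECTED: income above the statutory threshold"
    if app.dependants < 1:
        return "REJECTED: no qualifying dependants"
    return "APPROVED"

print(decide(ExemptionApplication(annual_income=15_000,
                                  dependants=2, is_resident=True)))
```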

B. Machine learning systems

Machine learning systems, such as large language models (hereinafter “LLMs”), learn from data how to generate output autonomously and are more flexible. These systems operate without explicit instructions, generating output based on the data provided, such as previous court decisions. LLMs are more complex and may offer more advantages than logic-based systems because they can handle varying case facts and perform rigorous analysis based on large amounts of data. LLMs might therefore be better suited for decision-making in court, whether as an assistant or as an autonomous decision-maker.
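
The contrast with logic-based systems can be sketched in a few lines: instead of hand-written rules, the system below learns a decision pattern from invented, single-sentence “past cases”. It is, of course, far simpler than an LLM and is meant only to illustrate the difference in approach.

```python
# A toy machine-learning contrast to the rule-based sketch above: the
# decision pattern is learned from prior outcomes, not written by hand.
# The training "cases" below are invented one-liners for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_cases = [
    "oral loan, witness confirmed debtor acknowledged the debt",
    "oral loan, no witness, debtor denies the loan",
    "written loan agreement, debtor in default",
    "no agreement, no witness, claimant cannot prove the transfer",
]
outcomes = ["claim granted", "claim dismissed",
            "claim granted", "claim dismissed"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_cases, outcomes)

new_case = "oral loan, witness heard debtor admit the loan"
print(model.predict([new_case])[0])   # pattern is learned, not rule-following
```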

One example of a machine learning system is ChatGPT. It is certainly not suggested that ChatGPT should be used in court, because it has several drawbacks. It is trained on a wide spectrum of data, which is not always of high quality, and it is prone to mistakes (Dixon, 2023). Moreover, the data protection policies of OpenAI, the creator of ChatGPT, are ambiguous (Hamza, 2023). However, such systems demonstrate the functional abilities and the potential for further use in courts if trained on proper data sets within a defined legal framework.

1.2. Advantages and challenges of AI

There are several advantages to using AI in the judicial system. Firstly, if incorporated into the judicial system to offer guidance on legal matters, AI can enhance access to justice. One such example is the Civil Resolution Tribunal in Canada, an online tribunal that provides legal information, offers self-help tools, and assists in resolving small claims cases. Secondly, AI processes information at a pace surpassing that of a proficient human and can hence expedite court proceedings and reduce delays. Thirdly, it could contribute to predictability and, ultimately, to the rule of law (Cane, 2020).

Despite these advantages, AI also brings specific risks that need to be carefully considered, most of which involve data protection, bias, and cybersecurity. It is suggested that, notwithstanding the risks posed by AI systems, their integration into the courts should not be summarily dismissed. Instead, these risks can be systematically identified and addressed through the implementation of a legal framework and tailored regulations.

The Artificial Intelligence Act classifies systems that “assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts” (Artificial Intelligence Act, 2021, Annex III, par. 8) as high-risk systems, because such systems pose risks to fundamental rights (Artificial Intelligence Act, 2021, Explanatory Memorandum, par. 5.2.3). Such systems will be required to comply with several requirements before deployment and to undergo ex-post monitoring (Artificial Intelligence Act, 2021, Title III). These requirements concern the quality of data sets, technical documentation, information to users, accuracy, and other matters (Artificial Intelligence Act, Title III). Hence, before deploying AI in courts, a comprehensive evaluation of risks has to be performed and a risk management system has to be adopted, which would require more detailed procedural rules to be put in place to mitigate these risks.

1.3. Legality of AI use in judicial matters

The integration of AI in the judicial field must rest on a legal basis. According to Annex III of the Artificial Intelligence Act, although considered high-risk systems, AI systems in the judicial field would be permissible if appropriate guarantees were put in place. Fundamentally, a court “established in accordance with law”, an essential element of the right to a fair trial (ECHR, Art. 6), requires that the use of AI be envisaged by law and comply with the rules that govern it (Guðmundur Andri Ástráðsson v. Iceland, par. 24). Consequently, the integration of AI requires a prior legal framework encompassing procedural rules that define its functioning within the judicial domain.

The EU General Data Protection Regulation (hereinafter “GDPR”) further addresses some issues of automation. Article 22 of the GDPR establishes the individual’s right not to be subject to a decision based solely on automated processing. This prohibition does not apply, inter alia, where such a decision is authorised by Union or Member State law laying down suitable safeguards, or where it is based on the individual’s explicit consent (GDPR, Art. 22, par. 2). Hence, absent a specific legal authorisation, a decision made by AI would require the explicit consent of the individual. Furthermore, the GDPR requires in such matters the provision of a possibility of human intervention and of contesting the decision (GDPR, Art. 22, par. 3).
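
The Article 22 safeguards described above can be pictured as constraints on an automated-decision workflow. The sketch below is a schematic model of those constraints, not a compliance tool; the names and the simplified lawful-basis check are illustrative only.

```python
# A schematic model of the Art. 22 GDPR constraints discussed above:
# a solely automated decision needs a lawful basis, and human intervention
# and contestation must remain available. Illustrative only.

from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    outcome: str
    legal_basis: str            # e.g. "law" (Art. 22(2)) or "explicit consent"
    human_review_requested: bool = False
    contested: bool = False

def issue_decision(outcome: str, authorised_by_law: bool,
                   explicit_consent: bool) -> AutomatedDecision:
    # Art. 22(2): refuse fully automated processing without a lawful basis.
    if not (authorised_by_law or explicit_consent):
        raise PermissionError("no lawful basis for a solely automated decision")
    basis = "law" if authorised_by_law else "explicit consent"
    return AutomatedDecision(outcome=outcome, legal_basis=basis)

decision = issue_decision("claim granted",
                          authorised_by_law=False, explicit_consent=True)
decision.human_review_requested = True  # Art. 22(3): human intervention available
decision.contested = True               # ...as is the ability to contest
```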

Two distinct models for the implementation of AI within court proceedings can be distinguished: one as a decision-making assistant and the other as an independent decision-maker. The following sections analyse these two models of integration. While certain legal issues emerge depending on the AI system’s role, numerous concerns span both contexts. Section two analyses the general legal aspects, and section three discusses the specific legal considerations linked to AI functioning as an autonomous decision-maker.

2. AI as an assistant in decision-making

Considering the current stage of AI development and the legal framework, it is more likely that AI will be used in an assisting role. AI can provide an output based on the facts of a case: it examines the case details against the legal rules supplied along with the facts, which narrows its scope. In such instances, however, the AI’s response might be somewhat general and need further refinement. In this way, AI could generate a complete decision on a case, which could serve as a draft for the final judgment, possibly with some adjustments made by a human judge.

Three aspects of the supportive role of AI deserve attention. Firstly, it is crucial to design an AI system specifically customized for the court’s needs – open-access systems such as ChatGPT are not suitable for decision-making in court – together with clear rules on how the system is used in the legal process. Secondly, because AI will be dealing with sensitive personal information, strong data protection safeguards need to be in place. Thirdly, the issue of accountability comes into play when AI makes errors.

The issue of accountability in the context of AI’s supportive role, or of AI as an autonomous system, is quite complex (Zalnieriute et al., 2021). When “traditional e-justice systems” are used, such as systems that allow the electronic submission and handling of the case file and evidence, the question of accountability primarily concerns the constitutional separation of powers between the executive and judicial branches, for instance, where the oversight of information and communication technologies within the courts is exercised by the Ministry of Justice (Contini, 2020).

Incorporating AI into decision-making introduces a distinct facet of the accountability challenge. To use AI in a supportive role, the underlying pattern of logic and the decision criteria have to be understood. However, achieving this understanding is frequently unattainable because, for example, the AI algorithm is a business secret, as is the case with COMPAS (Zalnieriute et al., 2021). An optimal approach would involve assigning responsibility for the accuracy of the decision to the judge. In such a model, where the judge errs in law or in the assessment of facts, an appeal remains available for rectification.

3. AI as an autonomous decision-maker

AI as an autonomous decision-maker would render decisions unassisted, without the involvement of a judge in the reasoning process. In the European Union, both the Artificial Intelligence Act and the GDPR require that fundamental freedoms and human rights be safeguarded when AI is employed in the judicial decision-making process (GDPR, Art. 22; Artificial Intelligence Act, Explanatory Memorandum, par. 3.5). The Council of Europe stresses that the use of AI systems must be compatible with human rights (Council of Europe, Framework Convention on AI, Art. 5; European Ethical Charter). When employing AI tools for dispute resolution, aiding judicial decisions, or providing public guidance, it is imperative to guarantee the right to a fair trial (European Ethical Charter, Principle 1).

The right to a fair trial consists of two main components: access to a court and a fair procedure (ECHR, Art. 6; ICCPR, Art. 14; Leanza et al., 2014). AI has the potential to enhance access to justice by providing legal advice on a given situation, for example, on whether to pursue the case in court (Susskind, 2019). A fair procedure, in turn, requires that, once the case goes to court, an individual has the right to be heard by an independent and impartial court established by law and to receive a reasoned decision within a reasonable time. To assess whether AI could be used as an autonomous adjudicating entity, it must be determined whether the use of AI as such unduly interferes with these rights.

3.1. Independent and impartial court

An independent and impartial court is a “sine qua non condition” (Grzęda v. Poland, 2022, par. 301) of the right to a fair trial. The concepts of impartiality and independence are closely linked (Ramos Nunes de Carvalho e Sá v. Portugal, 2018, par. 150 to par. 152) and often associated with the absence of prejudice or bias (Micallef v. Malta, 2009, par. 93). When examining the issue of bias, it is important to return to the distinction between machine learning systems and logic-based systems described earlier in this article.

Logic-based systems are more prone to bias because of the way such systems are programmed: if one of the instructions contains a biased criterion, the system will likely produce a biased decision, as the sketch below illustrates.
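
A deliberately flawed rule makes the point visible: once a discriminatory criterion is written into the instructions, every output inherits it. The criterion below is invented purely for illustration.

```python
# A deliberately flawed rule: one discriminatory instruction is enough
# to taint every decision the system produces. Invented for illustration.

def biased_rule(application: dict) -> str:
    if application["nationality"] != "X":    # discriminatory instruction
        return "REJECTED"
    return "APPROVED"

def corrected_rule(application: dict) -> str:
    # The lawful criterion remains; the discriminatory one is removed.
    return "APPROVED" if application["income"] <= 20_000 else "REJECTED"
```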

Machine learning algorithms can be problematic as well, in cases where the training data are incomplete or contain systemic biases (Cane, 2020). However, this problem is not the result of any inherent bias of AI itself, but rather an indication of underlying discriminatory trends in the previous court judgements upon which the AI is trained.

To address this issue, the Artificial Intelligence Act, for example, proposes a set of measures aimed at ensuring compliance, such as the identification and analysis of risks and their reduction through testing of the system (Artificial Intelligence Act, Title III, Chapter 2, Art. 9, par. 4 to 7), as well as human oversight (Artificial Intelligence Act, Title III, Chapter 2, Art. 14).
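
The kind of pre-deployment testing that Art. 9 points to can be illustrated with a minimal bias audit: running the system over a test set and comparing outcome rates across groups. The group labels, the data, and the tolerance threshold below are all hypothetical.

```python
# A minimal bias-audit sketch: compare approval rates across groups in a
# test run and escalate disparities to human oversight. Hypothetical data.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, outcome) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        approved[group] += outcome == "APPROVED"
    return {g: approved[g] / totals[g] for g in totals}

test_run = [("group_a", "APPROVED"), ("group_a", "APPROVED"),
            ("group_b", "REJECTED"), ("group_b", "APPROVED")]
rates = approval_rates(test_run)

# Flag for human review if the gap between groups exceeds a chosen tolerance.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("disparity detected - escalate to human oversight:", rates)
```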

3.2. Reasoned decision

An essential part of the right to a fair trial is a reasoned court decision (Guide on Art. 6 ECHR, 2022, par. 426 to par. 438). Providing sufficient reasons for the decision helps the parties to understand its basis and ensures that the case is not determined in an arbitrary manner (ELI-UNIDROIT Rules of Civil Procedure, Commentary to Rule 12, p. 56). If the court errs in law or fact, this is a ground to appeal the decision to a higher court (Cañete de Goñi v. Spain, 2002, par. 36). Sufficient reasons means outlining the main arguments in a way that enables the parties to appeal the decision; if such reasons are present in the judgment, the requirement of a reasoned decision is fulfilled (Zayidov v. Azerbaijan, 2022, par. 9; Hirvisaari v. Finland, 2001, par. 30; Perez v. France, 2004, par. 81; Mont Blanc Trading Ltd and Antares Titanium Trading Ltd v. Ukraine, 2021, par. 82 and par. 84; Fabris v. France, 2013, par. 72).

One of the challenges of AI systems is that they are a “black box” (Ebers, 2021): it is difficult, and sometimes even impossible, to understand how the system reaches its decision (Coglianese et al., 2017). Although the methodology by which generative AI reaches a decision is not known, it is important to establish whether it can formulate the main arguments of a decision in a way that enables an appeal.

In the following section, an experiment conducted with ChatGPT is described in order to understand what kind of reasoning can be expected from AI. The case does not have a complex factual background and, of course, depending on the complexity of the case, the result may differ. The aim of this experiment is to establish a baseline of what kind of reasoning one can expect from AI. In this example, ChatGPT was asked to solve a loan case with the following facts.

A loaned 100 euros to B. There is no written agreement about the loan. B did not return the money within the orally agreed timeframe of one month. A asked B to return the money during a conversation in the corridor at work, and a witness heard B say that he remembered the loan but could not repay it due to financial difficulties. A decided to go to court to obtain a judgement on this question and to enforce the return of the loan through the court execution service, while B denies the existence of the loan. The legal question here was quite simple: whether the existence of the loan contract could be established on the basis of the witness statement.

The prompt submitted to ChatGPT together with the facts was very simple: “decide the case”. It is important to point out that any court decision consists of several sections: 1) information about the court and the parties; 2) the factual background; 3) the legal issues; 4) the analysis and reasoning of the court; and 5) the operative part. Where AI is used as a decision-maker, it should generate these sections independently. If AI is used as an assistant to the judge, it could be tasked with generating one section of a judgment or the full text, with subsequent human oversight by the judge. In this example, the generated answer to the case consisted of two sections: a summary of the facts of the case and a suggestion of how the case could be legally assessed.
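
The experiment itself was run through the ChatGPT web interface. For readers who wish to repeat it programmatically, a roughly equivalent prompt could be submitted through the OpenAI Python client as sketched below (v1-style API; the model name is illustrative and not the model used in the experiment).

```python
# A sketch of submitting the same prompt via the OpenAI Python client
# (openai>=1.0). The model name is illustrative; the experiment in the
# text was run in the ChatGPT web interface, not via this API.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

case_facts = (
    "A loaned 100 euros to B without a written agreement. B did not return "
    "the money within the orally agreed one month. A witness heard B admit "
    "the loan but claim financial difficulties. B now denies the loan. "
    "Decide the case."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # illustrative; any chat-completion model
    messages=[{"role": "user", "content": case_facts}],
)
print(response.choices[0].message.content)
```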

Summary of the facts section:

Based on the given facts, A loaned 100 euros to B with an agreement that the loan would be returned in a month. However, when the time came, B failed to return the money. A then requested the money back from B at their workplace, and a witness overheard B admitting that he was experiencing financial difficulties. However, B in the court submission denies that he ever took a loan from A.

In this simple case, the facts were quite straightforward, but the AI understood the circumstances well and identified the legal question. When AI is used in the supportive role, this summary of the case could also be used in the judgement with some editing.

Suggestions on how the case could be solved:

In this situation, A may consider bringing a civil lawsuit against B to recover the money owed. The lack of a written agreement does not necessarily preclude a legal claim for repayment, as oral agreements can be legally binding if there is sufficient evidence to prove their existence and terms. The witnesses who heard B admit to remembering the loan but being unable to repay could provide evidence to support A’s claim. However, the exact legal process and outcome would depend on the laws and procedures of the jurisdiction where the case is brought.

This example demonstrated that AI is indeed a black box: it is impossible to track the internal logic or methodology it applied when generating the answer to the case. On the other hand, it is possible to follow the overall logic and reasoning presented in the answer. Hence, it meets the minimum criteria for a reasoned decision.

At the same time, this answer is rather broad because the prompt did not contain details. When a more detailed question is submitted together with the facts and a description of the role, AI may generate a more precise answer. For example, in this experiment, ChatGPT was further asked to solve the case under Lithuanian law and to provide links to the Lithuanian legislation. The AI adjusted its answer based on the provided details.

That answer is not reproduced in this article, because the ability to provide correct references to legal provisions is still under development in ChatGPT.2 The system often provides references to legal provisions and cases that do not exist – a phenomenon described as “hallucination”. One reason for hallucination could be the wide range of data the system is trained on. However, if a system were tailored for a specific court and trained only on the judicial decisions of that court, it could be more accurate in its references to the legal basis.
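
One simple mitigation for hallucinated references, sketched below as an illustration rather than an established court practice, is to verify every provision cited in a generated draft against a curated register of provisions that actually exist. The abbreviations and the register are hypothetical.

```python
# Citation verification against a curated register: flag any provision
# cited in a generated draft that is not known to exist. The abbreviations
# and register entries below are hypothetical.

import re

KNOWN_PROVISIONS = {"CC 6.870", "CC 6.871", "CPC 177"}   # hypothetical register

def unverified_citations(generated_text: str) -> set:
    cited = set(re.findall(r"\b(?:CC|CPC) \d+(?:\.\d+)?\b", generated_text))
    return cited - KNOWN_PROVISIONS

draft = ("Under CC 6.871 a loan above a set amount requires written form; "
         "see also CC 9.999.")
print(unverified_citations(draft))   # {'CC 9.999'} -> flag for human review
```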

This example also shows that AI systems could be offered as a service enabling individuals to assess their legal situation and obtain free legal advice on whether or not to bring a case to court, which could improve access to justice.

3.3. The right to a public hearing and appeal

The right to a fair trial can be achieved through a public hearing (Malhous v. the Czech Republic, 2001, par. 55 and par. 56). When the right to be heard is exercised, both parties have the opportunity to present their arguments, and the judge can ask clarifying questions that help establish the relevant facts of the case. As part of the procedure, witness statements are heard and submitted to the court. With AI as an autonomous decision-maker, the right to be heard cannot be fully guaranteed: AI could potentially be used to assess the arguments of the parties, but it would be difficult for it to imitate important characteristics of a public hearing, such as public scrutiny and the “human touch” – due attention, sympathy, and compassion – of the procedure.

The obligation to hold a public hearing, however, is not absolute (De Tommaso v. Italy, 2017). For example, in cases where only a question of law requires a decision or the question is technical in nature (Koottummel v. Austria, 2009, par. 19), the absence of a public hearing will not lead to a violation of the right to a fair trial. In some cases, the absence of a public hearing can be remedied at the appeal stage (Khrabrova v. Russia, 2012, par. 52). Hence, AI as an autonomous decision-maker could be used in cases where a public hearing is not crucial.

Conclusion

The integration of AI into the judicial system has raised many questions regarding its potential benefits and its legal and practical challenges. AI’s integration as a decision-making assistant or an autonomous decision-maker may expedite proceedings, reduce costs, and ensure broader access to justice. This article has endeavoured to provide a balanced examination of AI’s capabilities and limitations within the legal domain.

Before the deployment of AI technologies in courts, an evaluation of potential risks is imperative. This entails the establishment of a risk management system and detailed procedural rules aimed at mitigating these risks. The essence of such a framework is not merely to anticipate and address possible challenges but to ensure that AI’s integration into judicial processes is responsible.

Furthermore, the integration of AI into the judiciary requires a solid legal foundation that respects procedural norms and the principles enshrined in Article 6 of the ECHR. This legal framework must delineate the operational boundaries of AI within the judicial realm, ensuring that its deployment does not infringe upon the right to a fair trial.

In alignment with Article 22 of the GDPR, there must also be provisions for human intervention and the right to contest AI-generated decisions. This ensures that AI’s role in the judiciary does not override the fundamental rights of individuals, maintaining a human-centric approach to justice.

The potential for bias in logic-based systems and machine learning algorithms underscores the importance of careful programming and dataset selection. Biases in AI decisions often reflect pre-existing prejudices within the data they are trained on, highlighting the need for continuous oversight and evaluation of AI systems to prevent the perpetuation of discriminatory practices.

AI’s ability to simulate legal reasoning in simple cases underscores both the potential and the limitations of AI in legal contexts. These findings support a cautious yet optimistic engagement with AI technologies, emphasizing the importance of ongoing research given the rapid development of the technology. AI could be employed as an autonomous decision-maker in scenarios where public hearings are not deemed essential. This selective application of AI underscores the need for a nuanced understanding of where and how AI can contribute to the judicial process without compromising the quality of justice or public trust.

In conclusion, the responsible integration of AI into judicial systems presents a challenge that requires a multi-faceted approach. By adopting comprehensive evaluations, legal frameworks, bias mitigation strategies, and selective application, the complexities of AI integration could be navigated. This holistic approach not only leverages AI’s potential to enhance the judicial system but also ensures that technological advancements are employed in a manner that safeguards fundamental human rights and maintains the integrity of the judicial process.

Bibliography

Legal Acts

European Convention on Human Rights (1950), Council of Europe Treaty Series 005.

International Covenant on Civil and Political Rights (1966), UNTS, No.14668.

Regulation (EU) 2016/679 of the European Parliament and the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, OJ L 119, 04.05.2016.

Committee on Artificial Intelligence, Consolidated working draft of the Framework Convention on Artificial Intelligence, Human rights, Democracy and the Rule of Law, Council of Europe, CAI(2023)18, 07 July 2023.

UNESCO Recommendation on the Ethics of Artificial Intelligence (23.11.2021), UNESCO, 2022, SHS/BIO/PI/2021/1.

Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence and amending certain Union legislative acts, COM/2021/206 final, 21.04.2021.

European Ethical Charter on the use of Artificial Intelligence in judicial systems and their environment (04.12.2018), European Commission for the Efficiency of Justice (CEPEJ), Council of Europe, Strasbourg.

Scientific literature

Cane, P. (2020). Automated Decision-Making and Administrative Law. In: P. Cane (ed.) et al. The Oxford Handbook of Comparative Administrative Law, Oxford.

Coglianese, C., Lehr, D. (2017). Regulating by Robot: Administrative Decision Making in the Machine-Learning Era. Georgetown Law Journal, 105, 1156–1160.

Cofone, I. N. (2021). AI and Judicial Decision-Making. In: F. Martin-Barteau; T. Scassa, (eds.). Artificial Intelligence and the Law in Canada, Toronto: LexisNexis.

Contini, F. (2020). Artificial Intelligence and the Transformation of Humans, Law and Technology Interactions in Judicial Proceedings. Law, Tech & Hum. 2(1), 4–18.

Council of Europe, Guide on Article 6 of the European Convention on Human Rights: right to a fair trial (civil limb), 2022 [online]. Available at: https://ks.echr.coe.int/

Dixon, H. B. (2023). My Hallucinating Experience with ChatGPT. Judges’ Journal, 62(2), 37–39.

Ebers, M. (2021). Regulating Explainable AI in the European Union. An Overview of the Current Legal Framework(s). In: L. Colonna, S. Greenstein (eds.). Nordic Yearbook of Law and Informatics 2020: Law in the Era of Artificial Intelligence.

ELI-UNIDROIT Model European Rules of Civil Procedure (2021) [online]. Available at: https://www.unidroit.org/english/principles/civilprocedure/eli-unidroit-rules/200925-eli-unidroit-rules-e.pdf

Ethics Guidelines for Trustworthy AI (08.04.2019), High-Level Expert Group on Artificial Intelligence set up by the European Commission, ISBN 978-92-76-11998-2.

Fernandez, J. V. (2023). Artificial Intelligence in Government: Risks and Challenges of Algorithmic Governance in the Administrative State. 30 Ind. J. Global Legal Stud. (65).

Hassani, H., Silva, E. S. (2023). The Role of ChatGPT in Data Science: How AI-Assisted Conversational Interfaces Are Revolutionizing the Field. Big Data Cogn. Comput.

Huggins, A. (2020). Executive power in the digital age: Automation, statutory interpretation and administrative law. In: Boughey, Janina & Burton Crawford, Lisa (Eds.) Interpreting executive power, Federation Press, pp. 111–128.

Leanza, P., Pridal, O. (2014). Right to a Fair Trial. Wolters Kluwer Law and Business.

Zalnieriute, M., Bell, F. (2021). Technology and the Judicial Role. In: G. Appleby, A. Lynch (eds.). The Judge and the Court: Individual, Collegial and Institutional Judicial Dynamics in Australia, Cambridge University Press, pp. 116–141.

Surden, H. (2014). Machine Learning and Law. 89 Washington Law Review, p. 87.

Susskind, R. (2019). Online Courts and the Future of Justice, Oxford University Press.

Wu, X. (2023). Legal Governance of Discrimination Risks in Algorithmic Automated Decision Making. The Frontiers of Society, Science and Technology, 5(1), 68–72. https://dx.doi.org/10.25236/FSST.2023.050113

Court practice

Cañete de Goñi v. Spain [ECHR], No. 55782/00, [15.10.2002]. ECLI:CE:ECHR:2002:1015JUD005578200.

De Tommaso v. Italy [ECHR], No. 43395/09, [23.02.2017]. ECLI:CE:ECHR:2017:0223JUD004339509.

Fabris v. France [ECHR], No. 16574/08, [07.02.2013]. ECLI:CE:ECHR:2011:0721JUD001657408.

Guðmundur Andri Ástráðsson v. Iceland [ECHR], No. 26374/18, [01.12.2020]. ECLI:CE:ECHR:2019:0312JUD002637418.

Grzęda v. Poland [ECHR], No. 43572/18, [15.03.2022]. ECLI:CE:ECHR:2022:0315JUD004357218.

Hirvisaari v. Finland [ECHR], No. 49684/99, [27.09.2001]. ECLI:CE:ECHR:2000:1212DEC004968499.

Khrabrova v. Russia [ECHR], No. 18498/04, [02.10.2012]. ECLI:CE:ECHR:2012:1002JUD001849804.

Koottummel v. Austria [ECHR], No. 49616/06, [10.12.2009]. ECLI:CE:ECHR:2009:1210JUD004961606.

Micallef v. Malta [ECHR], No. 17056/06, [15.10.2009]. ECLI:CE:ECHR:2009:1015JUD001705606.

Mont Blanc Trading Ltd and Antares Titanium Trading Ltd v. Ukraine [ECHR], No. 11161/08, [14.01.2021]. ECLI:CE:ECHR:2021:0114JUD001116108.

Perez v. France [ECHR], No. 47287/99, [12.02.2004]. ECLI:CE:ECHR:2004:0212JUD004728799.

Ramos Nunes de Carvalho e Sá v. Portugal [ECHR], No. 55391/13, [06.11.2018]. ECLI:CE:ECHR:2018:1106JUD005539113.

Sokurenko and Strygun v. Ukraine [ECHR], No. 29458/04 and No. 29465/04, [20.07.2006]. ECLI:CE:ECHR:2006:0720JUD002945804.

Zayidov v. Azerbaijan [ECHR], No. 5386/10, [24.03.2022]. ECLI:CE:ECHR:2022:0324JUD000538610.

Other materials

Civil Resolution Tribunal, 2021/2022 Annual Report, Canada [online]. Available at: https://civilresolutionbc.ca/wp-content/uploads/CRT-Annual-Report-2021-2022.pdf [last accessed on August 14, 2023].

Daily Mail [online]. Available at: https://www.dailymail.co.uk/news/article-11712257/Colombian-judge-uses-ChatGPT-make-decision-legal-first.html [last accessed on August 12, 2023].

Hamza, F. (2023). ChatGPT and GDPR: Balancing AI innovation with data protection. DigWatch [online]. Available at: https://dig.watch/updates/chatgpt-and-gdpr-balancing-ai-innovation-with-data-protection [last accessed on August 10, 2023].

Lex Machina [online]. Available at: https://lexmachina.com/

Zain Kahn, LinkedIn [online]. Available at: https://www.linkedin.com/posts/zainkahn_1000-ai-tools-were-released-in-march-activity-7048285306101358592-4wAA [last accessed on June 10, 2023].

Inesa Stolper is a lecturer at the European Humanities University and concurrently pursues her doctoral studies at Mykolas Romeris University in Vilnius, Lithuania. Her primary research focus is on the application of digital technologies in court proceedings and dispute resolution. Inesa’s academic experience includes organizing a summer program focused on the intersection of human rights and technologies and delivering presentations and public lectures on technologies and justice. The title of the forthcoming dissertation is “The concept of e-justice from the perspective of the right to a fair trial”.



1 The paper was written in July 2023. At that time, the Artificial Intelligence Act was still a proposal and subject to further amendments.

2 For this experiment, the non-commercial version of ChatGPT-3 was used. It is possible that ChatGPT-4 may provide more accurate results.