Teisė ISSN 1392-1274 eISSN 2424-6050

2022, Vol. 122, pp. 150–158 DOI: https://doi.org/10.15388/Teise.2022.122.10

Legal Personhood for Artificial Intelligence: Pro, Contra, Abstain?

Kateryna Militsyna
PhD student at the Private International Law Chair
of the Institute of International Relations
of Taras Shevchenko National University of Kyiv
36/1 Y. Illienka, Kyiv, Ukraine
Department phone (044) 481-44-43
Email: kafedra344@ukr.net

This article examines the legal personhood of artificial intelligence as one of the existing options for regulating AI and coping with the challenges arising out of its functioning. It begins with the search for a definition of AI and goes on to consider the arguments against the legal personhood of AI, the possible forms of such legal personhood, and the factors to be taken into account in devising it. The article ends with our vision of the legal personhood of AI.
Keywords: artificial intelligence, legal personhood, electronic personhood, civil law.

Dirbtinio intelekto juridinis asmuo: už, prieš, susilaikyti?

Šiame straipsnyje rašoma apie dirbtinio intelekto juridinio asmens statusą kaip vieną iš esamų dirbtinio intelekto reguliavimo galimybių ir būdą susidoroti su iššūkiais, kylančiais dėl jo veikimo. Pradedama nuo dirbtinio intelekto apibrėžties paieškų, toliau nagrinėjami argumentai už ir prieš dirbtinio intelekto juridinio asmens statusą ir veiksniai, į kuriuos turėtų būti atsižvelgta kuriant dirbtinio intelekto juridinio asmens statusą. Straipsnis baigiamas autorės nuomone dėl dirbtinio intelekto juridinio asmens statuso.
Pagrindiniai žodžiai: dirbtinis intelektas, juridinis asmuo, elektroninis asmuo, civilinė teisė.

__________

Received: 30/09/2021. Accepted: 17/01/2022
Copyright © 2022 Kateryna Militsyna. Published by
Vilnius University Press
This is an Open Access article distributed under the terms of the
Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Introduction

Artificial intelligence (AI) is a technology that originated in the 1950s but remained in a “dormant” state until recently, when it began to scale at a rapid pace. Despite such a seemingly big leap, there is a common belief that we are at an empirical stage in the field of AI at present. We create, test, observe and analyze the things we witness.

However, even at this initial stage, AI has already presented many challenges to humanity. Some of them fall within the domain of law and are closely related to issues of morality, ethics, religion, etc. In particular, the perception of AI, as well as of the results of its activities, is among such challenges. Considering that “AI entities are designed to operate at an increasing distance from their developers and owners”, the accountability gap is another one (Banteka, 2021, p. 539 citing Koops, et al., 2010, p. 517). It is extremely important to find an approach to such challenges that is perceived as universal. In the absence of such an approach, we experience turbulence caused by the radically different solutions that are already being adopted by states. In the legal domain, this causes great uncertainty. In the future, states with different approaches may refuse to accept those of other states, which will increase jurisdictional issues as well as problems associated with the recognition and enforcement of court decisions.

The legal personhood of AI is one of the solutions to the abovementioned challenges and the subject matter of this article. The concept of AI legal personhood has also been highlighted in the works of A. Foerst, P. L. Lau, M. Laukyte, M. Simmler, N. Markwalder, U. Pagallo and other scholars. In this article, we will consider the arguments made against the legal personhood of AI, the possible forms of such legal personhood, and what is taken into account in devising it. We will analyze whether AI legal personhood is an adequate, effective and timely solution to the identified challenges in the legal domain. Before that, however, we will dwell on what artificial intelligence is, which will allow for a more thorough approach to the parts of the article regarding the legal personhood of AI. Finally, the article will conclude with our vision of AI legal personhood based on the results obtained via analytical and comparative methods.

1. What is AI?

The answer to our question “Legal personhood for artificial intelligence: pro, contra, abstain?” depends directly on how we define AI.

Logically, as AI develops, the answer to the posed question may also change. The author therefore takes the opportunity to make a reservation: if new AI capabilities appear that are not even supposed or predicted at the moment, the answer may differ from the conclusions presented in this article and may need modification.

Now we proceed to figuring out what we mean by AI when we try to find the answer to the main question of the article.

1.1. AI is more than physically embodied robots

Many scientific studies that raise the issue of “electronic personhood” focus on physically embodied robots – humanoid robots (Dremliuga, et al., 2019, p. 105; Lau, 2019, p. 49; Simmler, et al., 2019, pp. 6–7). In the AI hierarchy, humanoid robots will probably continue to take first place, at least in societal perceptions of artificial intelligence, since they are considered the most difficult to develop at the moment.

In turn, the desire of scientists to reach the level of strong AI in physically embodied robots is related to the following factors.

First, people accept the ones of their kind better. The more a robot resembles the people it works for, the more these people are able to project onto it phenomena such as friendship, warmth, empathy, etc. Second, it is a human’s world in the sense that modern humans adjust everything around them to suit their desires and needs. If a robot’s body reflects human dimensions, it will be much easier for it to navigate through human households (Foerst, 1999, p. 374).

There is also a view that a robot is the best way to visualize and grasp the presence and functioning of AI among us, because it is through the mind’s eye that we can best understand an abstract concept. Mass media also play an important role. Thanks to them, robots are becoming a familiar idea, and we are accordingly familiarizing ourselves with their increasing role in the environments we inhabit (Laukyte, 2021, p. 446). Moreover, according to general studies in the field of human-computer interaction, physical embodiment has positive effects on the feeling of an artificial agent’s social presence (Somaya, et al., 2018, p. 279 citing Lee, et al., 2006). In this regard, it is interesting that using anthropomorphic language (such as personified names) in relation to robots can impact how we perceive and treat them (Ibid, p. 278 citing Darling, 2015). It is probably due to this fact that we have humanoid robots such as Sophia, Ai-Da and Grace.

Another reason for the scientists’ yearning for embodiment is the belief that intelligence, according to the embodiment thesis, cannot be implemented on a disembodied machine, as it emerges only in minds that are embedded in a world (Foerst, 1999, p. 377). However, in this case, we stumble upon two obstacles: the question of what “intelligence” is and, depending on the answer, whether machines can possess it.

In this article, we are not limited to physically embodied robots. When looking for an answer to the main question of the article, we are guided by the characteristics and criteria of AI.

1.2. AI: betwixt and between narrow and general

It is common to distinguish between narrow AI (ANI) and general AI (AGI). Despite the lack of clear parameters for these concepts, when we face AI today, we are dealing with ANI.

ANI.

The characteristics that distinguish ANI from the technologies that existed before include high computing power and autonomy. The latter is manifested through self-training, the ability to learn by accumulating personal experience, and the generation of solutions to problems based on an independent analysis of various scenarios without the input of a developer (Banteka, 2021, p. 544 citing Čerka et al., 2015, p. 378). Though AI may produce results that resemble or even exceed human ones, in the case of ANI such autonomy is limited to a given subject area, i.e. ANI is confined to the task it is designed to perform and cannot generalize a solution to produce AI behavior of general application across different tasks (Ibid, p. 543 citing Brundage, et al., 2018, p. 7). This limitation is responsible for the narrowness of ANI and separates it from general AI.

AGI.

There are different versions of what should be considered the threshold of the transition from narrow AI to general AI.

As a criterion for identifying general AI, Victor Kantor proposes the AI’s ability to “guess”, i.e. to understand what is not accurately formulated in the task and to independently fill in this inaccuracy (Kantor, 2020). In search of criteria for AI as a hypothetical absolute towards which we strive, Tatiana Shavrina gives the following: i) multimodality (the ability to receive information from different sources and process it all together); ii) being multidomain (the ability to work equally well in different subject areas and gradually explore new ones); and iii) the ability to autonomously acquire new skills (Shavrina, 2020).

The author of this article pays particular attention to the test proposed by Anatolii Starostin and based on the Turing test. Starostin’s test is as follows. A human communicates with the machine through messages in natural language. In the course of the conversation, the person teaches the machine to play some game. It can be an already existing game or one invented by the person. It is essential that in its initial state the machine does not know this game and its rules. If during the conversation the machine can learn the rules of the game and start playing with the person (not necessarily winning), the machine passes the test (Skorinkin, et al., 2020). Greatly simplifying, we can conclude that general AI is AI that can take knowledge and experience from one area and apply them in another on its own.

At the same time, even if AI meets these criteria, this will not automatically presume intelligence in the sense of human intelligence. AI is also unlikely ever to acquire consciousness. Rather, there may be a manifold increase in its computational capabilities, the creation of a language for interaction among networks or of one universal language, or another attempt at “artificial” imitation of humans. Therefore, at the moment we are still at the stage of narrow AI, which has already posed such challenges as the impossibility of explaining its decisions, self-training, confinement to a narrow area, the scale of its computational powers, etc. The personhood of AI is one of the solutions proposed today.

2. Arguments against legal personhood of AI

The political willingness to act may be present, but the vision is not.

Nathalie Smuha (Smuha, 2020)

Endowing artificial intelligence with legal personhood is an idea that causes a lot of controversy. Arguments against the legal personhood of AI often include the following.

2.1. Ethical-moral-religious arguments

The most heated discussions take place at the crossroads of law, ethics, morality and religion. There is an opinion that the endowment of AI with legal personhood is not limited exclusively to the legal plane; therefore, we should stop using the fiction of the person for something that is not a person in the most original and primary sense of this word (Laukyte, 2021, p. 445).

There are also obstacles to furthering the idea of AI personhood on the side of religion. For example, Western Christianity has always lived with the motif of hubris as a sin ingrained in the social consciousness (Foerst, 1999, p. 376). Accordingly, only God can create.

Given that the law largely employs folk psychology (Banteka, 2021, p. 563 citing Morse, 2004, pp. 371–373), it is also worth noting that the latter puts forward the argument that there is “a certain intangible ‘something’ that is essential for personhood: be it mind, soul, feelings, intentionality, consciousness, or free will.” In the absence of this something, “it is difficult for the commonsense human to conceptualize personhood” (Banteka, 2021, pp. 563–564).

In the section below, we will dwell in more detail on such an intangible something that is inherent in humans.

2.2. Comparison with the legal personhood of humans

Continuing the line of the above arguments, we proceed to arguments that oppose AI personhood based on a comparison with the legal personhood of humans.

The main arguments against AI as a person of law rely on a lack of some vital elements of human legal personhood – the missing-something arguments (Dremliuga, et al., 2019, p. 106 citing Solum, 1992, p. 1262). Scholars emphasize that unlike humans, AI has no consciousness or intentionality (Lau, 2019, p. 56), feelings, desires, interests, creativity or any other human qualities (Dremliuga, et al., 2019, p. 106).

Again, we should point out that this argument is based on a comparison with the legal personhood of humans (Lau, 2019, p. 56). At the same time, Pin Lean Lau also cites the theory put forward by Alexis Dyschkant, according to which legal personhood should not simply be made contingent on humanity; and that we should “divorce the capacities-focused definition of legal personhood from the species-based definition of humanity” (Lau, 2019, p. 57 citing Dyschkant, 2015, p. 2075).

Speaking of AI personhood, in this article we do not claim that the scope of AI personhood should be equal to the scope of personhood that individuals possess by virtue of being human beings. A comparison with the legal personhood of legal entities is more appropriate here. However, there is also an opinion that in placing corporations under the same rubric we use for humans – namely, persons – we have endowed corporations with powers which they could later use against humans (Laukyte, 2021, p. 450). Endowing AI with personhood will only complicate the situation. To this end, we should place AI and corporations under a new metaphor of the intelligent machine (Laukyte, 2021, p. 445).

Having said that, we continue to search for an answer to the main question, “Legal personhood for artificial intelligence: pro, contra, abstain?”

2.3. Social realities

Analyzing the issues of criminal responsibility and the legal personhood of humanoid robots, Monika Simmler and Nora Markwalder argue that the intuitive search for requirements like “consciousness” or a “sense of self” does not refer to biophysical categories, but to social categories that describe which traits we attribute to persons in order to derive responsibility. Personhood and responsibility are products of the social system. The category “person” (as opposed to “human”) is one created by and for the social system (Simmler, et al., 2019, pp. 17–18). Thus, Simmler and Markwalder make the criminal responsibility and legal personhood of robots dependent on social recognition, concluding that e-personhood may be a consequence of social recognition, and not its cause or basis (Ibid, p. 20). There is also an opinion which does not claim that social recognition is necessary or sufficient for legal personhood, but finds that its lack is a crucial obstacle for untypical legal persons (Dremliuga, et al., 2019, p. 110).

Indeed, the fact that we face AI in our everyday life does not mean that AI has gained social recognition as a legal person. But can we say that the social recognition of legal entities occurred before endowing them with personhood? Have legal entities received true social recognition by now? Perhaps we have agreed on such a legal convention for the sake of development and legal certainty.

2.4. Civil law v. Criminal law

It is believed that the concept of e-personhood may be possible in the context of civil law but not criminal law. This is due to the fact that civil law mainly deals with monetary compensation, while criminal law deals with punishment (Simmler, et al., 2019, p. 19). The author agrees that, given the peculiarities of criminal law, at the moment we can only talk about a “truncated” legal personhood of AI that does not extend to criminal law.

As an interim conclusion, we have briefly outlined the arguments against the legal personhood of AI and will take them into account in the following sections of this article.

3. Personhood of AI: which way to choose?

As a concept, the personhood of AI raises a lot of controversies both around it and “inside” it. In other words, there is no unanimity about the “expression” of this concept even among those who accept the idea of personhood favorably.

Whichever expression is chosen, it seems to us that the starting point should be the idea that we do not have to fit AI into the existing conceptual boxes of person or property. Instead, as David J. Gunkel suggests, it might be prudent to begin to devise a more nuanced moral and legal ontology, one that recognizes that the world is not binary and that responding to the opportunities and challenges presented in the face of others requires us to think otherwise (Gunkel, et al., 2021, p. 482).

All this means the opportunity to choose how to “fill” the concept of e-personhood, which can cover not only AI. Such a choice 1) must take into account what already exists (the legal personhood of individuals and legal entities) in order to prevent contradictions, disagreements, and violations, 2) may consider the regimes that already exist when devising its own, but 3) is not obliged to copy them. Now we turn to possible options of e-personhood.

Roman Dremliuga et al. suggest that in the case of civil application of AI there are two options: AI could act as a legal person or as an agent of business relations with other legal persons. Both options presuppose that AI becomes a full-fledged participant in civil law relations with the ability to conclude deals, sell things, and provide services on its own (Dremliuga, et al., 2019, pp. 108–109). Given that AI does not have intentions and desires in the human understanding of these terms, the researchers consider that there should be a proxy who represents the robot’s will (Dremliuga, et al., 2019, pp. 109–110).

There is also an opinion that the idea of full legal personhood lacks timeliness and adequacy. Ugo Pagallo stresses that the reason why legal systems should not confer legal personhood on “purely synthetic entities” has to do with moral grounds and abuse of the legal person status by robots and those that make them, i.e., either robots as liability shields, or robots as themselves unaccountable rights violators (Pagallo, 2018, p. 4). He suggests we should distinguish between personhood and agenthood (Pagallo, 2018, p. 1). Agenthood is a more acceptable option than likening the status of AI to the personhood of a legal entity (Pagallo, 2018, pp. 5–6). We can consider new forms of accountability and liability for the activities of AI robots in contracts and business law, e.g., new forms of legal agenthood in cases of complex distributed responsibility, registries for artificial agents, insurance policies, or modern forms of the ancient Roman legal mechanism of peculium, namely, the sum of money or property granted by the head of the household to a slave or son-in-power (Pagallo, 2018, pp. 1–6).

However, creating a modern form of peculium is fraught with difficulty from a psychological point of view and may impact relations between humans: we will rightly see the behavior of our AI as a consumable product rather than an expression of personal life, but our instinctive empathy for our AI tools will make us experience them as if they were natural persons – whose behavior we consume (Gunkel, et al., 2021, p. 481).

4. Our concept of legal personhood of AI

Given the current level of AI, our concept is also based on the idea that the time is not ripe for endowing AI with a “full” (encompassing criminal law) legal personhood. If we do consider the option of legal personhood of AI, for now we can only talk about a “truncated” version of such legal personhood.

The truncation of the AI legal personhood consists in the fact that it does not extend to criminal law.

In doing so, it should be emphasized that endowing an AI with the incidents of legal personhood that enable it to function as an independent commercial actor does not bespeak any acceptance of the notion that AIs are endowed with ultimate value. The legal personhood of an AI can rather serve various purposes that might have nothing to do with the AI itself, such as economic efficiency or risk allocation (Kurki, 2019, p. 189).

Thus, in developing the legal personhood of AI, from our point of view, we should follow the practical steps below.

We need to define the “absolute no”, i.e. the directions of AI development and the AI functions and characteristics which, if achieved, can be dangerous. For example, if some AI technology is considered to be on the verge of becoming AGI, it must not be connected to the global network until careful testing is done.

For the purposes of identifying the cause of an AI malfunction, tracking the impact of the environment on the technology, and preventing accidents in the future, AI must be equipped with a “black box”. In the case of AI, the black box is a conditional name: by this we mean a technology analogous to existing black boxes that will serve the purposes described above.

AI technologies which, due to their characteristics, are recognized as a source of danger after applying the technology-specific approach must be subject to compulsory insurance. An owner, through regular contributions, must create and fill a reserve fund until the fund reaches the amount established by law. The money of the reserve fund will be used in cases where the insurance payment does not fully cover the losses. If the AI is sold to a new owner, the latter also “receives” the fund. If money is taken from the fund to compensate for losses, the owner will have to replenish it until the fund again reaches the amount established by law.
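The fund mechanics just described (contributions capped at a statutory amount, draws only for the part of the losses that the insurance payout does not cover, transfer of the fund on sale, and mandatory replenishment) can be sketched as a simple state machine. The Python sketch below is purely illustrative: the class and method names and all amounts are hypothetical assumptions of this article, not statutory figures or an existing system.

```python
# Illustrative sketch of the proposed reserve-fund mechanism.
# All names and amounts are hypothetical, not statutory figures.

class ReserveFund:
    def __init__(self, statutory_amount: float):
        self.statutory_amount = statutory_amount  # amount established by law
        self.balance = 0.0

    def contribute(self, amount: float) -> None:
        """Owner's regular contribution; the fund is filled only up to the statutory amount."""
        self.balance = min(self.balance + amount, self.statutory_amount)

    def shortfall(self) -> float:
        """How much the owner must still pay in to bring the fund back to the statutory amount."""
        return self.statutory_amount - self.balance

    def cover_losses(self, losses: float, insurance_payout: float) -> float:
        """Draw on the fund only for losses the insurance payout does not cover.
        Returns the part of the losses that remains uncompensated."""
        uncovered = max(losses - insurance_payout, 0.0)
        drawn = min(uncovered, self.balance)
        self.balance -= drawn
        return uncovered - drawn

    def transfer_to_new_owner(self) -> "ReserveFund":
        """On sale of the AI, the new owner 'receives' the fund as-is."""
        return self


# Hypothetical scenario: statutory amount 100,000; fund partly filled.
fund = ReserveFund(statutory_amount=100_000)
fund.contribute(60_000)
remaining = fund.cover_losses(losses=80_000, insurance_payout=50_000)
# 30,000 of losses exceeded the payout and was drawn from the fund;
# the owner must now replenish the fund up to the statutory amount.
```

The cap in `contribute` and the `shortfall` method mirror the two replenishment rules in the text: the fund is filled until it reaches the statutory amount, and any draw triggers an obligation to top it back up.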

Also, dangerous AI technologies must be registered. There are three options for such registration. The first option is national: registration in every country. The second option is transnational: registration in one country, combined with the creation of an international register containing information about the country of registration of the AI technology. If this option is taken, there will be a need to resolve issues that lie in the field of private international law. The third option is international: the creation of a single international register. The attractiveness of a single international register is increased by the fact that different states have different levels of openness of data on legal entities, which may bring about further challenges. If the world decides to coordinate its actions on the development of AI, resulting in the creation of a single international organization, the register could be maintained under the auspices of such an organization, although the latter thought seems utopian at the moment.

Conclusions

The legal personhood of AI is one of the options put forward today to cope with the existing challenges in the field of AI. Some argue that the adoption of this concept will open a Pandora’s box, from which more challenges and questions than currently exist will spread. Others, on the contrary, insist that we are only talking about personhood from a legal point of view; the adoption of this concept therefore does not equate robots with humans, but merely provides relations involving AI with a framework so that they become clear and predictable.

In this article, we have tried to determine what issues should be taken into account if the concept of e-personhood (which may cover AI and similar technologies) prevails over other proposals. In our opinion, at present, such a personhood should exclude criminal law and be founded on a technology-specific approach. AI technologies which this approach recognizes as “dangerous” should be subject to registration and insurance obligations, together with the “absolute no” provisions and, if possible, a black-box requirement. Under any circumstances, the concept should be thoroughly scrutinized.

References

Academic literature

Banteka, N. (2021). Artificially Intelligent Persons. Houston law review, 58 (3), 537–596.

Brundage, et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, https://doi.org/10.17863/CAM.22520

Čerka, P., Grigienė, J. and Sirbikytė, G. (2015). Liability for damages caused by artificial intelligence. The computer law and security report, 31(3), 376–389.

Darling, K. (2015). Who’s Johnny? Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy. In: Lin, P., Bekey, G., Abney, K. and Jenkins, R. (eds) (2017). Robot Ethics 2.0. Oxford University Press, http://dx.doi.org/10.2139/ssrn.2588669

Dremliuga, R., Kuznetcov, P. and Mamychev, A. (2019). Criteria for Recognition of AI as a Legal Person. Journal of politics and law (Toronto), 12(3), 105–112. https://doi.org/10.5539/jpl.v12n3p105

Dyschkant, A. (2015). Legal personhood: how we are getting it wrong. University of Illinois law review, 2015(5), 2075–2109.

Foerst, A. (1999). Artificial sociability: from embodied AI toward new understandings of personhood. Technology in society, 21(4), 373–386.

Gunkel, D. J. and Wales, J. J. (2021). Debate: what is personhood in the age of AI? AI & society, 36(2), 473–486, https://doi.org/10.1007/s00146-020-01129-1

Koops, B. J., Hildebrandt, M. and Jaquet-Chiffelle, D. O. (2010). Bridging the accountability gap: Rights for new entities in the information society? Minnesota journal of law, science & technology, 11(2), 497–561.

Kurki, V. A. J. (2019). A Theory of Legal Personhood. First edn. Oxford: Oxford University Press.

Lau, P. L. (2019). The Extension of Legal Personhood in Artificial Intelligence. Revista de Bioética y Derecho, 46, 47–66.

Laukyte, M. (2021). The intelligent machine: a new metaphor through which to understand both corporations and AI. AI & society, 36(2), 445–456.

Lee, K. M., Jung, Y., Kim, J. and Kim, S. R. (2006). Are physically embodied social agents better than disembodied social agents?: The effects of physical embodiment, tactile interaction, and people’s loneliness in human–robot interaction. International journal of human-computer studies, 64(10), 962–973, https://doi.org/10.1016/j.ijhcs.2006.05.002

Morse, S. J. (2004). Reason, results, and criminal responsibility. University of Illinois law review, 2004(2), 363–444.

Pagallo, U. (2018). Vital, Sophia, and Co. – The Quest for the Legal Personhood of Robots. Information, 9(9):230, 1–11, https://doi.org/10.3390/info9090230

Simmler, M. and Markwalder, N. (2019). Guilty Robots? – Rethinking the Nature of Culpability and Legal Personhood in an Age of Artificial Intelligence. Criminal law forum, 30(1), 1–31, https://doi.org/10.1007/s10609-018-9360-0

Solum, L.B. (1992). Legal personhood for artificial intelligences. North Carolina law review, 70(4), 1231–1287.

Somaya, D. and Varshney, L. (2018). Embodiment, Anthropomorphism, and Intellectual Property Rights for AI Creations. ACM, 278–283.

Electronic publications

Bertolini, A. (2020). Artificial Intelligence and Civil Liability. European Parliament [online]. Available at: https://www.europarl.europa.eu/RegData/etudes/STUD/2020/621926/IPOL_STU(2020)621926_EN.pdf [Accessed 12 Sep. 2021].

Smuha, N. (2020). Europe’s approach to AI governance: time for a vision. Friends of Europe [online]. Available at: https://www.friendsofeurope.org/insights/europes-approach-to-ai-governance-time-for-a-vision/ [Accessed 27 Aug. 2021].

Podcasts

Kantor, V. (2020). Kak obuchat datasaentistov, igraia v shliapu. Neopoznannyi Iskusstvennyi Intellekt, 3. Available at: https://podcasts.apple.com/ru/podcast/виктор-кантор-как-обучать-датасаентистов-играя-в-шляпу/id1542538163?i=1000503077892 [Accessed 26 Aug. 2021].

Shavrina, T. (2020). Kak lingvisty delaiut iskusstvennyi intellekt. Neopoznannyi Iskusstvennyi Intellekt, 2. Available at: https://podcasts.apple.com/ru/podcast/татьяна-шаврина-как-лингвисты-делают-искусственный-интеллект/id1542538163?i=1000502348244 [Accessed 26 Aug. 2021].

Skorinkin, D. and Starostin, A. (2020). Kak priiti k silnomu iskusstvennomu intellektu? Pervyi vypusk podkasta NII. Neopoznannyi Iskusstvennyi Intellekt, 1. Available at: https://podcasts.apple.com/ru/podcast/как-прийти-к-сильному-искусственному-интеллекту-первый/id1542538163?i=1000501599523 [Accessed 26 Aug. 2021].

Legal Personhood for Artificial Intelligence: Pro, Contra, Abstain?

Kateryna Militsyna
(Taras Shevchenko National University of Kyiv)

Summary

Artificial intelligence (AI) is a technology that originated in the 1950s but remained in a “dormant” state until recently, when it began to scale at a rapid pace. By now, it has already presented many challenges to humanity. The legal personhood of AI is one of the solutions to the abovementioned challenges proposed today.

Before diving into the concept of legal personhood of AI, the article dwells on what artificial intelligence is. It points out that AI is not limited to physically embodied robots. In the absence of a universally accepted definition, the article is concerned with the criteria of AI, distinguishing narrow AI (ANI) from general AI (AGI). It reaffirms that today we are still at the stage of ANI.

The article then analyses the arguments against the legal personhood of AI and the existing options of such legal personhood. Finally, it presents our own vision of what should be taken into account if the concept of e-personhood (which may cover AI and similar technologies) prevails over other proposals. In our opinion, at present, such a personhood should exclude criminal law and be founded on a technology-specific approach. AI technologies which this approach recognizes as “dangerous” should be subject to registration and insurance obligations, together with the “absolute no” provisions and, if possible, a black-box requirement.

Dirbtinio intelekto juridinis asmuo: už, prieš, susilaikyti?

Kateryna Militsyna
(Kijevo nacionalinis Taraso Ševčenkos universitetas)

Santrauka

Dirbtinio intelekto technologija, atsiradusi praėjusio amžiaus šeštame dešimtmetyje, iki šiol išlikusi „neaktyvios“ būsenos, kai pradėjo sparčiai plėstis. Ligi šiolei ji žmonijai pateikė daug iššūkių. Dirbtinio intelekto juridinis asmuo yra vienas iš siūlomų iššūkių sprendimų.

Prieš gilinantis į dirbtinio intelekto juridinio asmens sampratą, straipsnyje aptariama, kas yra dirbtinis intelektas. Nurodoma, kad dirbtinis intelektas neapsiriboja fiziškai įkūnytais robotais. Nesant visuotinai vartojamo apibrėžimo, straipsnyje analizuojami dirbtinio intelekto kriterijai, siaurojo dirbtinio intelekto atskirtis nuo bendrojo dirbtinio intelekto. Tai dar kartą patvirtina, kad vis dar esame siaurojo dirbtinio intelekto stadijoje.

Toliau straipsnyje analizuojami argumentai už ir prieš dirbtinio intelekto juridinio asmens statuso įtvirtinimą. Galiausiai pateikiama autorės vizija, į ką reikėtų atsižvelgti, jei elektroninės asmenybės samprata (kuri gali apimti dirbtinį intelektą ir panašias technologijas) laimėtų prieš kitus pasiūlymus. Autorės nuomone, šiuo metu tokia asmenybė neturėtų būti įtraukta į baudžiamąją teisę ir būti pagrįsta technologiniu požiūriu. Dirbtinio intelekto technologijoms, kurios, pritaikius šį metodą, bus laikomos „pavojingomis“, turėtų būti taikomi registracijos ir draudimo įpareigojimai, taip pat „visiškai ne“ nuostatos ir, jei įmanoma, „juodosios dėžės“ reikalavimas.

Kateryna Militsyna is a PhD student at the Private International Law Chair of the Institute of International Relations of Taras Shevchenko National University of Kyiv. She has a keen interest in private international law, particularly IP Law with a focus on AI-related issues from both national and international perspectives.

Kateryna Militsyna yra Kijevo nacionalinio Taraso Ševčenkos universiteto Tarptautinių santykių instituto Tarptautinės privatinės teisės katedros doktorantė. Ji labai domisi tarptautine privatine teise, ypač intelektinės nuosavybės teise, daugiausia dėmesio skiria su dirbtiniu intelektu susijusiems klausimams tiek nacionaliniu, tiek tarptautiniu požiūriu.