Group Agency and Artificial Intelligence

Author(s):  
Christian List

Abstract: The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificially intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.

2021, Vol 8
Author(s):  
Eric Martínez ◽  
Christoph Winter

To what extent, if any, should the law protect sentient artificial intelligence (that is, AI that can feel pleasure or pain)? Here we surveyed United States adults (n = 1,061) on their views regarding granting 1) general legal protection, 2) legal personhood, and 3) standing to bring forth a lawsuit, with respect to sentient AI and eight other groups: humans in the jurisdiction, humans outside the jurisdiction, corporations, unions, non-human animals, the environment, humans living in the near future, and humans living in the far future. Roughly one-third of participants endorsed granting personhood and standing to sentient AI (assuming its existence) in at least some cases, the lowest level of endorsement for any group surveyed, and rated the desired level of protection for sentient AI lower than for all groups other than corporations. We further investigated and observed political differences in responses; liberals were more likely than conservatives to endorse legal protection and personhood for sentient AI. Taken together, these results suggest that laypeople are not by and large in favor of granting legal protection to AI, and that the ordinary conception of legal status, like codified legal doctrine, is not based on a mere capacity to feel pleasure and pain. At the same time, the observed political differences suggest that previous findings regarding political differences in empathy and moral circle expansion apply to artificially intelligent systems and extend partially, though not entirely, to legal consideration as well.


‘Social implications’ generally refers to anything that affects an individual, a community, or wider society. The social implications of artificial intelligence (AI) form an immensely important field of study, since AI technology will steadily continue to permeate other technologies and, inevitably, our society as a whole. Many of the social implications of this technological process are non-obvious and surprising. We should ask ourselves: what type of society do we want, and what role will AI play in influencing and shaping lives? Will people simply become consumers served by intelligent systems that respond to our every whim? Are we reaching a tipping point between convenience and dependency? How will AI affect social issues relating to housing, finance, privacy, poverty, and so on? Do we want a society where machines supplement (or augment) humans, or perhaps even substitute for them? It is important to be as clear as possible about the likely social implications of AI if it is truly to benefit individuals and society.


AI & Society, 2021
Author(s):  
Nello Cristianini ◽  
Teresa Scantamburlo ◽  
James Ladyman

Abstract: Social machines are systems formed by material and human elements interacting in a structured way. The use of digital platforms as mediators allows large numbers of humans to participate in such machines, which have interconnected AI and human components operating as a single system capable of highly sophisticated behaviour. Under certain conditions, such systems can be understood as autonomous goal-driven agents. Many popular online platforms can be regarded as instances of this class of agent. We argue that autonomous social machines provide a new paradigm for the design of intelligent systems, marking a new phase in AI. After describing the characteristics of goal-driven social machines, we discuss the consequences of their adoption, for the practice of artificial intelligence as well as for its regulation.


2021, pp. 11-25
Author(s):  
Daniel W. Tigard

Abstract: Technological innovations in healthcare, perhaps now more than ever, are presenting decisive opportunities for improvements in diagnostics, treatment, and overall quality of life. The use of artificial intelligence and big data processing, in particular, stands to revolutionize healthcare systems as we once knew them. But what effect do these technologies have on human agency and moral responsibility in healthcare? How can patients, practitioners, and the general public best respond to potential obscurities in responsibility? In this paper, I investigate the social and ethical challenges arising with newfound medical technologies, specifically the ways in which artificially intelligent systems may be threatening moral responsibility in the delivery of healthcare. I argue that if our ability to locate responsibility becomes threatened, we are left with a difficult choice of trade-offs. In short, it might seem that we should exercise extreme caution or even restraint in our use of state-of-the-art systems, but thereby lose out on such benefits as improved quality of care. Alternatively, we could embrace novel healthcare technologies, but in doing so we might need to loosen our commitment to locating moral responsibility when patients come to harm; for even if harms are fewer – say, as a result of data-driven diagnostics – it may be unclear who or what is responsible when things go wrong. What is clear, at least, is that the shift toward artificial intelligence and big data calls for significant revisions of our expectations about how, if at all, we might locate notions of responsibility in emerging models of healthcare.


2021, Vol 1 (1), pp. 29-36
Author(s):  
Igor Milinkovic

Abstract: The rapid development of artificial intelligence (AI) systems raises dilemmas regarding their moral and legal status. Can artificial intelligence possess moral status (significance), and under what conditions? Can one speak of the dignity of artificial intelligence as the basis of its moral status? According to some authors, if there were entities that had the capacities on which the dignity of human beings is based, they too would possess intrinsic dignity. If dignity is not an exclusive feature of human beings, such status could also be recognised in artificial intelligence entities. The first part of the paper deals with the problem of the moral status of artificial intelligence and the conditions that must be fulfilled for such a status to be recognised. A precondition for the moral status of artificial intelligence is its ability to make autonomous decisions. This part of the paper considers whether developing autonomous AI is justified or whether, as some authors suggest, the creation of AI agents capable of autonomous action should be avoided. The recognition of the moral status of artificial intelligence would bear on its legal status. The second part of the paper deals with the question of the justifiability of ascribing legal personhood to AI agents. Under what conditions would recognition of legal personhood for artificial intelligence be justified, and should its legal subjectivity be recognised in full or only partially (by ascribing to AI agents a “halfway status,” as some authors suggest)? The current state of the legal regulation of artificial intelligence is surveyed as well.


2021, Vol 30 (3), pp. 435-447
Author(s):  
Daniel W. Tigard

Abstract: Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral agents’ (AMAs) is inevitable. Still, this notion may seem merely to push back the problem, leaving those who have an interest in developing autonomous technology with a dilemma. We may need to scale back our efforts at deploying AMAs (or at least maintain human oversight); otherwise, we must rapidly and drastically update our moral and legal norms in a way that ensures responsibility for potentially avoidable harms. This paper invokes contemporary accounts of responsibility in order to show how artificially intelligent systems might be held responsible. Although many theorists are concerned enough to develop artificial conceptions of agency or to exploit our present inability to regulate valuable innovations, the proposal here highlights the importance of—and outlines a plausible foundation for—a workable notion of artificial moral responsibility.


AI & Society, 2020
Author(s):  
Jaana Parviainen ◽  
Mark Coeckelbergh

Abstract: A humanoid robot named ‘Sophia’ has sparked controversy since being granted citizenship and performing in media events all over the world. The company that made the robot, Hanson Robotics, has touted Sophia as the future of artificial intelligence (AI). Robot scientists and philosophers have been more pessimistic about its capabilities, describing Sophia as a sophisticated puppet or chatbot. Looking behind the rhetoric about Sophia’s citizenship and intelligence, and going beyond recent discussions of the moral status or legal personhood of AI robots, we analyse the performativity of Sophia from the perspective of what we call ‘political choreography’, drawing on phenomenological approaches to performance-oriented philosophy of technology. This paper proposes to interpret and discuss the world tour of Sophia as a political choreography that boosts the rise of the social robot market, rather than as a statement about robot citizenship or artificial intelligence. We argue that the media performances of the Sophia robot were choreographed to advance specific political interests. We illustrate our philosophical discussion with media material from the Sophia performances, which helps us to explore the mechanisms through which the media spectacle functions hand in hand with advancing the economic interests of technology industries and their governmental promoters. Using a phenomenological approach and attending to the movement of robots, we also criticize the notion of ‘embodied intelligence’ used in the context of social robotics and AI. In this way, we put the discussions about the robot’s rights or citizenship in the context of AI politics and economics.


Author(s):  
M. G. Koliada ◽  
T. I. Bugayova

The article discusses the history of the problem of using artificial intelligence systems in education and pedagogy. Two directions of its development are shown, “Computational Pedagogy” and “Educational Data Mining”, in which poorly studied aspects of the internal mechanisms of artificial intelligence systems in this field of activity are revealed. The main task is the problem of interfacing the system’s kernel with blocks of pedagogical and thematic databases, as well as with blocks of pedagogical diagnostics of the student and the teacher. The role of pedagogical diagnosis is shown as a clear reflection of the complex influence of factors and causes: it provides the intelligent system with timely and reliable information on how various causes intertwine in their interaction, which of them are currently dangerous, and where a decline in performance is to be expected. All components of the teaching and educational system are subject to diagnosis; without it, no pedagogical situation can be managed optimally. The means of obtaining information about students are considered, as well as the “mechanisms” by which intelligent systems operate, based on innovative ideas drawn from advanced pedagogical experience in diagnosing the professionalism of a teacher. Ways of assessing the teacher’s skill on the basis of ideas developed by American researchers are shown. Among them are the approaches of D. Rajonz and U. Bronfenbrenner, who put at the forefront the teacher’s attitude towards students, their views, and their intellectual and emotional characteristics. An assessment of the teacher’s work according to N. Flanders’s system of “Interaction Analysis” is also proposed, through the mechanism of recording such elements as the verbal behavior of the teacher and the events of the lesson in their sequence. A system for assessing the professionalism of a teacher according to B. O. Smith and M. O. Meux is examined, through the study of the logic of teaching and the logical operations used in the lesson. Samples of forms of external communication between the intelligent system and the learning environment are given. The productive solutions found by the system can be delivered in a form acceptable and comfortable for both students and the teacher, following one of three approaches: the first represents artificial intelligence in this area as a robot in humanlike form; the second confines itself to specially organized input-output systems for the targeted transmission of effective methodological recommendations and instructions to students and teachers; the third holds that practice will force the invention of completely new hybrid forms of interaction between the two sides, in the form of interactive educational environments somewhat resembling the educational spaces of virtual reality.
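As a rough illustration of the kind of record the Flanders “Interaction Analysis” works over, here is a minimal Python sketch of tallying classroom observation codes. The ten category labels follow the standard Flanders scheme, but the coded lesson sequence and the function name are invented for illustration and do not come from the article.

```python
# Toy sketch of Flanders-style Interaction Analysis tallying (illustrative only).
from collections import Counter

# The ten standard Flanders categories; 1-7 are teacher talk, 8-9 student talk.
FIAC = {
    1: "teacher accepts feelings",
    2: "teacher praises or encourages",
    3: "teacher accepts or uses student ideas",
    4: "teacher asks questions",
    5: "teacher lectures",
    6: "teacher gives directions",
    7: "teacher criticizes or justifies authority",
    8: "student talk: response",
    9: "student talk: initiation",
    10: "silence or confusion",
}

def summarize(codes):
    """Tally category frequencies and the share of teacher talk
    (categories 1-7) in a sequence of 3-second observation codes."""
    counts = Counter(codes)
    teacher_talk = sum(counts[c] for c in range(1, 8))
    return counts, teacher_talk / len(codes)

# Hypothetical 3-second observations from one lesson segment.
lesson = [5, 5, 4, 8, 3, 4, 8, 8, 2, 5, 5, 6, 10, 9, 9, 3]
counts, teacher_ratio = summarize(lesson)
print(f"teacher talk: {teacher_ratio:.0%}")
for code, n in counts.most_common():
    print(f"{FIAC[code]}: {n}")
```

In a fuller treatment, consecutive code pairs are also entered into a 10 × 10 transition matrix, which is what makes the sequence of lesson events, and not merely their frequencies, analysable.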


Author(s):  
Michael Moehler

This book develops a novel multilevel social contract theory that, in contrast to existing theories in the liberal tradition, does not merely assume a restricted form of reasonable moral pluralism, but is tailored to the conditions of deeply morally pluralistic societies that may be populated by liberal moral agents, nonliberal moral agents, and, according to the traditional understanding of morality, nonmoral agents alike. To develop this theory, the book draws on the history of the social contract tradition, especially the work of Hobbes, Hume, Kant, Rawls, and Gauthier, as well as on the work of some of the critics of this tradition, such as Sen and Gaus. The two-level contractarian theory holds that morality in its best contractarian version for the conditions of deeply morally pluralistic societies entails Humean, Hobbesian, and Kantian moral features. The theory defines the minimal behavioral restrictions that are necessary to ensure, compared to violent conflict resolution, mutually beneficial peaceful long-term cooperation in deeply morally pluralistic societies. The theory minimizes the problem of compliance by maximally respecting the interests of all members of society. Despite its ideal nature, the theory is, in principle, applicable to the real world and, for the conditions described, most promising for securing mutually beneficial peaceful long-term cooperation in a world in which a fully just society, due to moral diversity, is unattainable. If Rawls’ intention was to carry the traditional social contract argument to a higher level of abstraction, then the two-level contractarian theory brings it back down to earth.

