A Neo-Aristotelian perspective on the need for artificial moral agents (AMAs)

AI & Society ◽  
2021 ◽  
Author(s):  
Alejo José G. Sison ◽  
Dulce M. Redín

Abstract We examine Van Wynsberghe and Robbins's (Sci Eng Ethics 25:719–735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (AI Soc, 2020, 10.1007/s00146-020-01089-6), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins's essay nor Formosa and Ryan's is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for AMAs, while the latter opts for “argumentative breadth over depth”, aiming to provide “the essential groundwork for making an all things considered judgment regarding the moral case for building AMAs” (Formosa and Ryan 2019, pp. 1–2). Although this strategy may benefit their acceptability, it may also detract from their ethical rootedness, coherence, and persuasiveness, characteristics often associated with consolidated ethical traditions. Neo-Aristotelian ethics, backed by a distinctive philosophical anthropology and worldview, is summoned to fill this gap as a standard against which to test these two opposing claims. It provides a substantive account of moral agency through the theory of voluntary action; it explains how voluntary action is tied to intelligent and autonomous human life; and it distinguishes machine operations from voluntary actions through the categories of poiesis and praxis, respectively. This standpoint reveals that while Van Wynsberghe and Robbins may be right in rejecting the need for AMAs, there are deeper, more fundamental reasons for doing so. In addition, despite disagreeing with Formosa and Ryan’s defense of AMAs, their call for a more nuanced and context-dependent approach, similar to neo-Aristotelian practical wisdom, proves expedient.

Author(s):  
John P. Sullins

This chapter will argue that artificial agents created or synthesized by technologies such as artificial life (ALife), artificial intelligence (AI), and robotics present unique challenges to the traditional notion of moral agency, and that any successful technoethics must seriously consider that these artificial agents may indeed be artificial moral agents (AMAs), worthy of moral concern. This purpose will be realized by briefly describing a taxonomy of the artificial agents that these technologies are capable of producing. I will then describe how these artificial entities conflict with our standard notions of moral agency. I argue that traditional notions of moral agency are too strict even in the case of recognizably human agents, and then expand the notion of moral agency so that it can sensibly include artificial agents.


AI & Society ◽  
2021 ◽  
Author(s):  
Jeffrey White

Abstract Ryan Tonkens (2009) has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe—"rational" and "free"—while also satisfying the perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly, and not merely reliably, ethical. This series of papers meets the challenge by landscaping traditional moral theory in resolution of a comprehensive account of moral agency. The first paper established the challenge and set out autonomy in Aristotelian terms. The present paper interprets Kantian moral theory on the basis of the preceding introduction, argues contra Tonkens that an engineer does not violate the categorical imperative in creating Kantian AMAs, and proposes that a Kantian AMA is not only a possible goal for machine ethics research, but a necessary one.


2021 ◽  
Vol 1 (1) ◽  
pp. 44-49
Author(s):  
Agnieszka Lekka-Kowalik

Abstract AIs’ presence in and influence on human life is growing. AIs are increasingly seen as autonomously acting agents, which creates a challenge to build ethics into their design. This paper defends the thesis that we need to equip AIs with an artificial conscience to make them capable of wise judgements. The argument is built in three steps. First, the concept of decision is presented; second, the Asilomar Principles for AI development are analysed. It is then shown that, to meet those principles, AI needs the capability of passing moral judgements on right and wrong, of following those judgements, and of passing a meta-judgement on the correctness of a given moral judgement, which is the role of conscience. In classical philosophy, the ability to discover right and wrong and to stick to one's judgement of what is the right action in given circumstances is called practical wisdom. The conclusion is that we should equip AI with artificial wisdom. Some problems stemming from ascribing moral agency to AIs are also indicated.


Author(s):  
Philip J. Ivanhoe

This chapter elaborates on the connections between oneness, moral agency, and spontaneity by distinguishing between two general kinds of spontaneity: untutored spontaneity, which is characteristic of traditions such as Daoism, and cultivated spontaneity, representative of traditions such as Confucianism. This discussion intersects with oneness on the matter of “metaphysical comfort,” the sense of oneness, harmony, and happiness that one experiences when acting or reacting spontaneously, on either the untutored or cultivated model. Daoists argued quite plausibly that this experience goes hand in hand with certain kinds of untutored spontaneity, but an important objective of the chapter is to show that even cultivated spontaneity can provide the same comfort. The chapter makes the case that both forms of spontaneity are familiar, though largely unrecognized, in all forms of human life and that the descriptions provided, inspired by early Chinese philosophy, offer important theoretical resources for philosophy today.


2021 ◽  
pp. 1-26
Author(s):  
Alan D. Morrison ◽  
Rita Mota ◽  
William J. Wilhelm

We present a second-personal account of corporate moral agency. This approach is in contrast to the first-personal approach adopted in much of the existing literature, which concentrates on the corporation’s ability to identify moral reasons for itself. Our account treats relationships and communications as the fundamental building blocks of moral agency. The second-personal account rests on a framework developed by Darwall. Its central requirement is that corporations be capable of recognizing the authority relations that they have with other moral agents. We discuss the relevance of corporate affect, corporate communications, and corporate culture to the second-personal account. The second-personal account yields a new way to specify first-personal criteria for moral agency, and it generates fresh insights into the reasons those criteria matter. In addition, a second-personal analysis implies that moral agency is partly a matter of policy, and it provides a fresh perspective on corporate punishment.


Rhetorik ◽  
2018 ◽  
Vol 37 (1) ◽  
pp. 68-93
Author(s):  
Markus H. Woerner ◽  
Ricca Edmondson

Abstract Using an understanding of rhetoric as a method of communicative reasoning capable of providing grounds for conviction in those to whom it is addressed, this article argues that the formation of medical diagnoses shares a structure with Aristotle’s account of the rhetorical syllogism (the enthymeme). Here the argument itself (logos), together with characterological elements (ethos) and emotions (pathos), are welded together so that each affects the operation of the others. In the initial three sections of the paper, we contend, first, that diagnoses, as verdictive performatives, differ from scientific claims in being irreducibly personal and context-dependent; secondly, that they fit the structure of voluntary action as analysed by Aristotle and Aquinas; thirdly, that as practical syllogisms they differ from theoretical syllogisms, for example in taking effect in action, being ›addressed‹, and being intrinsically embedded in wider contexts of medical communication and practices. In the remaining sections we apply this account to textual evidence about diagnosis, drawing on work by the brain surgeon Henry Marsh. A rhetorical analysis of his observations on the formation of diagnostic opinions in situ illuminates how moral, social and emotional features are fused with the cognitive aspects of medical judgement, making or marring how diagnoses and treatment are enacted. In other words, a philosophical-rhetorical account of diagnosis can help us to appreciate how medical diagnosis takes effect. We briefly conclude with some implications of our work for how diagnostic processes could in practice be better supported.


2018 ◽  
Vol 9 (1) ◽  
pp. 44-61
Author(s):  
André Schmiljun

With the development of autonomous robots, one day probably capable of speaking, thinking, learning, self-reflecting, and sharing emotions (in short, with the rise of robots becoming artificial moral agents, AMAs), robot scientists such as Abney, Veruggio, and Petersen are already optimistic that sooner or later we will need to call such robots “people”, or rather “Artificial People” (AP). This paper rejects that forecast, on the grounds that the argument rests on three conflicting metaphysical assumptions. The first is the idea that it is possible to define persons precisely and to apply that definition to robots, or to use it to differentiate human beings from robots. Further, the argument for APs presupposes non-reductive physicalism (the second assumption) and materialism (the third), ultimately producing strange convictions about future robotics. I therefore suggest following Christine Korsgaard’s defence of animals as ends in themselves with moral standing. I show that her argument can be extended to robots as well, at least to robots capable of pursuing their own good (even if they are not rational). Korsgaard’s interpretation of Kant offers an option that allows us to leave complicated metaphysical notions such as “person” or “subject” out of the debate, without denying robots’ status as agents.


2018 ◽  
Vol 5 (1) ◽  
Author(s):  
Laura D'Olimpio ◽  
Andrew Peterson

Following neo-Aristotelians Alasdair MacIntyre and Martha Nussbaum, we claim that humans are story-telling animals who learn from the stories of diverse others. Moral agents use rational emotions, such as compassion, which is our focus here, to imaginatively reconstruct others’ thoughts, feelings and goals. In turn, this imaginative reconstruction plays a crucial role in deliberating and discerning how to act. A body of literature has developed in support of the role narrative artworks (i.e. novels and films) can play in allowing us the opportunity to engage imaginatively and sympathetically with diverse characters and scenarios in a safe protected space that is created by the fictional world. By practising what Nussbaum calls a ‘loving attitude’, her version of ethical attention, we can form virtuous habits that lead to phronesis (practical wisdom). In this paper, and taking compassion as an illustrative focus, we examine the ways that students’ moral education might usefully develop from engaging with narrative artworks through Philosophy for Children (P4C), where philosophy is a praxis, conducted in a classroom setting using a Community of Inquiry (CoI). We argue that narrative artworks provide useful stimulus material to engage students, generate student questions, and motivate philosophical dialogue and the formation of good habits, which, in turn, supports the argument for philosophy to be taught in schools.

