Perspectives about artificial moral agents

AI and Ethics ◽  
2021 ◽  
Author(s):  
Andreia Martinho ◽  
Adam Poulsen ◽  
Maarten Kroesen ◽  
Caspar Chorus

Abstract: The pursuit of artificial moral agents (AMAs) is complicated. Disputes about the development, design, moral agency, and future projections for these systems have been reported in the literature. This empirical study explores these controversial matters by surveying (AI) Ethics scholars with the aim of establishing a more coherent and informed debate. Using Q-methodology, we show the wide breadth of viewpoints and approaches to artificial morality. Five main perspectives about AMAs emerged from our data and were subsequently interpreted and discussed: (i) Machine Ethics: The Way Forward; (ii) Ethical Verification: Safe and Sufficient; (iii) Morally Uncertain Machines: Human Values to Avoid Moral Dystopia; (iv) Human Exceptionalism: Machines Cannot Moralize; and (v) Machine Objectivism: Machines as Superior Moral Agents. A potential source of these differing perspectives is the failure of Machine Ethics to be widely observed or explored as an applied ethic and more than a futuristic end. Our study helps improve the foundations for an informed debate about AMAs, where contrasting views and agreements are disclosed and appreciated. Such debate is crucial to realize an interdisciplinary approach to artificial morality, which allows us to gain insights into morality while also engaging practitioners.
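For readers unfamiliar with the method, the sketch below illustrates Q-methodology's defining computation in Python: correlations are taken between participants' Q-sorts rather than between items, and the retained factors correspond to shared viewpoints, five in this study. The data, study size, and factor-retention rule are invented assumptions for illustration, not the authors' actual analysis.

```python
# Minimal sketch of Q-methodology's core computation: factor analysis of
# the correlations BETWEEN PARTICIPANTS (not between statements), so each
# retained factor represents a shared viewpoint. All data are invented.
import numpy as np

rng = np.random.default_rng(0)
n_statements, n_participants = 30, 50          # hypothetical study size

# Each column is one participant's Q-sort: a rank ordering of statements.
sorts = np.argsort(rng.random((n_statements, n_participants)), axis=0)

# Person-by-person correlation matrix (Q-methodology's defining move).
corr = np.corrcoef(sorts.T)                    # shape (50, 50)

# Principal-component extraction via the eigendecomposition of corr.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]              # sort factors by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Retain factors with eigenvalue > 1 (Kaiser criterion), capped at five
# to mirror the five perspectives reported in the abstract.
k = min(5, int((eigvals > 1.0).sum()))
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])   # participant loadings

print(f"{k} factors retained; loadings matrix shape: {loadings.shape}")
```

In an actual Q study the retained factors are rotated and participants loading cleanly on each factor are used to reconstruct an idealized Q-sort per perspective; only the person-by-person correlation step above is distinctive to the method.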

AI & Society ◽  
2021 ◽  
Author(s):  
Jeffrey White

Abstract: Ryan Tonkens (2009) has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe—"rational" and "free"—while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly, and not merely reliably, ethical. This series of papers meets this challenge by landscaping traditional moral theory in resolution of a comprehensive account of moral agency. The first paper established the challenge and set out autonomy in Aristotelian terms. The present paper interprets Kantian moral theory on the basis of the preceding introduction, argues contra Tonkens that an engineer does not violate the categorical imperative in creating Kantian AMAs, and proposes that a Kantian AMA is not only a possible goal for machine ethics research, but a necessary one.


2021 ◽  
pp. 1-26
Author(s):  
Alan D. Morrison ◽  
Rita Mota ◽  
William J. Wilhelm

We present a second-personal account of corporate moral agency. This approach is in contrast to the first-personal approach adopted in much of the existing literature, which concentrates on the corporation’s ability to identify moral reasons for itself. Our account treats relationships and communications as the fundamental building blocks of moral agency. The second-personal account rests on a framework developed by Darwall. Its central requirement is that corporations be capable of recognizing the authority relations that they have with other moral agents. We discuss the relevance of corporate affect, corporate communications, and corporate culture to the second-personal account. The second-personal account yields a new way to specify first-personal criteria for moral agency, and it generates fresh insights into the reasons those criteria matter. In addition, a second-personal analysis implies that moral agency is partly a matter of policy, and it provides a fresh perspective on corporate punishment.


Author(s):  
Vinit Haksar

Moral agents are those agents expected to meet the demands of morality. Not all agents are moral agents. Young children and animals, being capable of performing actions, may be agents in the way that stones, plants and cars are not. But though they are agents they are not automatically considered moral agents. For a moral agent must also be capable of conforming to at least some of the demands of morality.

This requirement can be interpreted in different ways. On the weakest interpretation it will suffice if the agent has the capacity to conform to some of the external requirements of morality. So if certain agents can obey moral laws such as ‘Murder is wrong’ or ‘Stealing is wrong’, then they are moral agents, even if they respond only to prudential reasons such as fear of punishment and even if they are incapable of acting for the sake of moral considerations. According to the strong version, the Kantian version, it is also essential that the agents should have the capacity to rise above their feelings and passions and act for the sake of the moral law. There is also a position in between which claims that it will suffice if the agent can perform the relevant act out of altruistic impulses.

Other suggested conditions of moral agency are that agents should have: an enduring self with free will and an inner life; understanding of the relevant facts as well as moral understanding; and moral sentiments, such as capacity for remorse and concern for others. Philosophers often disagree about which of these and other conditions are vital; the term ‘moral agency’ is used with different degrees of stringency depending upon what one regards as its qualifying conditions. The Kantian sense is the most stringent. Since there are different senses of moral agency, answers to questions like ‘Are collectives moral agents?’ depend upon which sense is being used. From the Kantian standpoint, agents such as psychopaths, rational egoists, collectives and robots are at best only quasi-moral, for they do not fulfil some of the essential conditions of moral agency.


Author(s):  
Hanna Meretoja

Chapter 4 tests hermeneutic narrative ethics as a lens for analyzing the (ab)uses of narrative for life in Julia Franck’s Die Mittagsfrau (2007, The Blind Side of the Heart), exploring how narrative practices expand and diminish the space of possibilities in which moral agents act and suffer. It demonstrates how narrative “in-betweens” bind people together through dialogic narrative imagination, but can also promote exclusion that amounts to annihilation. It addresses the necessity of storytelling for survival, and a transgenerational culture of silence that leads to the repetition of harmful emotional-behavioral patterns. It explores the continuum from being able to tell one’s own stories to having narrative identities violently imposed, and suggests that moral agency requires a minimum narrative sense of oneself as a being worthy and capable of goodness. The chapter argues that the ethical evaluation of narrative practices must be contextual—sensitive to how they function in particular sociohistorical worlds.


2020 ◽  
Vol 31 (2) ◽  
pp. 74-87 ◽  
Author(s):  
Keng Siau ◽  
Weiyu Wang

Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth and social development, as well as for human well-being and safety. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with it. Even though the concept of “machine ethics” was proposed around 2006, AI ethics is still in its infancy. AI ethics is the field that studies ethical issues in AI. To address AI ethics, one needs to consider both the ethics of AI and how to build ethical AI. The ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI in order to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., the ethics of AI). With the appropriate ethics of AI in place, one can then build AI that exhibits ethical behavior (i.e., ethical AI). This paper discusses AI ethics by looking at both the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve, or at least attenuate, these ethical and moral issues? What are some of the necessary features and characteristics of an ethical AI? And how does one adhere to the ethics of AI in order to build ethical AI?


2015 ◽  
Vol 1 (1) ◽  
pp. 5-20 ◽  
Author(s):  
Patricia H WERHANE

Abstract: In 2011 the United Nations (UN) published the ‘Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect, and Remedy” Framework’ (Guiding Principles). The Guiding Principles specify that for-profit corporations have responsibilities to respect human rights. Do these responsibilities entail that corporations, too, have basic rights? The contention that corporations are moral persons is problematic because it confers on an organization a moral status similar to that conferred on a human agent. I shall argue that corporations are not moral persons. But as collective bodies created, operated, and perpetuated by individual human moral agents, one can ascribe to corporations secondary moral agency as organizations. This ascription, I conclude, makes sense of the normative business responsibilities outlined in the Guiding Principles without committing one to the view that corporations are full moral persons.


Conatus ◽  
2020 ◽  
Vol 5 (1) ◽  
pp. 27
Author(s):  
Gerard Elfstrom

Adam and Eve’s theft marks the beginning of the human career as moral agents. This article will examine the assumptions underlying the notion of moral agency from the perspective of three unremarkable human beings who found themselves in situations of moral difficulty. It will conclude that these three people could not have acted differently than they did, and that it is therefore unreasonable to assume that ordinary human beings will inevitably possess the resources to address difficult moral decisions.


Author(s):  
Silviya Serafimova

Abstract: The moral implications of algorithm-based decision-making require special attention within the field of machine ethics. Specifically, this research focuses on clarifying why, even if one assumes the existence of well-working ethical intelligent agents in epistemic terms, it does not necessarily follow that they meet the requirements of autonomous moral agents such as human beings. To exemplify some of the difficulties in arguing for implicit and explicit ethical agents in Moor’s sense, three first-order normative theories in the field of machine ethics are put to the test: Powers’ prospect for a Kantian machine, Anderson and Anderson’s reinterpretation of act utilitarianism, and Howard and Muntean’s prospect for a moral machine based on a virtue-ethical approach. By comparing and contrasting the three first-order normative theories, and by clarifying the gist of the differences between the processes of calculation and moral estimation, the possibility of building what one might call strong “moral” AI scenarios is questioned. The possibility of weak “moral” AI scenarios is likewise discussed critically.
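To make concrete the kind of calculation such first-order theories delegate to a machine, here is a minimal sketch in Python of an act-utilitarian decision procedure in the spirit of Anderson and Anderson's approach. The care-robot scenario, actions, affected parties, and utility numbers are invented for illustration; this is a sketch of the general technique, not the authors' implementation.

```python
# Minimal sketch of an act-utilitarian decision procedure of the kind
# Anderson and Anderson formalize for machine ethics: choose the action
# that maximizes total expected utility across everyone affected.
# The care-robot scenario and all utility values below are invented.
from dataclasses import dataclass

@dataclass
class Outcome:
    party: str          # who is affected
    utility: float      # signed benefit (+) or harm (-) to this party
    probability: float  # chance the outcome occurs if the action is taken

def expected_utility(outcomes: list[Outcome]) -> float:
    """Sum of probability-weighted utilities over all affected parties."""
    return sum(o.utility * o.probability for o in outcomes)

# Hypothetical choice faced by a medication-reminder robot.
actions = {
    "remind_patient_again": [
        Outcome("patient", -1.0, 0.9),    # mild annoyance
        Outcome("patient", +8.0, 0.6),    # medication taken on time
    ],
    "notify_caregiver": [
        Outcome("patient", -4.0, 0.8),    # loss of autonomy and privacy
        Outcome("patient", +8.0, 0.95),   # medication almost surely taken
        Outcome("caregiver", -2.0, 1.0),  # burden of intervening
    ],
}

for name, outs in actions.items():
    print(f"{name}: expected utility = {expected_utility(outs):+.2f}")
best = max(actions, key=lambda a: expected_utility(actions[a]))
print("chosen action:", best)
```

Even in this toy form, the gap the abstract highlights is visible: the procedure calculates over given utilities, but nothing in it performs the moral estimation of whether those options, parties, and utility assignments were the right ones to consider.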


Author(s):  
John P. Sullins

This chapter will argue that artificial agents created or synthesized by technologies such as artificial life (ALife), artificial intelligence (AI), and robotics present unique challenges to the traditional notion of moral agency, and that any successful technoethics must seriously consider that these artificial agents may indeed be artificial moral agents (AMAs), worthy of moral concern. This purpose will be realized by briefly describing a taxonomy of the artificial agents that these technologies are capable of producing. I will then describe how these artificial entities conflict with our standard notions of moral agency. I argue that traditional notions of moral agency are too strict even in the case of recognizably human agents, and then expand the notion of moral agency such that it can sensibly include artificial agents.

