Moral Machines
Recently Published Documents

TOTAL DOCUMENTS: 42 (five years: 26)
H-INDEX: 3 (five years: 1)

AI & Society, 2021
Author(s):  
Jakob Stenseke

Abstract: Virtue ethics has often been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has in practice attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we critically explore the possibilities and challenges for virtue ethics from a computational perspective. Drawing on previous conceptual and technical work, we outline a version of artificial virtue based on moral functionalism, connectionist bottom-up learning, and eudaimonic reward. We then describe how core features of the outlined theory can be interpreted in terms of functionality, which in turn informs the design of components necessary for virtuous cognition. Finally, we present a comprehensive framework for the technical development of artificial virtuous agents and discuss how they can be implemented in moral environments.


Conatus, 2021, Vol 6 (1), pp. 177
Author(s):  
Michael Anderson,
Susan Leigh Anderson,
Alkis Gounaris,
George Kosteletos

At the turn of the 21st century, Susan Leigh Anderson and Michael Anderson conceived and introduced the Machine Ethics research program, which aimed to specify the requirements under which autonomous artificial intelligence (AI) systems could demonstrate ethical behavior guided by moral values, and at the same time to show that these values, and ethics in general, can be represented and computed. Today, interaction between humans and AI entities is already part of our everyday lives; in the near future it is expected to play a key role in scientific research, medical practice, public administration, education, and other fields of civic life. In view of this, the debate over the ethical behavior of machines is more crucial than ever, and the search for answers, directions, and regulations is imperative at the academic, institutional, and technical levels. Our discussion with the two originators of Machine Ethics highlights the epistemological, metaphysical, and ethical questions arising from this project, as well as the realistic and pragmatic demands that dominate artificial intelligence and robotics research programs. Most of all, however, it sheds light on the contribution of Susan and Michael Anderson to the introduction and pursuit of a central objective: the creation of ethical autonomous agents that are based neither on the “imperfect” patterns of human behavior nor on preloaded hierarchical laws and human-centric values.


2021, pp. 290–305
Author(s):  
David R. Lawrence,
John Harris

Debates over moral machines are often guilty of making broad assumptions about the nature of future autonomous entities, and frequently bypass the distinction between ‘agents’ and ‘actors’ to the detriment of their conclusions. The scope and limits of moral status are fundamentally linked to this distinction. We position non-Homo sapiens great apes as members of a particular moral-status clade, whose members are treated in a fashion similar to that proposed for so-called ‘moral machines’. The principles by which we ultimately decide to treat great apes, and whether or not we decide to act upon our responsibilities to them as moral agents, are likely to be the same principles we will use to decide our responsibilities to moral AI in the future.


2021, pp. 203–216
Author(s):  
Nicholas G. Evans

While the majority of neuroscience research promises novel therapies for conditions such as dementia and post-traumatic stress disorder, a lesser-known branch of neuroscientific research informs the construction of artificial intelligence inspired by human neurophysiology. For those concerned with the normative implications of autonomous weapons systems (AWS), however, a tension arises between the primary attraction of AWS (their theoretical capacity to make better decisions in armed conflict) and the relatively low-hanging fruit of modeling machine intelligence on the very thing that causes humans to make relatively bad decisions: the human brain. This chapter examines human cognition as a model for machine intelligence and some of its implications for AWS development. It first outlines recent developments in neuroscience as drivers of advances in artificial intelligence. It then expands on a key distinction for the ethics of AWS: poor normative decisions that are a function of poor judgments given a certain set of inputs, and poor normative decisions that are a function of poor sets of inputs. It argues that, given that there are cases in the second category in which we judge humans to have acted wrongly, we should likewise judge AWS platforms. Further, while an AWS may in principle outperform humans in the former, it is an open question of design whether it can outperform humans in the latter. Finally, the chapter discusses what this means for the design and control of, and ultimately liability for, AWS behavior, and considers sources of inspiration for the alternative design of AWS platforms.


Author(s):  
Jill Anne Morris

This chapter re-introduces the idea of roller coasters as moral machines and morality mechanisms, as they were designed to rid mankind of immoral entertainment, and traces their ability to spread American culture via themed entertainment from World’s Fairs to Disneyland and beyond. It features an analysis of two Chinese themed rides, one of which was developed with American cultural constructs and one of which begins to develop a new form of Chinese historical theme park. Through these examples, it suggests the potential for themed amusements not just to spread American morality and culture, but also to provide sites of cultural exchange.


Author(s):  
Martin Cunneen

In this paper, I make two controversial claims: first, that autonomous vehicles are de facto moral machines, because their decision architecture is necessarily built on risk quantification; and second, that in being so they are inadequate moral machines. Moreover, this moral inadequacy presents significant risks to society. The paper engages with some of the key concepts in the autonomous vehicle decisionality literature to reframe the moral machine problem for autonomous vehicles. This is defended as a necessary step toward the meta-questions that underlie autonomous vehicles as machines making high-stakes decisions regarding human welfare and life.


Author(s):  
Tomi Kokkonen

There are two connected questions about moral agency and robots: How can we ensure that robots behave in accordance with relevant ethical considerations? Is it possible to have genuinely moral machines? I approach these questions from an evolutionary perspective and argue for the importance of a middle-range perspective on the morality of machines: we should be restricted neither to the present-day perspective of current ethical concerns nor to far-future theoretical issues concerning the possibility of genuine morality. Instead, we should reflect on what it would mean to create protomoral machines. The evolution of human morality may help in this.


Author(s):  
Oliver Bendel

The discipline of machine ethics examines, designs, and produces moral machines. Artificial morality is usually pre-programmed by a manufacturer or developer. However, another, more flexible approach is the morality menu (MOME), with which owners or users transfer their own moral preferences to a machine. A team at the FHNW implemented a MOME for MOBO (a chatbot) in 2019/2020. In this article, the author introduces the idea of the MOME, presents the MOBO-MOME project, and discusses the advantages and disadvantages of such an approach. It turns out that a morality menu could be a valuable extension for certain moral machines.

