Introduction: Moral Machines and Political Animals

2020, pp. 1-18
2020, pp. 243-252
Author(s):  
Ben van Lier

Technology is driving major systemic change in the global financial sector, which has already developed into a comprehensive network of interconnected people and computers. Algorithms play a crucial role within this network. An algorithm is, in essence, a set of instructions developed by one or more people with the intention of having a machine such as a computer perform them to achieve an intended result. As we place ever higher expectations on algorithms, and as these algorithms become more autonomous in their actions, we cannot avoid equipping them with the capacity for ethical or moral consideration. Developing that capacity requires an ethical framework that can guide the construction of such algorithms. With such a framework in place, we can begin to think about what we as human beings consider a moral action when it is executed by algorithms that support the actions and decisions of interconnected, self-organizing machines. This chapter explores an ethical framework for interconnected and self-organizing moral machines.


Author(s):  
Wendell Wallach ◽  
Shannon Vallor

Implementing sensitivity to norms, laws, and human values in computational systems has transitioned from philosophical reflection to an actual engineering challenge. The “value alignment” approach to dealing with superintelligent AIs tends to employ computationally friendly concepts such as utility functions, system goals, agent preferences, and value optimizers, which, this chapter argues, do not have intrinsic ethical significance. This chapter considers what may be lost in the excision of intrinsically ethical concepts from the project of engineering moral machines. It argues that human-level AI and superintelligent systems can be assured to be safe and beneficial only if they embody something like virtue or moral character and that virtue embodiment is a more appropriate long-term goal for AI safety research than value alignment.


Author(s):  
Martin Cunneen

In this paper, I make two controversial claims: first, that autonomous vehicles are de facto moral machines, because their decision architecture is built on necessary risk quantification; and second, that in being so they are inadequate moral machines. Moreover, this moral inadequacy presents significant risks to society. The paper engages with key concepts in the autonomous vehicle decisionality literature to reframe the moral machine problem for autonomous vehicles. This reframing is defended as a necessary step toward the meta-questions that underlie autonomous vehicles as machines making high-value decisions regarding human welfare and life.

