Algorithm for Ethical Decision Making at Times of Accidents for Autonomous Vehicles

Author(s):  
Md. Azharul Islam ◽  
Shawkh Ibne Rashid

Author(s):  
Gustavo E. Juarez ◽  
Marta Yelamos Caceres ◽  
Franco D. Menendez ◽  
Cristian Lafuente ◽  
Leonardo Franco ◽  
...  

Author(s):  
Nelson De Moura ◽  
Raja Chatila ◽  
Katherine Evans ◽  
Stephane Chauvier ◽  
Ebru Dogan

2020 ◽  
pp. 1-24
Author(s):  
Tobey K. Scharding

This article addresses a dilemma about autonomous vehicles: how to respond to trade-off scenarios in which every possible response involves the loss of life, but there is a choice about whose life or lives are lost. I consider four options: kill fewer people, protect passengers, show equal concern for survival, and recognize everyone’s interests. I solve this dilemma via what I call the new trolley problem, which seeks a rationale for the intuition that it is unethical to kill a smaller number of people to avoid killing a greater number of people based on numbers alone. I argue that such killing is unethical because it disrespects the humanity of the individuals in the smaller-numbered group. I defend the recognize-everyone’s-interests algorithm, which will probably kill fewer people but will not do so based on numbers alone.
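
The contrast between the two leading policies can be made concrete in code. The sketch below is a hypothetical illustration, not Scharding’s formal specification: the function names and the reading of “recognize everyone’s interests” as a survival lottery weighted by group size (so the larger group is probably, but not automatically, spared) are assumptions introduced here.

    import random

    def kill_fewer(group_a, group_b):
        # Deterministic: spare whichever group is larger -- numbers alone decide.
        return group_a if len(group_a) >= len(group_b) else group_b

    def recognize_everyones_interests(group_a, group_b):
        # Weighted lottery (assumed reading): every individual's survival
        # interest carries equal weight, so the larger group is more likely
        # to be spared, yet the smaller group is never written off on
        # numbers alone.
        total = len(group_a) + len(group_b)
        return group_a if random.random() < len(group_a) / total else group_b

On this reading, the lottery respects the humanity of the smaller-numbered group by giving each of its members a genuine chance of survival, while still sparing more people in expectation.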


2020 ◽  
pp. 089443932090650
Author(s):  
Hubert Etienne

This article discusses the dangers of the Moral Machine (MM) experiment, warning against both its use for normative ends and the whole approach on which it is built to address ethical issues. It explores methodological limits of the experiment beyond those already identified by its authors; exhibits the dangers that computational moral systems pose for modern democracies, such as the “voting-based system” recently developed from the MM’s data; and provides reasons why ethical decision-making fundamentally excludes computational social choice methods.
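
For context, the kind of system Etienne criticizes can be sketched in a few lines. The “voting-based system” actually built from MM data is a far more elaborate computational social choice model; the plurality rule below is only a toy stand-in, and the ballot format is assumed.

    from collections import Counter

    def voting_based_choice(ballots):
        # Plurality rule: the outcome most respondents chose wins the dilemma.
        # `ballots` is a list with one chosen outcome per respondent.
        winner, _ = Counter(ballots).most_common(1)[0]
        return winner

    # Three of five respondents preferred swerving, so the system swerves.
    print(voting_based_choice(["swerve", "swerve", "stay", "swerve", "stay"]))

Etienne’s point is precisely that no such aggregation of crowd judgments, however sophisticated, can by itself settle what an autonomous vehicle ought to do.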


2020 ◽  
Vol 26 (6) ◽  
pp. 3285-3312
Author(s):  
Katherine Evans ◽  
Nelson de Moura ◽  
Stéphane Chauvier ◽  
Raja Chatila ◽  
Ebru Dogan

The ethics of autonomous vehicles (AV) has received a great deal of attention in recent years, specifically with regard to their decisional policies in accident situations in which human harm is a likely consequence. Starting from the assumption that human harm is unavoidable, many authors have developed differing accounts of what morality requires in these situations. In this article, a strategy for AV decision-making is proposed, the Ethical Valence Theory, which paints AV decision-making as a type of claim mitigation: different road users hold different moral claims on the vehicle’s behavior, and the vehicle must mitigate these claims as it makes decisions about its environment. Using the context of autonomous vehicles, the harm produced by an action and the uncertainties connected to it are quantified and accounted for through deliberation, resulting in an ethical implementation coherent with reality. The goal of this approach is not to define how moral theory requires vehicles to behave, but rather to provide a computational approach flexible enough to accommodate a number of ‘moral positions’ concerning what morality demands and what road users may expect, offering an evaluation tool for the social acceptability of an autonomous vehicle’s ethical decision making.
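
As a rough quantitative intuition for claim mitigation, one can imagine each candidate maneuver scored by claim-weighted expected harm, with the vehicle choosing the maneuver that best mitigates the aggregate claims. The sketch below is an assumption-laden toy, not the Ethical Valence Theory’s actual decision rules: the claim weights, the harm probabilities and severities, and the minimize-expected-violation rule are all invented here for illustration.

    def expected_claim_violation(maneuver, road_users):
        # Claim-weighted expected harm of one candidate maneuver: each road
        # user holds a moral `claim` on the vehicle's behavior, discounted by
        # how likely and how severe harm to them would be under the maneuver.
        return sum(u["claim"] * u["p_harm"][maneuver] * u["severity"][maneuver]
                   for u in road_users)

    def choose_maneuver(maneuvers, road_users):
        # Pick the maneuver that leaves the aggregate claims least violated.
        return min(maneuvers,
                   key=lambda m: expected_claim_violation(m, road_users))

    road_users = [
        {"claim": 1.0,                      # hypothetical pedestrian
         "p_harm": {"brake": 0.1, "swerve": 0.6},
         "severity": {"brake": 0.3, "swerve": 0.9}},
        {"claim": 0.7,                      # hypothetical passenger
         "p_harm": {"brake": 0.4, "swerve": 0.1},
         "severity": {"brake": 0.2, "swerve": 0.2}},
    ]
    print(choose_maneuver(["brake", "swerve"], road_users))  # -> brake

Swapping the weights or the aggregation rule yields different ‘moral positions’, which is exactly the flexibility the authors aim to accommodate.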


2015 ◽  
Vol 3 (4) ◽  
pp. 359-364 ◽  
Author(s):  
Karin L. Price ◽  
Margaret E. Lee ◽  
Gia A. Washington ◽  
Mary L. Brandt
