How Should Autonomous Vehicles Make Moral Decisions? Machine Ethics, Artificial Driving Intelligence, and Crash Algorithms

2019 ◽ 
Vol 11 (1) ◽ 
pp. 9


2019 ◽ 
Vol 9 (1)
Author(s):  
Darius-Aurel Frank ◽  
Polymeros Chrysochou ◽  
Panagiotis Mitkidis ◽  
Dan Ariely

Abstract The development of artificial intelligence has led researchers to study the ethical principles that should guide machine behavior. The challenge in building machine morality based on people's moral decisions, however, is accounting for the biases in human moral decision-making. In seven studies, this paper investigates how people's personal perspectives and decision-making modes affect their decisions in the moral dilemmas faced by autonomous vehicles. Moreover, it determines the variations in people's moral decisions that can be attributed to the situational factors of the dilemmas. The reported studies demonstrate that people's moral decisions, regardless of the presented dilemma, are biased by their decision-making mode and personal perspective. When deciding intuitively, participants shift more towards a deontological doctrine, sacrificing the passenger instead of the pedestrian. In addition, once a personal perspective is made salient, participants preserve the lives associated with that perspective; i.e., those taking the passenger's perspective shift towards sacrificing the pedestrian, and vice versa. These biases in people's moral decisions underline the social challenge in designing a universal moral code for autonomous vehicles. We discuss the implications of our findings and provide directions for future research.
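As a purely illustrative aside, the notion of a crash algorithm above can be made concrete with a minimal Python sketch that hard-codes the two doctrines discussed in the abstract. The function name, its signature, and the tie-breaking rule are hypothetical assumptions for illustration, not material from the study.

    # Illustrative sketch only: a crash algorithm hard-coding one moral doctrine.
    # All names and rules here are hypothetical, not taken from the cited study.

    def choose_sacrifice(passengers: int, pedestrians: int, doctrine: str) -> str:
        """Return the party a vehicle following the given doctrine would sacrifice."""
        if doctrine == "utilitarian":
            # Minimize total lives lost; note that even the tie-break is a moral choice.
            return "passengers" if passengers <= pedestrians else "pedestrians"
        if doctrine == "deontological":
            # Never actively redirect harm onto the pedestrian.
            return "passengers"
        raise ValueError(f"unknown doctrine: {doctrine}")

    print(choose_sacrifice(passengers=1, pedestrians=3, doctrine="utilitarian"))  # passengers

The perspective bias reported above amounts to different respondents endorsing different doctrines, or different tie-breaks, depending on the seat they imagine occupying; fixing any one such rule in software is what makes a universal moral code socially contentious.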


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261673
Author(s):  
Maike M. Mayer ◽  
Raoul Bell ◽  
Axel Buchner

Once autonomous vehicles are introduced into daily traffic, it becomes increasingly likely that they will be involved in accident scenarios in which decisions have to be made about how to distribute harm among the involved parties. In four experiments, participants made moral decisions from the perspective of a passenger, a pedestrian, or an observer. The results show that the preferred action of an autonomous vehicle strongly depends on perspective: participants' judgments reflect self-protective tendencies even when utilitarian motives clearly favor one of the available options. However, with an increasing number of lives at stake, utilitarian preferences increased. In a fifth experiment, we tested whether these results were tainted by social desirability, but this was not the case. Overall, the results confirm that passengers, pedestrians, and observers differ strongly about the preferred course of action in critical incidents. It is therefore important that the actions of autonomous vehicles are not oriented solely towards the needs of their passengers but also take the interests of other road users into account. Even though utilitarian motives cannot fully reconcile the conflicting interests of passengers and pedestrians, there seem to be some moral preferences that a majority of participants agree upon regardless of their perspective, including the utilitarian preference to save several other lives over one's own.


AI and Ethics ◽  
2021 ◽  
Author(s):  
Christian Herzog né Hoffmann

Abstract In this article, I will advocate caution against a formalization of ethics by showing that it may produce and perpetuate unjustified power imbalances, disadvantaging those without a proper command of the formalisms and those not in a position to decide on the formalisms' use. My focus rests mostly on ethics formalized for the purpose of implementing ethical evaluations in computer science, artificial intelligence in particular, but partly also extends to the project of applying mathematical rigor to moral argumentation with no direct intention to automate moral deliberation. Formal ethics of the latter kind can, however, also be seen as a facilitator of automated ethical evaluation. I will argue that either form of formal ethics presents an obstacle to inclusive and fair processes for arriving at a society-wide moral consensus. This impediment to inclusive moral deliberation may prevent a significant portion of society from acquiring a deeper understanding of moral issues. However, I will defend the view that such understanding supports genuine and sustained moral progress. From this, it follows that formal ethics is not per se supportive of moral progress. I will illustrate these arguments with practical examples of manifestly asymmetric power relationships, drawn primarily from the domain of autonomous vehicles as well as from more visionary concepts such as artificial moral advisors. As a result, I will show that in these particular proposed use-cases of formal ethics, machine ethics risks running contrary to its proponents' proclaimed promises of increasing the rigor of moral deliberation and even improving human morality as a whole. Instead, I will propose that inclusive discourse about automating ethical evaluations, e.g., in autonomous vehicles, should be conducted with unrelenting transparency about the limitations of implementations of ethics. As an outlook, I will briefly discuss uses of formal ethics that are more likely to avoid discrepancies between the ideal of inclusion and the challenge from power asymmetries.
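To make concrete what a formalization of ethics can amount to in code, consider the following minimal Python sketch of a rule-based ethical evaluator of the kind the article cautions against. The rule set and its encoding are hypothetical assumptions made for illustration, not a proposal from the article.

    # Hypothetical sketch of a "formalized ethics" evaluator; the rule set and
    # its encoding are illustrative assumptions, not a proposal from the article.

    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_person: bool
        breaks_promise: bool

    def permissible(action: Action) -> bool:
        # Whoever writes this one line fixes the moral outcome for everyone
        # affected downstream: the power asymmetry the article warns about.
        return not (action.harms_person or action.breaks_promise)

    print(permissible(Action("swerve onto the sidewalk", harms_person=True, breaks_promise=False)))  # False

Anyone without command of such a formalism can neither see nor contest where these lines were drawn, which is precisely the exclusion from moral deliberation at issue.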


Author(s):  
Joseph G. Walters ◽  
Xiaolin Meng ◽  
Chang Xu ◽  
Hao (Julia) Jing ◽  
Stuart Marsh