The Use and Abuse of Trolley Cases

Author(s): Nikil Mukerji

Author(s): Wulf Loh, Janina Loh

In this chapter, we give a brief overview of the traditional notion of responsibility and introduce a concept of distributed responsibility within a responsibility network of engineers, driver, and autonomous driving system. In order to evaluate this concept, we explore the notion of man–machine hybrid systems with regard to self-driving cars and conclude that the unit comprising the car and the operator/driver constitutes such a hybrid system, one that can assume a shared responsibility distinct from the responsibility of other actors in the responsibility network. Discussing certain moral dilemma situations that are structured much like trolley cases, we deduce that as long as there is something like a driver in autonomous cars as part of the hybrid system, she will have to bear the responsibility for making the morally relevant decisions that are not covered by traffic rules.


Taking Life, 2015, pp. 53-78
Author(s): Torbjörn Tännsjö

2012, Vol. 13 (1), pp. 243-256
Author(s): James O’Connor

The hypothetical scenarios generally known as trolley problems have become widespread in recent moral philosophy. They invariably require an agent to choose one of a strictly limited number of options, all of them bad. Although they don’t always involve trolleys or trams, and are used to make a wide variety of points, what makes it justified to speak of a distinctive “trolley method” is the characteristic assumption that the intuitive reactions that all these artificial situations elicit constitute an appropriate guide to real-life moral reasoning. I dispute this assumption by arguing that trolley cases inevitably constrain the supposed rescuers into behaving in ways that clearly deviate from psychologically healthy, and morally defensible, human behavior. Through this focus on a generally overlooked aspect of trolley theorizing – namely, the highly impoverished role invariably allotted to the would-be rescuer in these scenarios – I aim to challenge the complacent twin assumptions of advocates of the trolley method that this approach to moral reasoning has practical value, and is in any case innocuous. Neither assumption is true.


Author(s): Vanessa Schäffner

Abstract: How should driverless vehicles respond to situations of unavoidable personal harm? This paper takes up the case of self-driving cars as a prominent example of algorithmic moral decision-making, an emergent type of morality that is evolving at a high pace in a digitised business world. As its main contribution, it juxtaposes dilemma decision situations relating to ethical crash algorithms for autonomous cars with two edge cases: the case of manually driven cars facing real-life, mundane accidents, on the one hand, and the dilemmatic situation in theoretically constructed trolley cases, on the other. The paper identifies analogies and disanalogies between the three cases with regard to decision makers, decision design, and decision outcomes. The findings are discussed from three perspectives: aspects where analogies could be found, those where the case of self-driving cars turns out to lie in between the two edge cases, and those where it departs entirely from either edge case. As a main result, the paper argues that manual driving and trolley cases are suitable points of reference for the design of ethical crash algorithms only to a limited extent. Instead, a fundamental epistemic and conceptual divergence between dilemma decision situations in the context of self-driving cars and the edge cases used is substantiated. Finally, the areas in which regulation is specifically needed on the road to introducing autonomous cars are pointed out, and related thoughts are sketched through the lens of the humanistic paradigm.


Res Publica, 2021
Author(s): Rob Lawlor

Abstract: In this paper, I will argue that automated vehicles should not swerve to avoid a person or vehicle in their path, unless they can do so without imposing risks on others. I will argue that this is the conclusion we should reach even if we start by assuming that we should divert the trolley in the standard trolley case (in which the trolley will hit and kill five people on the track, unless it is diverted onto a different track, where it will hit and kill just one person). In defence of this claim, I appeal to the distribution of moral and legal responsibilities, highlighting the importance of safe spaces, and arguing in favour of constraints on what can be done to minimise casualties. My arguments draw on the methodology associated with the trolley problem. As such, this paper also defends this methodology, highlighting a number of ways in which authors misunderstand and misrepresent the trolley problem. For example, the ‘trolley problem’ is not the ‘name given by philosophers to classic examples of unavoidable crash scenarios, historically involving runaway trolleys’, as Millar suggests, and trolley cases should not be compared with ‘model building in the (social) sciences’, as Gogoll and Müller suggest. Trolley cases have more in common with lab experiments than with model building, and the problem referred to in the trolley problem is not the problem of deciding what to do in any one case. Rather, it is the problem of explaining what appear to be conflicting intuitions when we consider two cases together. The problem, for example, could be: how do we justify the claim that automated vehicles should not swerve even if we accept the claim that we should divert the trolley in an apparently similar trolley case?
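Read purely as a decision rule, the constraint defended here ("do not swerve unless swerving imposes no risk on others") can be sketched in code. The snippet below is only an illustrative toy under that reading; the `Manoeuvre` class, its fields, the risk numbers, and the restriction to two options are assumptions made for illustration, not anything proposed in the paper.

```python
# Illustrative sketch only: a two-option chooser that treats "impose no new risk
# on others" as a hard constraint on swerving, rather than minimising casualties
# outright. All names and numbers here are hypothetical.
from dataclasses import dataclass


@dataclass
class Manoeuvre:
    name: str
    risk_to_bystanders: float   # chance of harming someone outside the vehicle's current path
    expected_casualties: float  # expected harm to those already in the vehicle's path


def choose_manoeuvre(stay: Manoeuvre, swerve: Manoeuvre) -> Manoeuvre:
    """Swerve only if it imposes no risk on bystanders and actually reduces harm;
    otherwise stay the course, even when swerving would minimise casualties."""
    if swerve.risk_to_bystanders == 0.0 and swerve.expected_casualties < stay.expected_casualties:
        return swerve
    return stay


# Example: swerving would reduce overall casualties but endangers a pedestrian on the pavement.
stay = Manoeuvre("brake in lane", risk_to_bystanders=0.0, expected_casualties=1.0)
swerve = Manoeuvre("swerve onto pavement", risk_to_bystanders=0.4, expected_casualties=0.2)
print(choose_manoeuvre(stay, swerve).name)  # -> "brake in lane"
```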


2016, Vol. 32 (2), pp. 1-17
Author(s): F. M. Kamm

Abstract: This essay considers complications introduced by the Trolley Problem to the discussion of whether and when harming some for the sake of helping others would be unjustified. It first examines Guido Pincione’s arguments for the conclusion that the permissibility of a bystander turning a runaway trolley from killing five people toward killing one other person instead may undermine one moral argument for political libertarianism and against redistributive taxation, namely that we may not harm some people in order to help others to a greater degree. It then considers both the bearing of recent objections to the permissibility of turning the trolley on Pincione’s argument and the soundness of those objections. Finally, the essay considers the relevance of trolley cases for developing a theory of aggression, insofar as aggression is the unjustified use of force that is either foreseen or intended.


Author(s): Claudia Brändle, Michael W. Schmidt

Abstract: In this paper, we argue that solutions to normative challenges associated with autonomous driving, such as real-world trolley cases or distributions of risk in mundane driving situations, face the problem of reasonable pluralism: Reasonable pluralism refers to the fact that there exists a plurality of reasonable yet incompatible comprehensive moral doctrines (religions, philosophies, worldviews) within liberal democracies. The corresponding problem is that a politically acceptable solution cannot refer to only one of these comprehensive doctrines. Yet a politically adequate solution to the normative challenges of autonomous driving need not come at the expense of an ethical solution, if it is based on moral beliefs that are (1) shared in an overlapping consensus and (2) systematized through public reason. Therefore, we argue that a Rawlsian justificatory framework is able to adequately address the normative challenges of autonomous driving, and we elaborate on how such a framework might be employed for this purpose.


Utilitas, 2021, pp. 1-16
Author(s): Dustin Locke

Abstract: While it is permissible to switch the trolley in the classic Switch case, it is not permissible to push the stranger in the classic Footbridge (a.k.a. ‘Push’) case. But what may we do in cases that offer both a ‘switch-like’ option and a ‘push-like’ option? Surprisingly, we may choose the push-like option, provided that it has better consequences than the switch-like option. We arrive at this conclusion by taking ourselves seriously not just as agents who might redirect threats, but as threats who might be redirected by agents.


2011, Vol. 41 (3), pp. 391-422
Author(s): Janet Levin

Introduction: It is standard practice in philosophical inquiry to test a general thesis (of the form ‘F iff G’ or ‘F only if G’) by attempting to construct a counterexample to it. If we can imagine or conceive of an F that isn't a G, then we have evidence that there could be an F that isn't a G, and thus evidence against the thesis in question; if not, then the thesis is (at least temporarily) secure. Or so it is standardly claimed.

But there is increasing skepticism about how seriously to take what we can imagine or conceive as evidence for (or against) a priori philosophical theses, given the many historical examples of now-questionable theses that once seemed impossible to doubt, and also the recent experimental research suggesting that our verdicts on Gettier cases, trolley cases, and the scenarios depicted in other familiar thought experiments may be affected by cultural, situational, and other adventitious factors.
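The counterexample method described here has a simple logical shape; the schematic rendering below is a rough formalization of my own, not Levin's notation.

```latex
\[
\text{Thesis: } \forall x\,(Fx \rightarrow Gx)
\qquad
\text{Counterexample: } \exists x\,(Fx \wedge \neg Gx)
\]
% Conceiving of an F that is not a G is taken as evidence for the possibility claim
% $\Diamond\,\exists x\,(Fx \wedge \neg Gx)$, which tells against the thesis when it is
% read as holding necessarily (the biconditional reading "F iff G" works the same way).
```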

