NARRATIVE CAPACITY AND MORAL RESPONSIBILITY

2019, Vol. 36 (1), pp. 93-113
Author(s):  
Meghan Griffith

Abstract: My main aim in this essay is to argue that “narrative capacity” is a genuine feature of our mental lives and a skill that enables us to become full-fledged morally responsible agents. I approach the issue from the standpoint of reasons-responsiveness. Reasons-responsiveness theories center on the idea that moral responsibility requires sufficient sensitivity to reasons. I argue that our capacity to understand and tell stories has an important role to play in this sensitivity. Without such skill we would be cut off from the full range of reasons to which moral agents need access and/or we would be deficient in the ability to weigh the reasons that we recognize. After arguing for the relevance of narrative skill, I argue that understanding the connection between reasons-sensitivity and narrative confers additional benefits. It illuminates important psychological structures (sometimes said to be missing from reasons-responsive accounts) and helps to explain some cases of diminished blame.

2006, Vol. 36 (3), pp. 427-447
Author(s):  
Neil Levy

Whatever its implications for the other features of human agency at its best (for moral responsibility, reasons-responsiveness, self-realization, flourishing, and so on), addiction is universally recognized as impairing autonomy. But philosophers have frequently misunderstood the nature of addiction, and therefore have not adequately explained the manner in which it impairs autonomy. Once we recognize that addiction is not incompatible with choice or volition, it becomes clear that none of the standard accounts of autonomy can satisfactorily explain the way in which it undermines fully autonomous agency. In order to understand to what extent and in what ways the addicted are autonomy-impaired, we need to understand autonomy as consisting, essentially, in the exercise of the capacity for extended agency. It is because addiction undermines extended agency, so that addicts are not able to integrate their lives and pursue a single conception of the good, that it impairs autonomy.


Author(s):  
Paul Spicker

Moral collectivism is the idea that social groups can be moral agents: that they have rights and responsibilities, that groups as well as individuals can take moral action, that the morality of their actions can sensibly be assessed in those terms, and that moral responsibility cannot simply be reduced to the actions of individuals within them. This position is not opposed to individualism; the two are complementary.


Author(s):  
Toni Erskine

This chapter takes seriously the prevalent assumption that the responsibility to protect populations from mass atrocity (R2P) represents a moral imperative. It highlights tensions between how R2P is articulated and arguments for its legitimate implementation. The chapter maintains that identifying a range of ‘moral agents of protection’ and ‘supplementary responsibilities to protect’ is fundamental to any attempt to realize R2P. It offers an account of the loci of moral responsibility implicit in prominent articulations of R2P that both supports and extends this argument. Taken to its logical conclusion, this account demands that hitherto unacknowledged moral agents of protection step in when the host state and the UN are unwilling or unable to act. The chapter examines which bodies can discharge this residual responsibility to protect and proposes that, in certain urgent circumstances, institutional agents have a shared responsibility to come together and act in concert, even without UN Security Council authorization.


2016, Vol. 10 (2), pp. 38-59
Author(s):  
Albert W. Musschenga

The central question of this article is: Are animals morally responsible for what they do? Answering this question requires a careful, step-by-step argument. In sections 1 and 2, I explain what morality is, and that having a morality means following moral rules or norms. In sections 3 and 4, I argue that some animals not only show regularities in their social behaviour but can rightly be said to follow social norms. But are the norms they follow also moral norms? In section 5, I contend, referring to the work of Shaun Nichols, that the basic moral competences or capacities are already present in nonhuman primates. Following moral rules or norms is more than just acting in accordance with these norms; it requires being motivated by moral rules. I explain in section 6, referring to Mark Rowlands, that being capable of moral motivation does not require agency; being a moral subject is sufficient. Unlike moral agents, moral subjects are not responsible for their behaviour. Noting important similarities between animal moral behaviour and unconscious, automatic, habitual human behaviour, I examine in section 7 whether humans are responsible for their habitual moral behaviour and, if they are, what the grounds are for denying that moral animals are responsible for theirs. The answer is that humans are responsible for their habitual behaviour if they have the capacity for deliberate intervention. Although animals are capable of intervening in their habitual behaviour, they are not capable of deliberate intervention.


2010, Vol. 1 (4), pp. 65-73
Author(s):  
Linda Johansson

It is often argued that a robot cannot be held morally responsible for its actions. The author suggests that the same criteria used for humans should be applied to robots when ascribing moral responsibility. When deciding whether humans are moral agents, one should look at their behaviour and listen to the reasons they give for their judgments in order to determine that they understood the situation properly. The author suggests that this should be done for robots as well. Accordingly, if a robot passes a moral version of the Turing Test, a Moral Turing Test (MTT), we should hold the robot morally responsible for its actions. This is supported by the impossibility of deciding who actually has (semantic rather than merely syntactic) understanding of a moral situation, and by two examples: the transfer of a human mind into a computer, and aliens who turn out to be robots.


2019, Vol. 36 (1), pp. 234-248
Author(s):  
Chandra Sripada

Abstract: Reasons-responsiveness theories of moral responsibility are currently among the most popular. Here, I present the fallibility paradox, a novel challenge to these views. The paradox involves an agent who is performing a somewhat demanding psychological task across an extended sequence of trials and who is deeply committed to doing her very best at this task. Her action-issuing psychological processes are outstandingly reliable, so she meets the criterion of being reasons-responsive on every single trial. But she is human after all, so it is inevitable that she will make rare errors. The reasons-responsiveness view, it is claimed, is forced to reach a highly counterintuitive conclusion: she is morally responsible for these rare errors, even though making rare errors is something she is powerless to prevent. I review various replies that a reasons-responsiveness theorist might offer, arguing that none of these replies adequately addresses the challenge.


2021, Vol. 30 (3), pp. 435-447
Author(s):  
Daniel W. Tigard

Abstract: Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral agents’ (AMAs) is inevitable. Still, this notion may seem to push back the problem, leaving those who have an interest in developing autonomous technology with a dilemma. We may need to scale back our efforts at deploying AMAs (or at least maintain human oversight); otherwise, we must rapidly and drastically update our moral and legal norms in a way that ensures responsibility for potentially avoidable harms. This paper invokes contemporary accounts of responsibility in order to show how artificially intelligent systems might be held responsible. Although many theorists are concerned enough to develop artificial conceptions of agency or to exploit our present inability to regulate valuable innovations, the proposal here highlights the importance of, and outlines a plausible foundation for, a workable notion of artificial moral responsibility.

