Moral collectivism

Author(s):  
Paul Spicker

Moral collectivism is the idea that social groups can be moral agents: that they have rights and responsibilities, that groups as well as individuals can take moral action, that the morality of their actions can sensibly be assessed in those terms, and that moral responsibility cannot simply be reduced to the actions of the individuals within them. This position is not opposed to individualism; it is complementary to it.

Author(s):  
Brian Leiter

Moral psychology, for purposes of this volume, encompasses issues in metaethics, philosophy of mind, and philosophy of action, including questions concerning the objectivity of morality, the relationship between moral judgment and emotion, the nature of the emotions, free will and moral responsibility, and the structure of the mind as it is relevant to the possibility of moral action and judgment. Nietzsche’s “naturalism” is introduced and explained, and certain confusions about its meaning are addressed. An overview of the volume follows.


Author(s):  
Toni Erskine

This chapter takes seriously the prevalent assumption that the responsibility to protect (R2P) populations from mass atrocity represents a moral imperative. It highlights tensions between how R2P is articulated and arguments for its legitimate implementation. The chapter maintains that identifying a range of ‘moral agents of protection’ and ‘supplementary responsibilities to protect’ is fundamental to any attempt to realize R2P. It offers an account of the loci of moral responsibility implicit in prominent articulations of R2P that both supports and extends this argument. Taken to its logical conclusion, this account demands that hitherto unacknowledged moral agents of protection step in when the host state and the UN are unwilling or unable to act. The chapter examines which bodies can discharge this residual responsibility to protect and proposes that, in certain urgent circumstances, institutional agents have a shared responsibility to come together and act in concert, even without UN Security Council authorization.


2011, Vol. 16 (2), pp. 283-308
Author(s):  
Ido Geiger

Kant's conception of moral agency is often charged with attributing no role to feelings. I suggest that respect is the effective force driving moral action. I then argue that four additional types of rational feelings are necessary conditions of moral agency: (1) The affective inner life of moral agents deliberating how to act and reflecting on their deeds is rich and complex (conscience). To act morally we must turn our affective moral perception towards the ends of moral action: (2) the welfare of others (love of others); and (3) our own moral being (self-respect). (4) Feelings shape our particular moral acts (moral feeling). I tentatively suggest that the diversity of moral feelings might be as great as the range of our duties.


2016, Vol. 10 (2), pp. 38-59
Author(s):  
Albert W. Musschenga

The central question of this article is: are animals morally responsible for what they do? Answering this question requires a careful, step-by-step argument. In sections 1 and 2, I explain what morality is, and that having a morality means following moral rules or norms. In sections 3 and 4, I argue that some animals do not merely show regularities in their social behaviour but can rightly be said to follow social norms. But are the norms they follow also moral norms? In section 5, I contend, referring to the work of Shaun Nichols, that the basic moral competences or capacities are already present in nonhuman primates. Following moral rules or norms is more than just acting in accordance with these norms; it requires being motivated by moral rules. In section 6, I explain, referring to Mark Rowlands, that being capable of moral motivation does not require agency; being a moral subject is sufficient. Unlike moral agents, moral subjects are not responsible for their behaviour. Noting that there are important similarities between animal moral behaviour and unconscious, automatic, habitual human behaviour, I examine in section 7 whether humans are responsible for their habitual moral behaviour and, if they are, what the grounds are for denying that moral animals are responsible for theirs. The answer is that humans are responsible for their habitual behaviour if they have the capacity for deliberate intervention. Although animals are capable of intervening in their habitual behaviour, they are not capable of deliberate intervention.


2010, Vol. 1 (4), pp. 65-73
Author(s):  
Linda Johansson

It is often argued that a robot cannot be held morally responsible for its actions. The author suggests that the same criteria should be used for robots as for humans when ascribing moral responsibility. When deciding whether humans are moral agents, one should look at their behaviour and listen to the reasons they give for their judgments in order to determine that they understood the situation properly. The author suggests that this should be done for robots as well. On this view, if a robot passes a moral version of the Turing Test (a Moral Turing Test, or MTT), we should hold the robot morally responsible for its actions. This is supported by the impossibility of deciding who actually has semantic (rather than merely syntactic) understanding of a moral situation, and by two examples: the transfer of a human mind into a computer, and aliens who are in fact robots.


2021, Vol. 30 (3), pp. 435-447
Author(s):  
Daniel W. Tigard

Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral agents’ (AMAs) is inevitable. Still, this notion may seem only to push the problem back, leaving those with an interest in developing autonomous technology with a dilemma: either we scale back our efforts at deploying AMAs (or at least maintain human oversight), or we rapidly and drastically update our moral and legal norms in a way that ensures responsibility for potentially avoidable harms. This paper invokes contemporary accounts of responsibility in order to show how artificially intelligent systems might be held responsible. Although many theorists are concerned enough to develop artificial conceptions of agency or to exploit our present inability to regulate valuable innovations, the proposal here highlights the importance of—and outlines a plausible foundation for—a workable notion of artificial moral responsibility.


2019
Author(s):  
Ana P. Gantman, Anni Sternisko, Peter M. Gollwitzer, Gabriele Oettingen, Jay Joseph Van Bavel

Moral and immoral actions often involve multiple individuals who play different roles in bringing about the outcome. For example, one agent may deliberate and decide what to do while another may plan and implement that decision. We suggest that the Mindset Theory of Action Phases provides a useful lens through which to understand these cases and the implications that these different roles, which correspond to different mindsets, have for judgments of moral responsibility. In Experiment 1, participants learned about a disastrous oil spill in which one company made decisions about a faulty oil rig and another installed that rig. Participants judged the company that made the decisions to be more responsible than the company that implemented them. In Experiment 2 and a direct replication, we tested whether people judge implementers to be morally responsible at all. We examined a known asymmetry in blame and praise: moral agents received blame for actions that resulted in a bad outcome but not praise for the same actions when they resulted in a good outcome. We found this asymmetry for deciders but not implementers, an indication that implementers were judged through a moral lens to a lesser extent than deciders. Implications for allocating moral responsibility across multiple agents are discussed.


2019, Vol. 4 (1), p. 45
Author(s):  
Tyler Kibbey

Descriptivism is a methodologically efficacious framework in the discipline of linguistics. However, it categorically fails to account explicitly for the moral responsibilities of linguists as moral agents. As a result, descriptivism has been used to justify indifference to instances and systems of linguistic violence, among other moral shortcomings. Specifically, many guidelines for descriptive ethics stipulate that a linguist “do no harm” but do not necessarily require the linguist to prevent harm or to mitigate systems of violence. In this paper, I delineate an ethical framework, transcriptivism, which is distinct from research ethics and covers the line of philosophical inquiry concerning the moral agency and moral responsibility of linguists. The potential of this new framework is demonstrated through a case study of conflicting Tennessee language ideologies regarding gender-neutral pronoun usage, as well as an analysis of misgendering as an act of linguistic violence.


2018, Vol. 41
Author(s):  
Stefanie Hechler, Thomas Kessler

This commentary extends Doris's approach to agency by highlighting the importance of responsibility attributions by observers. We argue that (a) social groups determine which standards are relevant and which actors are responsible, (b) consensus about these attributions may correct individual defeaters, and (c) the attribution of moral responsibility reveals the agency of observers and may foster the actors' agency.

