The Functional Morality of Robots


2010 ◽  
Vol 1 (4) ◽  
pp. 65-73 ◽  
Author(s):  
Linda Johansson

It is often argued that a robot cannot be held morally responsible for its actions. The author suggests that one should use the same criteria for robots as for humans when ascribing moral responsibility. When deciding whether humans are moral agents, one should look at their behaviour and listen to the reasons they give for their judgments in order to determine that they understood the situation properly. The author suggests that this should be done for robots as well. In this regard, if a robot passes a moral version of the Turing Test, a Moral Turing Test (MTT), we should hold the robot morally responsible for its actions. This is supported by the impossibility of deciding who actually has (semantic or only syntactic) understanding of a moral situation, and by two examples: the transfer of a human mind into a computer, and aliens who are actually robots.


Author(s):  
Paul Spicker

Moral collectivism is the idea that social groups can be moral agents: that they have rights and responsibilities, that groups as well as individuals can take moral action, that the morality of their actions can sensibly be assessed in those terms, and that moral responsibility cannot simply be reduced to the actions of the individuals within them. This position is not opposed to individualism; it is complementary to it.


Author(s):  
Margaret A. Boden

Suppose that future AGI systems equalled human performance. Would they have real intelligence, real understanding, real creativity? Would they have selves, moral standing, free choice? Would they be conscious? And without consciousness, could they have any of those other properties? ‘But is it intelligence, really?’ considers these philosophical questions, suggesting some answers that are more reasonable than others. It looks at concepts such as the Turing Test; the many problems of consciousness; the studies of AI-inspired philosophers Paul Churchland, Daniel Dennett, and Aaron Sloman; virtual machines and the mind–body problem; and moral responsibility. It concludes that no one knows, for sure, whether an AGI could really be intelligent.


2019 ◽  
Author(s):  
Juan Antonio Lloret Egea

“AI will be such a program which in an arbitrary world will cope not worse than a human” (Dobrev 2004, 2); “Artificial intelligence is the enterprise of constructing a symbol system that can reliably pass the Turing test” (Ginsberg 2012, 9); see Figure 1.1 in Russell and Norvig (1995, page 5). “Artificial intelligence is a field of computer science concerned with the computational understanding of what is commonly called intelligent behavior and with the creation of artifacts that exhibit such behavior. This definition may be examined more closely by considering the field from three points of view: computational psychology (the goal of which is to understand human intelligent behavior by creating computer programs that behave in the same way that people do), computational philosophy (the goal of which is to form a computational understanding of human-level intelligent behavior, without being restricted to the algorithms and data structures that the human mind actually does use), and machine intelligence (the goal of which is to expand the frontier of what we know how to program)” (Reilly 2004, 40-41).


Author(s):  
Toni Erskine

This chapter takes seriously the prevalent assumption that the responsibility to protect (R2P) populations from mass atrocity represents a moral imperative. It highlights tensions between how R2P is articulated and arguments for its legitimate implementation. The chapter maintains that identifying a range of ‘moral agents of protection’ and ‘supplementary responsibilities to protect’ is fundamental to any attempt to realize R2P. It offers an account of the loci of moral responsibility implicit in prominent articulations of R2P that both supports and extends this argument. Taken to its logical conclusion, this account demands that hitherto unacknowledged moral agents of protection step in when the host state and the UN are unwilling or unable to act. The chapter examines which bodies can discharge this residual responsibility to protect and proposes that, in certain urgent circumstances, institutional agents have a shared responsibility to come together and act in concert, even without UN Security Council authorization.


2016 ◽  
Vol 10 (2) ◽  
pp. 38-59
Author(s):  
Albert W. Musschenga

The central question of this article is, Are animals morally responsible for what they do? Answering this question requires a careful, step-by-step argument. In sections 1 and 2, I explain what morality is, and that having a morality means following moral rules or norms. In sections 3 and 4, I argue that some animals show not just regularities in their social behaviour, but can rightly be said to follow social norms. But are the norms they follow also moral norms? In section 5, I contend, referring to the work of Shaun Nichols, that the basic moral competences or capacities are already present in nonhuman primates. Following moral rules or norms is more than just acting in accordance with these norms; it requires being motivated by moral rules. I explain, in section 6, referring to Mark Rowlands, that being capable of moral motivation does not require agency; being a moral subject is sufficient. Contrary to moral agents, moral subjects are not responsible for their behaviour. Noting that there are important similarities between animal moral behaviour and unconscious, automatic, habitual human behaviour, I examine in section 7 whether humans are responsible for their habitual moral behaviour and, if they are, what the grounds are for denying that moral animals are responsible for their behaviour. The answer is that humans are responsible for their habitual behaviour if they have the capacity for deliberate intervention. Although animals are capable of intervention in their habitual behaviour, they are not capable of deliberate intervention.


2021 ◽  
Vol 30 (3) ◽  
pp. 435-447
Author(s):  
Daniel W. Tigard

Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral agents’ (AMAs) is inevitable. Still, this notion may seem to push back the problem, leaving those who have an interest in developing autonomous technology with a dilemma. We may need to scale back our efforts at deploying AMAs (or at least maintain human oversight); otherwise, we must rapidly and drastically update our moral and legal norms in a way that ensures responsibility for potentially avoidable harms. This paper invokes contemporary accounts of responsibility in order to show how artificially intelligent systems might be held responsible. Although many theorists are concerned enough to develop artificial conceptions of agency or to exploit our present inability to regulate valuable innovations, the proposal here highlights the importance of, and outlines a plausible foundation for, a workable notion of artificial moral responsibility.


2019 ◽  
Author(s):  
Ana P. Gantman ◽  
Anni Sternisko ◽  
Peter M. Gollwitzer ◽  
Gabriele Oettingen ◽  
Jay Joseph Van Bavel

Moral and immoral actions often involve multiple individuals who play different roles in bringing about the outcome. For example, one agent may deliberate and decide what to do while another may plan and implement that decision. We suggest that the Mindset Theory of Action Phases provides a useful lens through which to understand these cases and the implications that these different roles, which correspond to different mindsets, have for judgments of moral responsibility. In Experiment 1, participants learned about a disastrous oil spill in which one company made decisions about a faulty oil rig, and another installed that rig. Participants judged the company that made the decisions as more responsible than the company that implemented them. In Experiment 2 and a direct replication, we tested whether people judge implementers to be morally responsible at all. We examined a known asymmetry in blame and praise: moral agents received blame for actions that resulted in a bad outcome but not praise for the same action that resulted in a good outcome. We found this asymmetry for deciders but not implementers, an indication that implementers were judged through a moral lens to a lesser extent than deciders. Implications for allocating moral responsibility across multiple agents are discussed.


2019 ◽  
Vol 4 (1) ◽  
pp. 45
Author(s):  
Tyler Kibbey

Descriptivism is a methodologically efficacious framework in the discipline of linguistics. However, it categorically fails to explicitly account for the moral responsibilities of linguists as moral agents. As a result, descriptivism has been used as a justification for indifference to instances and systems of linguistic violence, among other moral shortcomings. Specifically, many guidelines for descriptive ethics stipulate that a linguist “do no harm” but do not necessarily require the linguist to prevent harm or mitigate systems of violence. In this paper, I delineate an ethical framework, transcriptivism, which is distinct from research ethics and addresses questions of the moral agency and moral responsibility of linguists. The potential for this new framework is demonstrated through a case study of conflicting Tennessee language ideologies regarding gender-neutral pronoun usage, as well as an analysis of misgendering as an act of linguistic violence.


Author(s):  
Dane Leigh Gogoshin

Contrary to the prevailing view that robots cannot be full-blown members of the larger human moral community, I argue not only that they can but that they would be ideal moral agents in the way that currently counts. While it is true that robots fail to meet a number of criteria which some human agents meet or which all human agents could in theory meet, they earn a perfect score as far as the behavioristic conception of moral agency at work in our moral responsibility practices goes.

