How Could We Know When a Robot was a Moral Patient?

2021 ◽  
Vol 30 (3) ◽  
pp. 459-471
Author(s):  
Henry Shevlin

Abstract: There is growing interest in machine ethics in the question of whether and under what circumstances an artificial intelligence would deserve moral consideration. This paper explores a particular type of moral status that the author terms psychological moral patiency, focusing on the epistemological question of what sort of evidence might lead us to reasonably conclude that a given artificial system qualified as having this status. The paper surveys five possible criteria that might be applied: intuitive judgments, assessments of intelligence, the presence of desires and autonomous behavior, evidence of sentience, and behavioral equivalence. The author suggests that despite its limitations, the last approach offers the best way forward, and defends a variant of it, termed the cognitive equivalence strategy. In short, this holds that an artificial system should be considered a psychological moral patient to the extent that it possesses cognitive mechanisms shared with other beings, such as nonhuman animals, whom we also consider to be psychological moral patients.

Author(s):  
Christian List

Abstract: The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificial intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.


Author(s):  
Rhyse Bendell ◽  
Jessica Williams ◽  
Stephen M. Fiore ◽  
Florian Jentsch

Artificial intelligence has been developed to perform all manner of tasks but has not yet gained the capabilities needed to support social cognition. We suggest that teams composed of both humans and artificially intelligent agents cannot achieve optimal team performance unless all teammates have the capacity to employ social-cognitive mechanisms. These mechanisms form the foundation for generating inferences about one's counterparts and enable the execution of informed, appropriate behaviors. Social intelligence and its utilization are known to be vital components of human-human teaming processes due to their importance in guiding the recognition, interpretation, and use of the signals that humans naturally use to shape their exchanges. Although modern sensors and algorithms could allow AI to observe most social cues, signals, and other indicators, approximating human-to-human social interaction based upon the aggregation and modeling of such cues is currently beyond the capacity of potential AI teammates, in part because humans are notoriously variable. We describe an approach for measuring social-cognitive features to produce the raw information needed to create human-agent profiles that can be operated upon by artificial intelligences.
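
To make the notion of a human-agent profile concrete, the sketch below is a minimal, purely illustrative Python example; the feature names, the 0-1 normalization, and the mean-based aggregation are assumptions made here for exposition and are not the measurement approach specified by the authors.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Dict, List

# Illustrative sketch only: feature names and the aggregation scheme are
# hypothetical, not the instrument described in the paper.

@dataclass
class HumanAgentProfile:
    """Running record of one human teammate's social-cognitive signals."""
    teammate_id: str
    cue_history: Dict[str, List[float]] = field(default_factory=dict)

    def record_cue(self, feature: str, value: float) -> None:
        """Store a normalized (0-1) observation, e.g. gaze sharing or response latency."""
        self.cue_history.setdefault(feature, []).append(value)

    def summary(self) -> Dict[str, float]:
        """Aggregate each feature so an AI teammate could condition its behavior on it."""
        return {feature: mean(values) for feature, values in self.cue_history.items()}

# Hypothetical usage: an AI teammate consuming the aggregated profile.
profile = HumanAgentProfile("operator_1")
profile.record_cue("gaze_sharing", 0.7)
profile.record_cue("gaze_sharing", 0.4)
profile.record_cue("verbal_acknowledgement", 0.9)
print(profile.summary())  # {'gaze_sharing': 0.55, 'verbal_acknowledgement': 0.9}
```

Even a toy profile like this points to the variability problem the authors note: the same observed cue values can mean different things for different people, so raw aggregation alone is unlikely to suffice.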


2021 ◽  
pp. 159-178
Author(s):  
Ruth R. Faden ◽  
Tom L. Beauchamp ◽  
Debra J. H. Mathews ◽  
Alan Regenberg

This chapter argues for the need for a theory of moral status that can help provide solutions to practical problems in public policy that take account of the interests of diverse nonhuman animals. To illustrate this need, the chapter briefly describes two contemporary problems, one in science policy and one in food and climate policy. The first section provides a sketch of a way to think about a tiered or hierarchical theory of moral status that could be fit for such work. The second section considers in some depth the problem of human–nonhuman chimeras. This example is used to illustrate how a hierarchical theory of moral status should prove helpful in framing policy responses to this problem.


2021 ◽  
pp. 306-326
Author(s):  
Carl Shulman ◽  
Nick Bostrom

The minds of biological creatures occupy a small corner of a much larger space of possible minds that could be created once we master the technology of artificial intelligence. Yet many of our moral intuitions and practices are based on assumptions about human nature that need not hold for digital minds. This points to the need for moral reflection as we approach the era of advanced machine intelligence. This chapter focuses on one set of issues, which arise from the prospect of digital minds with superhumanly strong claims to resources and influence. These could arise from the vast collective benefits that mass-produced digital minds could derive from relatively small amounts of resources. Alternatively, they could arise from individual digital minds with superhuman moral status or ability to benefit from resources. Such beings could contribute immense value to the world, and failing to respect their interests could produce a moral catastrophe, while a naive way of respecting them could be disastrous for humanity. A sensible approach requires reforms of our moral norms and institutions along with advance planning regarding what kinds of digital minds we bring into existence.


2018 ◽  
Vol 28 (1) ◽  
pp. 26-39 ◽  
Author(s):  
CAROLYN P. NEUHAUS ◽  
BRENDAN PARENT

Abstract: Gene editors such as CRISPR could be used to create stronger, faster, or more resilient nonhuman animals. This is of keen interest to people who breed, train, race, and profit off the millions of animals used in sport that contribute billions of dollars to legal and illegal economies across the globe. People have tried for millennia to perfect sport animals; CRISPR proposes to do in one generation what might have taken decades previously. Moreover, gene editing may facilitate enhancing animals’ capacities beyond their typical limits. This paper describes the state of animal use and engineering for sport, examines the moral status of animals, and analyzes current and future ethical issues at the intersection of animal use, gene editing, and sports. We argue that animal sport enthusiasts and animal welfarists alike should be concerned about the inevitable use of CRISPR in sport animals. Though in principle CRISPR could be used to improve sport animals’ well-being, we think it is unlikely in practice to do so.


2021 ◽  
pp. 269-289
Author(s):  
Walter Sinnott-Armstrong ◽  
Vincent Conitzer

Philosophers often argue about whether fetuses, animals, or AI systems do or do not have moral status. We will suggest instead that different entities have different degrees of moral status with respect to different moral reasons in different circumstances for different purposes. Recognizing this variability of moral status will help to resolve some but not all debates about the potential moral status of AI systems in particular.


2020 ◽  
Vol 31 (2) ◽  
pp. 74-87 ◽  
Author(s):  
Keng Siau ◽  
Weiyu Wang

Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth and social development, as well as for human well-being and safety. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with it. Even though the concept of “machine ethics” was proposed around 2006, AI ethics is still in its infancy. AI ethics is the field concerned with the study of ethical issues in AI. To address AI ethics, one needs to consider both the ethics of AI and how to build ethical AI. Ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI in order to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., Ethics of AI). With the appropriate ethics of AI, one can then build AI that exhibits ethical behavior (i.e., Ethical AI). This paper discusses AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve, or at least attenuate, these ethical and moral issues with AI? What are some of the necessary features and characteristics of an ethical AI? And how can one adhere to the ethics of AI in order to build ethical AI?


2014 ◽  
Vol 23 (2) ◽  
pp. 173-181 ◽  
Author(s):  
TOM BULLER

Abstract: As Colin Allen has argued, discussions between science and ethics about the mentality and moral status of nonhuman animals often stall on account of the fact that the properties that ethics presents as evidence of animal mentality and moral status, namely consciousness and sentience, are not observable “scientifically respectable” properties. In order to further discussion between science and ethics, it seems, therefore, that we need to identify properties that would satisfy both domains. In this article I examine the mentality and moral status of nonhuman animals from the perspective of neuroethics. By adopting this perspective, we can see how advances in neuroimaging regarding (1) research into the neurobiology of pain, (2) “brain reading,” and (3) the minimally conscious state may enable us to identify properties that help bridge the gap between science and ethics, and hence help further the debate about the mentality and moral status of nonhuman animals.


2014 ◽  
Vol 22 (5) ◽  
pp. 439-458
Author(s):  
Sari Ung-Lanki

This article was designed to give insight into the role of biotechnology in redefining the complex human-animal relations of our times. In particular, it examines accounts of nonhuman animals and animal usage in the context of biotechnology, as covered in the leading scientific journal Nature Biotechnology. The data consist of editorials, commentaries, and research news from four years (N = 104) and have been analyzed using discourse analysis. The journal constructs a consistent, yet one-sided, view of animals as they are represented through physico-material, technical, and biomedical discourses, as well as discourses on human benefits and manageable risks. The biotechnological epistemology of the animal is positioned at the far end of the subjectification-instrumentalization continuum in our treatment of other animals. It also clashes with simultaneous discussions on animal mind, subjectivity, and moral status. These developments are likely to further intensify the discrepancies in human-animal relations in science and society.


Author(s):  
Silviya Serafimova

Abstract: Moral implications of the decision-making process based on algorithms require special attention within the field of machine ethics. Specifically, research focuses on clarifying why, even if one assumes the existence of well-working ethical intelligent agents in epistemic terms, it does not necessarily mean that they meet the requirements of autonomous moral agents such as human beings. For the purposes of exemplifying some of the difficulties in arguing for implicit and explicit ethical agents in Moor’s sense, three first-order normative theories in the field of machine ethics are put to the test. These are Powers’ prospect for a Kantian machine, Anderson and Anderson’s reinterpretation of act utilitarianism, and Howard and Muntean’s prospect for a moral machine based on a virtue-ethical approach. By comparing and contrasting the three first-order normative theories, and by clarifying the gist of the differences between the processes of calculation and moral estimation, the possibility of building what one might call strong “moral” AI scenarios is questioned. The possibility of weak “moral” AI scenarios is likewise discussed critically.
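
As a purely illustrative contrast between calculation and moral estimation, the Python sketch below shows a toy act-utilitarian chooser in the spirit of Anderson and Anderson's approach; the actions, affected parties, and utility values are invented for this example and are not taken from the paper.

```python
from typing import Dict

def net_utility(effects: Dict[str, float]) -> float:
    """Sum the signed utilities an action produces for each affected party."""
    return sum(effects.values())

def choose_action(options: Dict[str, Dict[str, float]]) -> str:
    """Select the action with the highest net utility: a pure calculation."""
    return max(options, key=lambda action: net_utility(options[action]))

# Hypothetical scenario with invented utility values.
options = {
    "remind_patient_to_take_medication": {"patient": +2.0, "caregiver": +1.0},
    "notify_overseer_immediately": {"patient": -1.0, "caregiver": +3.0},
}
print(choose_action(options))  # remind_patient_to_take_medication (net 3.0 vs 2.0)
```

Whatever numbers are plugged in, the chooser only maximizes a sum, which illustrates the kind of calculation the abstract distinguishes from moral estimation.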

