moral patiency
Recently Published Documents

Total documents: 10 (five years: 7)
H-index: 2 (five years: 0)

AI and Ethics, 2021. Author(s): Alistair Knott, Mark Sagar, Martin Takac

Abstract: As AI advances, models of simulated humans are becoming increasingly realistic. A new debate has arisen about the ethics of interacting with these realistic agents—and in particular, whether any harms arise from ‘mistreatment’ of such agents. In this paper, we advance this debate by discussing a model we have developed (‘BabyX’), which simulates a human infant. The model produces realistic behaviours—and it does so using a schematic model of certain human brain mechanisms. We first consider harms that may arise due to effects on the user—in particular effects on the user’s behaviour towards real babies. We then consider whether there’s any need to consider harms from the ‘perspective’ of the simulated baby. The first topic raises practical ethical questions, many of which are empirical in nature. We argue the potential for harm is real enough to warrant restrictions on the use of BabyX. The second topic raises a very different set of questions in the philosophy of mind. Here, we argue that BabyX’s biologically inspired model of emotions raises important moral questions, and places BabyX in a different category from avatars whose emotional behaviours are ‘faked’ by simple rules. This argument counters John Danaher’s recently proposed ‘moral behaviourism’. We conclude that the developers of simulated humans have useful contributions to make to debates about moral patiency—and also have certain new responsibilities in relation to the simulations they build.


2021, Vol. 8. Author(s): Kamil Mamak

Proponents of welcoming robots into the moral circle have presented various approaches to moral patiency under which determining the moral status of robots seems possible. However, even if we recognize robots as having moral standing, how should we situate them in the hierarchy of values? In particular, who should be sacrificed in a moral dilemma: a human or a robot? This paper answers this question with reference to the most popular approaches to moral patiency. However, a survey of these approaches alone does not settle the matter, because it leaves out another important factor, namely the law. For now, the hierarchy of values is set by law, and we must take that law into consideration when making decisions. I demonstrate that current legal systems prioritize human beings and even require the active protection of humans. Recent studies have suggested that people would hesitate to sacrifice robots in order to save humans, yet doing so could be a crime. This hesitancy is associated with the anthropomorphization of robots, which are becoming more human-like. Robots’ increasing similarity to humans could therefore lead to the endangerment of humans and to the criminal responsibility of others. I propose two recommendations regarding robot design to ensure the supremacy of human life over that of humanoid robots.


2021, Vol. 30 (3), pp. 459-471. Author(s): Henry Shevlin

Abstract: There is growing interest within machine ethics in the question of whether, and under what circumstances, an artificial intelligence would deserve moral consideration. This paper explores a particular type of moral status that the author terms psychological moral patiency, focusing on the epistemological question of what sort of evidence might lead us to reasonably conclude that a given artificial system qualifies as having this status. The paper surveys five possible criteria that might be applied: intuitive judgments, assessments of intelligence, the presence of desires and autonomous behavior, evidence of sentience, and behavioral equivalence. The author suggests that, despite its limitations, the latter approach offers the best way forward, and defends a variant of it, termed the cognitive equivalence strategy. In short, this holds that an artificial system should be considered a psychological moral patient to the extent that it possesses cognitive mechanisms shared with other beings, such as nonhuman animals, whom we also consider to be psychological moral patients.


2021, Vol. 8. Author(s): Jaime Banks

Moral status can be understood along two dimensions: moral agency [capacities to be and do good (or bad)] and moral patiency (extents to which entities are objects of moral concern), where the latter especially has implications for how humans accept or reject machine agents into human social spheres. As there is currently limited understanding of how people innately understand and imagine the moral patiency of social robots, this study inductively explores key themes in how robots may be subject to humans’ (im)moral action across 12 valenced foundations in the moral matrix: care/harm, fairness/unfairness, loyalty/betrayal, authority/subversion, purity/degradation, and liberty/oppression. Findings indicate that people can imagine clear dynamics by which anthropomorphic, zoomorphic, and mechanomorphic robots may benefit and suffer at the hands of humans (e.g., affirmations of personhood, compromising of bodily integrity, veneration as gods, corruption by physical or informational interventions). Patterns across the matrix are interpreted to suggest that moral patiency may be a function of whether people diminish or uphold the ontological boundary between humans and machines, though even moral upholdings bear notes of utilitarianism.


2021, Vol. 29. Author(s): Howard Nye, Tugba Yolbas

In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient to whom we can owe duties of non-maleficence not to harm her and duties of beneficence to benefit her: (1) Moral patients are mental patients; (2) Mental patients are true intentional systems; and (3) True intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible true intentional systems developing in the areas of exploratory robots and artificial personal assistants. Finally, we argue that in light of our failure to respect the well-being of existing biological moral patients and worries about our limited resources, there are compelling moral reasons to treat artificial moral patiency as something to be avoided at least for now.


Author(s): Anna Strasser

This paper investigates reasons for adopting social norms that regulate our behavior towards artificial agents. By problematizing the assertion that moral agency is, in principle, a necessary prerequisite for any form of moral patiency, it examines reasons that are independent of attributing moral agency to artificial agents but that nonetheless speak in favor of morally appropriate behavior towards artificial systems. Suggesting a consequentialist strategy, the paper analyzes potential negative impacts of human-machine interactions, focusing on factors that support a transfer of behavioral patterns from human-machine interactions to human-human interactions.


AI & Society ◽  
2017 ◽  
Vol 34 (1) ◽  
pp. 129-136 ◽  
Author(s):  
John Danaher

2016, Vol. 56 (2), pp. 293-313. Author(s): Maria Giuseppina Pacilli, Stefano Pagliaro, Steve Loughnan, Sarah Gramazio, Federica Spaccatini, ...
