Moral Recognition and the Limits of Impartialist Ethics

2021, pp. 123-138
Author(s): Udo Schuklenk

‘Moral status’ is simply a convenient label for ‘is owed moral consideration of a kind’. This chapter argues that we should abandon the label and instead ask which dispositional capabilities, species memberships, relationships, and the like constitute ethically defensible criteria that justifiably trigger particular kinds of moral obligations. Chimeras, human brain organoids, and artificial intelligence do not pose new challenges: existing conceptual frameworks, and the criteria for moral consideration they rely on (species membership, sentientism, personhood), remain defensible and applicable. The challenge at hand is arguably an empirical one that philosophers and ethicists, qua philosophers and ethicists, are ill-equipped to handle. The question that needs addressing is essentially whether a self-learning AI machine that responds to a particular event exactly as a person or sentient being would respond should be treated as if it were such a being, despite doubts about whether it actually possesses the dispositional capabilities that would normally give rise to such responses.

Common-sense morality implicitly assumes that reasonably clear distinctions can be drawn between the ‘full’ moral status usually attributed to ordinary adult humans, the partial moral status attributed to non-human animals, and the absence of moral status usually ascribed to machines and other artefacts. These assumptions were always subject to challenge, but they now come under renewed pressure because there are beings we are now able to create, and beings we may soon be able to create, that blur the traditional distinctions between humans, non-human animals, and non-biological beings. Examples include human–non-human chimeras, cyborgs, human brain organoids, post-humans, human minds that have been uploaded into computers and onto the internet, and artificial intelligence. It is far from clear what moral status we should attribute to any of these beings. While common-sense views of moral status have always been questioned, the latest technological developments recast many of the questions and raise additional objections. There are a number of ways we could respond, such as revising our ordinary suppositions about the prerequisites for full moral status. We might also reject the assumption that there is a sharp distinction between full and partial moral status. The present volume provides a forum for philosophical reflection on the usual presuppositions and intuitions about moral status, especially in light of these recent and emerging technological advances.


Author(s): Amit Mishra

Education and learning are among the most important aspects of the evolution of societies, and they have long been favorite subjects of study for philosophers and psychologists. The same questions are now being revisited by computer scientists. Although evolution is a continuous process, its pace is not linear. Children acquire a huge amount of knowledge with very little input from teachers, friends, parents, and their surroundings. Understanding how the human brain works, and more precisely how a child's brain actually functions, is opening new paths of research in artificial intelligence (AI).


2021, pp. 1-20
Author(s): Steve Clarke, Julian Savulescu

Recent technological developments, and potential technological developments of the near future, require us to try to think clearly about what it is to have moral status and about when and why we should attribute moral status to beings and entities. What should we say about the moral status of human–non-human chimeras, human brain organoids, artificial intelligence, cyborgs, post-humans, and human minds that have been uploaded into a computer or onto the internet? In this introductory chapter we survey some key assumptions ordinarily made about moral status that may require rethinking. These include the assumptions that all humans who are not severely cognitively impaired have equal moral status, that possession of the sophisticated cognitive capacities typical of human adults is necessary for full moral status, that only humans can have full moral status, and that there can be no beings with higher moral status than ordinary adult humans. We also need to consider how we should treat beings and entities when we find ourselves uncertain about their moral status.


Author(s): David J. Gunkel

One of the enduring concerns of ethics is determining who is deserving of moral consideration. Although initially limited to “other men,” ethics has developed in such a way that it challenges its own restrictions and comes to encompass previously excluded entities. Currently, we stand on the verge of another fundamental challenge to moral thinking. This challenge comes from the autonomous and increasingly intelligent machines of our own making, and it puts in question many deep-seated assumptions about who or what can be a moral subject. This chapter examines whether machines can have rights. Because a response to this query depends primarily on how one characterizes “moral status,” the chapter is organized around two established moral principles, considers how these principles apply to artificial intelligence and robots, and concludes by providing suggestions for further study.


2018, Vol 40 (4), pp. 363-370
Author(s): Edward Uzoma Ezedike

Kant’s doctrine of the “categorical imperative,” with its ratiocentrism, needs to be examined for its implications for environmental ethics. Kant’s argument is that moral actions must be categorical or unqualified imperatives, reflecting the sovereignty of moral obligations that all rational moral agents can discern by virtue of their rationality. For Kant, humans have no direct moral obligations to non-rational, nonhuman nature: only rational beings, i.e., humans, are worthy of moral consideration. I argue that this position is excessively anthropocentric and ratiocentric in excluding the nonhuman natural world from moral consideration. While I concede that nonhuman nature is instrumentally valuable owing to certain inevitable existential and ontological considerations, I argue that moral obligation should be extended to the natural world in order to achieve environmental wholeness.


2020, Vol 48 (1), pp. 177-200
Author(s): Henry Shevlin

Most people will grant that we bear special moral obligations toward at least some nonhuman animals that we do not bear toward inanimate objects like stones, mountains, or works of art (however priceless). These moral obligations are plausibly grounded in the fact that many if not all nonhuman animals share important psychological states and capacities with us, such as consciousness, suffering, and goal-directed behavior. But which of these states and capacities are really critical for a creature’s possessing moral status, and how can we determine which animals in fact have them? In this paper, I examine three main approaches to answering these questions. First are what I term consciousness-based approaches, which tackle these questions by first asking which animals are conscious. Second are affective-state approaches, which focus on identifying behavioural and physiological signatures of states like pain, fear, and stress. Finally, I consider what I call preference-based approaches, whose focus is on the question of which organisms have robust motivational states. I examine the prospects and challenges—both theoretical and empirical—faced by these seemingly contrasting methodologies. I go on to suggest that, despite these challenges, there are reasons why we should remain robustly committed to the project of identifying the psychological grounds of moral status. I conclude by suggesting that we should also take seriously the idea of pluralism about moral status, according to which each of these approaches might be capable of providing independent grounds for moral consideration.


Author(s): Christian List

The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificial intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.


Polymers, 2021, Vol 13 (2), pp. 312
Author(s): Naruki Hagiwara, Shoma Sekizaki, Yuji Kuwahara, Tetsuya Asai, Megumi Akai-Kasaya

Networks in the human brain are extremely complex and sophisticated. Abstract models of the human brain have been used in software development, specifically in artificial intelligence. Despite the remarkable outcomes achieved using artificial intelligence, these approaches consume huge amounts of computational resources. A possible solution to this issue is the development of processing circuits that physically resemble an artificial brain, which can offer low energy loss and high-speed processing. This study demonstrated the synaptic functions of conductive polymer wires linking arbitrary electrodes in solution. By controlling the conductance of the wires, synaptic functions such as long-term potentiation and short-term plasticity were achieved, which are similar to the manner in which a synapse changes the strength of its connections. This novel organic artificial synapse can be used to construct information-processing circuits by wiring from scratch and learning efficiently in response to external stimuli.
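To make the plasticity terminology concrete, the following is a minimal, hypothetical sketch (in Python) of a conductance-update rule with a fast-decaying component (short-term plasticity) and a retained component (long-term potentiation). It is not the authors' device model; the class name `PolymerSynapse`, the parameters (`g_base`, `tau_decay`, pulse `amplitude`), and the consolidation fraction are all illustrative assumptions.

```python
import math

class PolymerSynapse:
    """Toy two-component conductance model, loosely analogous to the
    LTP/STP behaviour reported for conductive polymer wires.
    All parameters are illustrative, not taken from the paper."""

    def __init__(self, g_base=1.0, tau_decay=50.0):
        self.g_base = g_base        # baseline conductance (arbitrary units)
        self.g_transient = 0.0      # short-term, decaying component (STP-like)
        self.g_persistent = 0.0     # long-term, retained component (LTP-like)
        self.tau_decay = tau_decay  # decay time constant of the transient part

    def stimulate(self, amplitude=0.2):
        # Each stimulation pulse transiently boosts conductance;
        # a small fraction of the boost is consolidated permanently.
        self.g_transient += amplitude
        self.g_persistent += 0.05 * amplitude

    def relax(self, dt=1.0):
        # Between pulses the transient component decays exponentially,
        # while the persistent component is retained.
        self.g_transient *= math.exp(-dt / self.tau_decay)

    @property
    def conductance(self):
        return self.g_base + self.g_transient + self.g_persistent

syn = PolymerSynapse()
for step in range(200):
    if step % 10 == 0:   # periodic stimulation pulses
        syn.stimulate()
    syn.relax()
print(f"conductance after repeated stimulation: {syn.conductance:.3f}")
```

Under this toy rule, repeated stimulation leaves the conductance permanently elevated even after the transient component decays, which is the qualitative signature of the learning behaviour the abstract describes.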


BioTech, 2021, Vol 10 (3), pp. 15
Author(s): Takis Vidalis

The involvement of artificial intelligence in biomedicine promises better support for decision-making, both in conventional medical practice and in research. Yet two important issues emerge: the handling of personal data, and the influence of AI on the patient–doctor relationship. The development of AI algorithms presupposes extensive processing of big data in biobanks, for which compliance with data protection requirements needs to be ensured. This article addresses this problem within the framework of EU legislation (the GDPR) and explains the legal prerequisites pertinent to various categories of health data. Furthermore, the self-learning systems of AI may affect the fulfillment of medical duties, particularly if attending physicians rely on unsupervised applications operating beyond their direct control. The article argues that the prerequisite of the patient's informed consent plays a key role here, not only in conventional medical acts but also in clinical research procedures.

