moral agents
Recently Published Documents

TOTAL DOCUMENTS: 404 (five years: 133)
H-INDEX: 19 (five years: 3)

Think, 2021, Vol. 21 (60), pp. 57-64
Author(s): C. P. Ruloff, Patrick Findler

Hsiao has recently developed what he considers a ‘simple and straightforward’ argument for the moral permissibility of corporal punishment. In this article we argue that Hsiao's argument is seriously flawed for at least two reasons. Specifically, we argue that (i) a key premise of Hsiao's argument is question-begging, and (ii) Hsiao's argument depends upon a pair of false underlying assumptions, namely, the assumption that children are moral agents, and the assumption that all forms of wrongdoing demand retribution.


2021, pp. 18-28
Author(s): Jay L. Garfield

This chapter argues that Buddhist ethics does not fit into any of the standard Western metaethical theories: it is neither an instance of a virtue theory, nor of a deontological theory, nor of a consequentialist theory. It is closer to a sentimentalist theory, but differs from those as well. Instead, the chapter defends a reading of Buddhist ethics as a moral phenomenology and as particularist, utilizing casuistic reasoning. That is, Buddhist ethics is concerned primarily with the transformation of experience, of the way we perceive ourselves and other moral agents and patients. The chapter also argues that the metaphor of the path structures Buddhist ethical thought.


Entropy, 2021, Vol. 24 (1), p. 10
Author(s): Luís Moniz Pereira, The Anh Han, António Barata Lopes

We present a summary of research that we have conducted employing AI to better understand human morality. This summary outlines the theoretical fundamentals and considers how to regulate the development of powerful new AI technologies. The latter research aim is benevolent AI, with a fair distribution of the benefits associated with these and related technologies, avoiding the disparities of power and wealth that unregulated competition would produce. Our approach avoids the statistical models employed by other approaches to solving moral dilemmas, because these are “blind” to the natural constraints on moral agents and risk perpetuating mistakes. Instead, our approach employs, for instance, psychologically realistic counterfactual reasoning in group dynamics. The present paper reviews studies involving factors fundamental to human moral motivation, including egoism vs. altruism, commitment vs. defaulting, guilt vs. non-guilt, apology plus forgiveness, and counterfactual collaboration. Because these are basic elements of most moral systems, our studies deliver generalizable conclusions that inform efforts to achieve greater sustainability and global benefit, regardless of cultural specificities.
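To make the flavor of such factor-based modeling concrete, here is a purely illustrative sketch, not the authors' actual model: a one-shot prisoner's dilemma in which an "apologetic" defector compensates a cooperating victim, loosely corresponding to the apology-plus-forgiveness factor listed above. The strategy names, payoff values, and the APOLOGY_COST parameter are all hypothetical assumptions.

```python
# Illustrative toy model, not the authors' implementation: every number and
# strategy name here is a hypothetical assumption for exposition only.
import random

# One-shot prisoner's dilemma payoffs, with the usual ordering T > R > P > S.
T, R, P, S = 5.0, 3.0, 1.0, 0.0
APOLOGY_COST = 1.5  # hypothetical transfer a defector pays a cooperating victim

def move(strategy):
    """'C' cooperates; 'D' and 'A' (apologetic defector) both defect."""
    return "C" if strategy == "C" else "D"

def payoffs(sa, sb):
    """Payoffs for one interaction; apologizers compensate exploited cooperators."""
    table = {("C", "C"): (R, R), ("C", "D"): (S, T),
             ("D", "C"): (T, S), ("D", "D"): (P, P)}
    pa, pb = table[(move(sa), move(sb))]
    if sa == "A" and move(sb) == "C":
        pa, pb = pa - APOLOGY_COST, pb + APOLOGY_COST
    if sb == "A" and move(sa) == "C":
        pb, pa = pb - APOLOGY_COST, pa + APOLOGY_COST
    return pa, pb

def average_payoffs(population, rounds=30000):
    """Average payoff per strategy over random pairwise encounters."""
    totals, counts = {}, {}
    for _ in range(rounds):
        a, b = random.sample(population, 2)
        pa, pb = payoffs(a, b)
        for s, p in ((a, pa), (b, pb)):
            totals[s] = totals.get(s, 0.0) + p
            counts[s] = counts.get(s, 0) + 1
    return {s: totals[s] / counts[s] for s in totals}

if __name__ == "__main__":
    population = ["C"] * 30 + ["D"] * 30 + ["A"] * 30
    print(average_payoffs(population))  # 'A' gains less than 'D'; 'C' loses less against 'A'
```

In evolutionary-game-theory treatments, payoff transformations like this determine which strategies spread under selection; the sketch above only averages payoffs in a fixed population.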


2021, Vol. 12 (4), pp. 423-454
Author(s): Alexandru Gabriel Cioiu

In the human enhancement literature, there is a recurrent fear that biomedical technologies will negatively impact the autonomy and authenticity of moral agents, even when those agents would end up with better capacities and an improved life with the aid of these technologies. I will explore several ways in which biomedical enhancement may improve the autonomy of moral agents and try to show that biomedical methods are, all things considered, beneficial to our autonomy and authenticity. I will argue that there are instances when it is desirable to limit the autonomy of moral agents, and that strict regulations should be put in place if large numbers of people gain easy access to powerful gene-altering technologies that can affect the lives of future children. I will advocate using assisted reproductive technologies to select the child with the best chance of the best moral life; in doing so, I will analyse several procreative principles proposed by different scholars in the genetic enhancement debate and try to determine which one would be best to adhere to. People usually place high value on the concept of autonomy, and in many cases they end up overestimating it in relation to other moral values. While autonomy is important, it is also important to know how to limit it when reasonable societal norms require it. Sometimes autonomy is defined in strong connection with the concept of authenticity, in the sense that it is not sufficient for our choices to be autonomous if they are not also authentic. I will try to defend the idea that authenticity, too, can be enhanced with the aid of enhancement technologies, which can prove beneficial in our quest to improve our own selves.


AI & Society, 2021
Author(s): Jakob Stenseke

Virtue ethics has often been suggested as a promising recipe for the construction of artificial moral agents, due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has actually attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we critically explore the possibilities and challenges for virtue ethics from a computational perspective. Drawing on previous conceptual and technical work, we outline a version of artificial virtue based on moral functionalism, connectionist bottom-up learning, and eudaimonic reward. We then describe how core features of the outlined theory can be interpreted in terms of functionality, which in turn informs the design of components necessary for virtuous cognition. Finally, we present a comprehensive framework for the technical development of artificial virtuous agents and discuss how they can be implemented in moral environments.
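As a concrete, heavily simplified illustration of what a eudaimonic reward signal might look like computationally, the sketch below trains a trivial preference learner whose reward mixes raw task payoff with a virtue-alignment bonus. Everything here (the action names, payoff numbers, and the eudaimonic_reward function) is a hypothetical construction for exposition, not Stenseke's framework.

```python
# Purely illustrative sketch: a toy preference learner whose reward mixes
# task payoff with a virtue-alignment bonus. Action names, payoff numbers,
# and the reward shaping are hypothetical assumptions, not Stenseke's model.
import random

ACTIONS = ["share", "hoard"]

def eudaimonic_reward(action, task_payoff):
    # Hypothetical "eudaimonic" signal: raw payoff plus a bonus for
    # virtue-consistent behaviour (here, generosity), so that flourishing
    # is not reduced to task success alone.
    virtue_bonus = 0.5 if action == "share" else 0.0
    return task_payoff + virtue_bonus

def train(steps=1000, lr=0.1, eps=0.2):
    prefs = {a: 0.0 for a in ACTIONS}  # learned action preferences
    for _ in range(steps):
        # epsilon-greedy action selection
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(prefs, key=prefs.get)
        # Hoarding yields a slightly higher immediate task payoff...
        task_payoff = 1.0 if action == "hoard" else 0.8
        # ...but the eudaimonic signal still favours the virtuous action.
        reward = eudaimonic_reward(action, task_payoff)
        prefs[action] += lr * (reward - prefs[action])
    return prefs

if __name__ == "__main__":
    print(train())  # "share" should end up preferred despite its lower raw payoff
```

The point is only architectural: because the reward folds a character-level term into the optimization target, the learner's settled disposition, not just its individual decisions, is shaped toward the virtue, which is one way to read "eudaimonic reward" in a connectionist setting.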


Author(s): Sven Nyholm

The rapid introduction of different kinds of robots and other machines with artificial intelligence into different domains of life raises the question of whether robots can be moral agents and moral patients. In other words, can robots perform moral actions? Can robots be on the receiving end of moral actions? To explore these questions, this chapter relates the new area of the ethics of human–robot interaction to traditional ethical theories such as utilitarianism, Kantian ethics, and virtue ethics. These theories were developed with the assumption that the paradigmatic examples of moral agents and moral patients are human beings. As this chapter argues, this creates challenges for anybody who wishes to extend the traditional ethical theories to new questions of whether robots can be moral agents and/or moral patients.


2021, Vol. 11 (3-4), pp. 217-230
Author(s): Francisco Lara

Utilitarianism has been able to respond to many of the objections raised against it by undertaking a major revision of its theory. Basically, this consisted in recognising that its early normative propositions were only viable for agents very different from flesh-and-blood humans. Utilitarians then deduced that, given human limitations, it was most useful for everyone if moral agents did not behave as utilitarians and instead habitually followed certain rules. Important recent advances in neurotechnology suggest that some of these human limitations can be overcome. In this article, after presenting some possible neuro-enhancements, we seek to answer two questions: first, whether they should be accepted by a utilitarian ethic; and second, if accepted, to what extent they would invalidate the revision that allowed utilitarianism to escape the objections.


2021, pp. 1-16
Author(s): Eli Shupe

There has been recent speculation that some (nonhuman) animals are moral agents. Using a retributivist framework, I argue that if some animals are moral agents, then there are circumstances in which some of them deserve punishment. But who is best situated to punish animal wrongdoers? This paper explores the idea that the answer to this question is humans.


2021, pp. 134-154
Author(s): Paddy Jane McShane

The main aim of this chapter is to explore the importance of moral testimony for testifiers. Up to now, writers on moral testimony have by and large focused on how moral testimony impacts dependers. And, in doing so, they’ve tended to theorize about moral testimony assuming a rather abstracted picture of the testifier according to which all that really matters about her is that she’s a credible source. In contrast, this chapter shows how paying attention to the fact that testifiers are not just potential informants but also socially embedded moral agents helps us to discern heretofore unrecognized ways that moral testimony is valuable. More specifically, this chapter argues that dependence on moral testimony is valuable because it can promote the moral development of testifiers. Furthermore, dependence on moral testimony can be a way of respecting and standing by those who are oppressed in the face of their systematic moral subordination. And, finally, for oppressed persons, giving moral testimony can function as a way of resisting oppressive constructions of identity and expressing and retaining self-respect.

