Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use

2021
Vol 27 (1)
Author(s):  
Christian Herzog

Abstract: In the present article, I will advocate caution against developing artificial moral agents (AMAs), based on the notion that the utilization of preliminary forms of AMAs will potentially feed back negatively on the human social system and on human moral thought itself and its value, e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economical use. I will base my arguments on two thought experiments. The first thought experiment deals with the potential to generate a replica of an individual's moral stances with the purpose of increasing what I term 'moral efficiency'. Hence, as a first risk, an unregulated utilization of premature AMAs in a neoliberal capitalist system is likely to disadvantage those who cannot afford 'moral replicas' and to further reinforce social inequalities. The second thought experiment deals with the idea of a 'moral calculator'. As a second risk, I will argue that, even as a device equally accessible to all and aimed at augmenting human moral deliberation, 'moral calculators' as preliminary forms of AMAs are likely to diminish the breadth and depth of the concepts employed in moral arguments. Again, I base this claim on the idea that the currently dominant economic system rewards increases in productivity. However, such increases in efficiency will mostly stem from relying on the outputs of 'moral calculators' without further scrutiny. Premature AMAs will cover only a limited scope of moral argumentation, and over-reliance on them will therefore narrow human moral thought. In addition, as the third risk, I will argue that an increased disregard for the interior of a moral agent may ensue, a trend that can already be observed in the literature.

2017
Vol 21 (3)
pp. 281-298
Author(s):  
Christopher Falzon

This article looks at the 2014 Swedish comedy-drama Force Majeure as a kind of moral thought experiment, but also considers the respects in which it might not fit such a model. The idea of a cinematic ethics, of cinema as providing an avenue for thinking through ethics and exploring ethical questions, finds at least one expression in the idea of film as experimental in this sense. At the same time, simply subsuming film under the philosophical thought experiment risks forgetting what film itself brings to the proceedings, and how the cinematic medium might allow for an experimentation that goes beyond what can be done within the philosophical text. As experimental in a broad sense, Force Majeure evokes an experience, the extraordinary event beyond one's control, capable of putting a moral agent to the test and challenging one's sense of who one is and what one stands for. The film unfolds as a reflection on the results of this encounter with experience and on the kind of moral self this experiment brings to light; in the course of this reflection, it suggests some general conclusions about the human condition.


2012
Vol 32 (2)
pp. 242
Author(s):  
Michelle Ciurria

In traditional analytic philosophy, critical thinking is defined along Cartesian lines as rational and linear reasoning preclusive of intuitions, emotions and lived experience. According to Michael Gilbert, this view – which he calls the Natural Light Theory (NLT) – fails because it arbitrarily excludes standard feminist forms of argumentation and neglects the essentially social nature of argumentation. In this paper, I argue that while Gilbert’s criticism is correct for argumentation in general, NLT fails in a distinctive and particularly problematic manner in moral argumentation contexts. This is because NLT calls for disputants to adopt an impartial attitude, which overlooks the fact that moral disputants qua moral agents are necessarily partial to their own values and interests. Adopting the impartial perspective would therefore alienate them from their values and interests, causing a kind of “moral schizophrenia.” Finally, I urge a re-valuation of epistemic virtue in argumentation.


2018
Vol 9 (1)
pp. 44-61
Author(s):  
André Schmiljun

With the development of autonomous robots, one day perhaps capable of speaking, thinking, learning, self-reflecting and sharing emotions (in fact, with the rise of robots becoming artificial moral agents, AMAs), robot scientists like Abney, Veruggio and Petersen are already optimistic that sooner or later we will need to call such robots "people" or rather "Artificial People" (AP). The paper rejects this forecast because its argument rests on three conflicting metaphysical assumptions. The first is the idea that it is possible to precisely define persons and to apply that definition to robots, or to use it to differentiate human beings from robots. Further, the argument for APs presupposes non-reductive physicalism (the second assumption) and materialism (the third assumption), ultimately producing strange convictions about future robotics. I therefore suggest following Christine Korsgaard's defence of animals as ends in themselves with moral standing. I will show that her argument can be extended to robots as well, at least to robots that are capable of pursuing their own good (even if they are not rational). Korsgaard's interpretation of Kant delivers an option that allows us to leave out complicated metaphysical notions like "person" or "subject" in the debate, without denying robots' status as agents.


Author(s):  
Vinit Haksar

Moral agents are those agents expected to meet the demands of morality. Not all agents are moral agents. Young children and animals, being capable of performing actions, may be agents in the way that stones, plants and cars are not. But though they are agents they are not automatically considered moral agents. For a moral agent must also be capable of conforming to at least some of the demands of morality. This requirement can be interpreted in different ways. On the weakest interpretation it will suffice if the agent has the capacity to conform to some of the external requirements of morality. So if certain agents can obey moral laws such as ‘Murder is wrong’ or ‘Stealing is wrong’, then they are moral agents, even if they respond only to prudential reasons such as fear of punishment and even if they are incapable of acting for the sake of moral considerations. According to the strong version, the Kantian version, it is also essential that the agents should have the capacity to rise above their feelings and passions and act for the sake of the moral law. There is also a position in between which claims that it will suffice if the agent can perform the relevant act out of altruistic impulses. Other suggested conditions of moral agency are that agents should have: an enduring self with free will and an inner life; understanding of the relevant facts as well as moral understanding; and moral sentiments, such as capacity for remorse and concern for others. Philosophers often disagree about which of these and other conditions are vital; the term moral agency is used with different degrees of stringency depending upon what one regards as its qualifying conditions. The Kantian sense is the most stringent. Since there are different senses of moral agency, answers to questions like ‘Are collectives moral agents?’ depend upon which sense is being used. From the Kantian standpoint, agents such as psychopaths, rational egoists, collectives and robots are at best only quasi-moral, for they do not fulfil some of the essential conditions of moral agency.


2020
pp. 349-359
Author(s):  
Deborah G. Johnson
Keith W. Miller

Author(s):  
Alan E. Singer

An aspect of the relationship between philosophy and computer engineering is considered, with particular emphasis upon the design of artificial moral agents. Top-down vs. bottom-up approaches to ethical behavior are discussed, followed by an overview of some of the ways in which traditional ethics has informed robotics. Two macro-trends are then identified, one involving the evolution of moral consciousness in man and machine, the other involving the fading away of the boundary between the real and the virtual.


2019
Vol 26 (2)
pp. 501-532
Author(s):  
José-Antonio Cervantes
Sonia López
Luis-Felipe Rodríguez
Salvador Cervantes
Francisco Cervantes
...  

2020
Vol 64
pp. 117-125
Author(s):  
Salvador Cervantes
Sonia López
José-Antonio Cervantes

2018
Vol 1 (1)
pp. 59-78
Author(s):  
Drew M. Dalton

Abstract: Objects are inert, passive, devoid of will, and as such bear no intrinsic value or moral worth. This claim is supported by the argument that to be considered a moral agent one must have a conscious will and be sufficiently free to act in accordance with that will. Since material objects, it is assumed, have neither an active will nor freedom, they should not be considered moral agents nor bearers of intrinsic ethical value. Thus, the apparent "moral neutrality" of objects rests upon a kind of subject/object or mind/body dualism. The aim of this paper is to explore two paths by which western thought can escape this dualism, re-valuate the alleged "moral neutrality" of material objects, and initiate a sort of "object-oriented ethics", albeit with surprising results. To do so, this paper explores the work of Arthur Schopenhauer and Baruch Spinoza to interrogate both the claim that material objects have no will and the claim that freedom is the necessary condition for ethical responsibility. The paper concludes by arguing that not only should objects be seen as bearers of their own ethical value, but that a determinate judgement can be made regarding that value through a basic understanding of the laws of physics.

