Artificial virtue: the machine question and perceptions of moral character in artificial moral agents

AI & Society ◽  
2020 ◽  
Vol 35 (4) ◽  
pp. 795-809 ◽  
Author(s):  
Patrick Gamez ◽  
Daniel B. Shank ◽  
Carson Arnold ◽  
Mallory North


2018 ◽  
Vol 9 (1) ◽  
pp. 44-61
Author(s):  
André Schmiljun

With the development of autonomous robots that may one day be capable of speaking, thinking, learning, self-reflecting, and sharing emotions, in short, with the rise of robots becoming artificial moral agents (AMAs), robot scientists such as Abney, Veruggio, and Petersen are already optimistic that sooner or later we will need to call such robots “people” or rather “Artificial People” (AP). This paper rejects that forecast because its argument rests on three conflicting metaphysical assumptions. The first is the idea that persons can be precisely defined and that such a definition can be applied to robots or used to differentiate human beings from robots. Further, the argument for APs presupposes non-reductive physicalism (second assumption) and materialism (third assumption), ultimately producing strange convictions about future robotics. I therefore suggest following Christine Korsgaard’s defence of animals as ends in themselves with moral standing. I show that her argument can be extended to robots as well, at least to robots capable of pursuing their own good (even if they are not rational). Korsgaard’s interpretation of Kant offers an option that allows us to leave complicated metaphysical notions like “person” or “subject” out of the debate without denying robots’ status as agents.


2020 ◽  
pp. 349-359
Author(s):  
Deborah G. Johnson ◽  
Keith W. Miller

Author(s):  
Alan E. Singer

An aspect of the relationship between philosophy and computer engineering is considered, with particular emphasis upon the design of artificial moral agents. Top-down vs. bottom-up approaches to ethical behavior are discussed, followed by an overview of some of the ways in which traditional ethics has informed robotics. Two macro-trends are then identified, one involving the evolution of moral consciousness in man and machine, the other involving the fading away of the boundary between the real and the virtual.


2019 ◽  
Vol 26 (2) ◽  
pp. 501-532 ◽  
Author(s):  
José-Antonio Cervantes ◽  
Sonia López ◽  
Luis-Felipe Rodríguez ◽  
Salvador Cervantes ◽  
Francisco Cervantes ◽  
...  

2020 ◽  
Vol 64 ◽  
pp. 117-125
Author(s):  
Salvador Cervantes ◽  
Sonia López ◽  
José-Antonio Cervantes

2007 ◽  
Vol 7 ◽  
pp. 129-134
Author(s):  
Michael Nagenborg

In this paper I will argue that artificial moral agents (AMAs) are a fitting subject of intercultural information ethics because of the impact they may have on the relationship between information-rich and information-poor countries. I will first give a limiting definition of AMAs and then discuss two different types of AMAs with different implications from an intercultural perspective. While AMAs following preset rules might raise concerns about digital imperialism, AMAs that are able to adjust to their users’ behavior lead us to the question of what makes an AMA “moral”. I will argue that this question presents a good starting point for an intercultural dialogue, one which might help to overcome the notion of Africa as a mere victim.


Author(s):  
Laura L. Pană

We live today in a partially artificial intelligent environment in which human intelligent agents are accompanied and assisted by artificial intelligent agents that are continually endowed with more functions, skills, and even competences, and that have an increasingly significant involvement and influence in the social environment. Therefore, artificial agents also need to become moral agents. Human and artificial intelligent agents cooperate in various complex activities and thus develop some common characteristics and properties. These shared features, in turn, change and progress together with the growing requirements of the different types of activities. All these changes produce a common evolution of human and artificial intelligent agents. Under these new conditions, human and artificial agents also need a shared ethics. Artificial ethics can be philosophically grounded, scientifically developed, and technically implemented; it will be a clearer, more coherent, and more consistent ethics, suitable for both human and artificial moral agents, and it will be the first effective ethics.

