The Ethics of Human–Robot Interaction and Traditional Moral Theories

Author(s):  
Sven Nyholm

The rapid introduction of robots and other artificially intelligent machines into many domains of life raises the question of whether robots can be moral agents and moral patients. In other words, can robots perform moral actions? Can robots be on the receiving end of moral actions? To explore these questions, this chapter relates the new area of the ethics of human–robot interaction to traditional ethical theories such as utilitarianism, Kantian ethics, and virtue ethics. These theories were developed on the assumption that the paradigmatic examples of moral agents and moral patients are human beings. As the chapter argues, this creates challenges for anybody who wishes to extend the traditional ethical theories to the new questions of whether robots can be moral agents and/or moral patients.

2019
Vol 7 (6)
pp. 1040-1047
Author(s):
Rajesh K
Rajasekaran V

Purpose of the study: The study examines the limitations of normative ethics and analyzes the anthropocentrism in Kim Stanley Robinson’s 2312 through the actions and duties of its characters.

Methodology: The article uses normative ethics as its methodology. Normative ethics is the study of ethical action: it lays down rules about how we ought to act and decide. The study therefore draws on three normative ethical theories, the utilitarian approach, Kantian ethics, and virtue ethics, to judge which duties are right and which are wrong.

Main Findings: Normative ethics turns out to offer a one-dimensional approach. Each of the three theories works with its own specific code: utilitarianism focuses on good outcomes, Kantian ethics on good rules followed from duty, and virtue ethics on good people. Yet all three share the common objective of attending only to human beings (sentient entities) while omitting other entities (plants and animals). All of these normative ethics therefore have limitations, prescribing duties without regard for consequences and situations. In conclusion, this code of normative ethics proves to be anthropocentric. Moreover, Swan’s actions and supposedly rational behaviour lead her to fail miserably on Mercury through the construction of the biome and the creation of quantum computers, which is why, in the end, the space dwellers want to move from space to Earth to rebuild the biome.

Applications of this study: The study analyses normative ethics in detail under the utilitarian approach, Kantian ethics, and virtue ethics. These philosophical frameworks can benefit researchers, especially in the humanities and social sciences, who wish to apply them in their own work.

Novelty/Originality of this study: The study analyzes the anthropocentric attitude of the character Swan in 2312, based on her actions and duties, through the code of normative ethics (utilitarianism, Kantian ethics, and virtue ethics).


AI Magazine
2015
Vol 36 (3)
pp. 107-112
Author(s):
Adam B. Cohen
Sonia Chernova
James Giordano
Frank Guerin
Kris Hauser
...  

The AAAI 2014 Fall Symposium Series was held Thursday through Saturday, November 13–15, at the Westin Arlington Gateway in Arlington, Virginia, adjacent to Washington, DC. The titles of the seven symposia were Artificial Intelligence for Human-Robot Interaction; Energy Market Prediction; Expanding the Boundaries of Health Informatics Using AI; Knowledge, Skill, and Behavior Transfer in Autonomous Robots; Modeling Changing Perspectives: Reconceptualizing Sensorimotor Experiences; Natural Language Access to Big Data; and The Nature of Humans and Machines: A Multidisciplinary Discourse. The highlights of each symposium are presented in this report.


AI Magazine
2017
Vol 37 (4)
pp. 83-88
Author(s):
Christopher Amato
Ofra Amir
Joanna Bryson
Barbara Grosz
Bipin Indurkhya
...  

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2016 Spring Symposium Series on Monday through Wednesday, March 21-23, 2016, at Stanford University. The titles of the seven symposia were (1) AI and the Mitigation of Human Error: Anomalies, Team Metrics and Thermodynamics; (2) Challenges and Opportunities in Multiagent Learning for the Real World; (3) Enabling Computing Research in Socially Intelligent Human-Robot Interaction: A Community-Driven Modular Research Platform; (4) Ethical and Moral Considerations in Non-Human Agents; (5) Intelligent Systems for Supporting Distributed Human Teamwork; (6) Observational Studies through Social Media and Other Human-Generated Content; and (7) Well-Being Computing: AI Meets Health and Happiness Science.


2019
Vol 30 (1)
pp. 7-8
Author(s):
Dora Maria Ballesteros

Artificial intelligence (AI) is an interdisciplinary subject in science and engineering that makes it possible for machines to learn from data. Artificial intelligence applications include prediction, recommendation, classification and recognition, object detection, natural language processing, and autonomous systems, among others. The topics of the articles in this special issue include deep learning applied to medicine [1, 3], support vector machines applied to ecosystems [2], human-robot interaction [4], clustering for the identification of anomalous patterns in communication networks [5], expert systems for the simulation of natural disaster scenarios [6], real-time artificial intelligence algorithms [7], and big data analytics for natural disasters [8].


Author(s):  
Michael Slote

Moral psychology as a discipline is centrally concerned with psychological issues that arise in connection with the moral evaluation of actions. It deals with the psychological presuppositions of valid morality, that is, with assumptions it seems necessary for us to make in order for there to be such a thing as objective or binding moral requirements: for example, if we lack free will or are all incapable of unselfishness, then it is not clear how morality can really apply to human beings. Moral psychology also deals with what one might call the psychological accompaniments of actual right, or wrong, action, for example, with questions about the nature and possibility of moral weakness or self-deception, and with questions about the kinds of motives that ought to motivate moral agents. Moreover, in the approach to ethics known as ‘virtue ethics’, questions about right and wrong action merge with questions about the motives, dispositions, and abilities of moral agents, and moral psychology plays a more central role than it does in other forms of ethical theory.


Author(s):  
Hilde Lindemann

This chapter focuses on the three moral theories that have long dominated Western philosophy. Overviews of social contract theory, utilitarianism, and Kantian ethics are followed by a criticism, from a feminist point of view, of the failings the theories have in common: distorted pictures of the persons who serve as moral agents in the theories, of the societies these persons inhabit, and of the understanding of rationality the theories presuppose. The chapter then explains why the theories cannot readily be employed to address these failings.


2020
pp. 1556-1572
Author(s):
Jordi Vallverdú
Toyoaki Nishida
Yoshisama Ohmoto
Stuart Moran
Sarah Lázare

Empathy is a basic emotional trigger for human beings, especially in the regulation of social relationships and behaviour. The main aim of this paper is to study whether people's empathic reactions towards robots change depending on the information given to them about the robot before the interaction. The use of false data about the robot's skills creates different levels of what we call ‘fake empathy’. The study performs an experiment in a Wizard-of-Oz (WOZ) environment in which subjects (n = 17) interact with the same robot while believing it to be one of up to three different robots. Each robot scenario provides a different ‘humanoid’ description, and our hypothesis is that the more human-like the robot appears, the more empathic the human responses will be. Results were obtained from questionnaires and multi-angle video recordings. The positive results reinforce our hypothesis, although we recommend a new, larger, and more robust experiment.


Author(s):  
Silviya Serafimova

Moral implications of the algorithm-based decision-making process require special attention within the field of machine ethics. Specifically, the research focuses on clarifying why, even if one assumes the existence of ethical intelligent agents that work well in epistemic terms, it does not necessarily follow that they meet the requirements of autonomous moral agents such as human beings. To exemplify some of the difficulties in arguing for implicit and explicit ethical agents in Moor’s sense, three first-order normative theories in the field of machine ethics are put to the test: Powers’ prospect for a Kantian machine, Anderson and Anderson’s reinterpretation of act utilitarianism, and Howard and Muntean’s prospect for a moral machine based on a virtue-ethical approach. By comparing and contrasting the three theories, and by clarifying the gist of the differences between the processes of calculation and moral estimation, the possibility of building what one might call strong “moral” AI scenarios is questioned. The possibility of weak “moral” AI scenarios is likewise discussed critically.
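To make concrete what the "process of calculation" amounts to in such first-order approaches, here is a minimal sketch, in Python, of a toy act-utilitarian action selector. It is not taken from Serafimova's paper or from Anderson and Anderson's system; every class, action name, and number below is hypothetical, chosen only to illustrate how an explicit ethical agent in Moor's sense might score candidate actions by expected net utility.

# Illustrative sketch only: a toy act-utilitarian chooser. All names and
# figures are hypothetical and do not come from the works discussed above.
from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    person: str          # who is affected
    pleasure: float      # estimated benefit (arbitrary units)
    displeasure: float   # estimated harm (arbitrary units)
    probability: float   # chance the outcome occurs

@dataclass
class Action:
    name: str
    outcomes: List[Outcome]

    def expected_net_utility(self) -> float:
        # Probability-weighted sum of (pleasure - displeasure) over everyone affected.
        return sum(o.probability * (o.pleasure - o.displeasure) for o in self.outcomes)

def choose_action(actions: List[Action]) -> Action:
    # The act-utilitarian rule: pick the action with the greatest expected net utility.
    return max(actions, key=lambda a: a.expected_net_utility())

if __name__ == "__main__":
    remind = Action("remind the patient to take medication",
                    [Outcome("patient", pleasure=8.0, displeasure=2.0, probability=0.9)])
    stay_silent = Action("do not remind",
                         [Outcome("patient", pleasure=1.0, displeasure=6.0, probability=0.9)])
    best = choose_action([remind, stay_silent])
    print(best.name, round(best.expected_net_utility(), 2))

On this toy model the machine's "ethics" is exhausted by arithmetic over numbers someone else has supplied, which is precisely the gap between calculation and moral estimation that the abstract points to.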


2018
Vol 14 (1)
pp. 44-59
Author(s):
Jordi Vallverdú
Toyoaki Nishida
Yoshisama Ohmoto
Stuart Moran
Sarah Lázare

Empathy is a basic emotional trigger for human beings, especially in the regulation of social relationships and behaviour. The main aim of this paper is to study whether people's empathic reactions towards robots change depending on the information given to them about the robot before the interaction. The use of false data about the robot's skills creates different levels of what we call ‘fake empathy’. The study performs an experiment in a Wizard-of-Oz (WOZ) environment in which subjects (n = 17) interact with the same robot while believing it to be one of up to three different robots. Each robot scenario provides a different ‘humanoid’ description, and our hypothesis is that the more human-like the robot appears, the more empathic the human responses will be. Results were obtained from questionnaires and multi-angle video recordings. The positive results reinforce our hypothesis, although we recommend a new, larger, and more robust experiment.

