The Ethical Significance of Human Likeness in Robotics and AI

2019 · Vol 10 (2) · pp. 52-67
Author(s):  
Peter Remmers

A defining goal of research in AI and robotics is to build technical artefacts as substitutes, assistants or enhancements of human action and decision-making. But both in reflection on these technologies and in interaction with the respective technical artefacts, we sometimes encounter certain kinds of human likenesses. To clarify their significance, three aspects are highlighted. First, I will broadly investigate some relations between humans and artificial agents by recalling certain points from the debates on Strong AI, on Turing’s Test, on the concept of autonomy and on anthropomorphism in human-machine interaction. Second, I will argue for the claim that there are no serious ethical issues involved in the theoretical aspects of technological human likeness. Third, I will suggest that although human likeness may not be ethically significant on the philosophical and conceptual levels, strategies to use anthropomorphism in the technological design of human-machine collaborations are ethically significant, because artificial agents are specifically designed to be treated in ways we usually treat humans.

Author(s):  
Huma Shah ◽  
Kevin Warwick

Trust is an expectation of certainty that allows us to transact confidently. However, how accurate is our decision-making in human-machine interaction? In this chapter we present evidence from experimental conditions in which human interrogators used their judgement of what constitutes a satisfactory response, trusting that a hidden interlocutor was human when it was actually a machine. A simultaneous-comparison Turing test is presented, with conversation between a human judge and two hidden entities, from Turing100 at Bletchley Park, UK. Results of post-test conversational analysis by the audience at Turing Education Day show that more than 30% made the same identification errors as the Turing test judge. Trust is found to be misplaced in subjective certainty, which could lead to susceptibility to deception in cyberspace.
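To make the post-test analysis concrete, here is a minimal sketch of how such misidentification rates might be tallied; the verdict data, labels, and helper function below are hypothetical illustrations, not figures from the actual Turing100 study.

```python
# Hypothetical sketch of the post-test analysis described above: given the
# judge's verdicts and audience verdicts on the same transcripts, compute
# how often the audience repeats the judge's misidentifications.
# All names and data here are illustrative, not from the actual study.

def misidentification_rate(true_labels, verdicts):
    """Fraction of transcripts whose hidden entity was misidentified."""
    wrong = sum(1 for t, v in zip(true_labels, verdicts) if t != v)
    return wrong / len(true_labels)

# Each entry: the hidden entity's true nature vs. a rater's verdict.
true_labels = ["machine", "human", "machine", "human", "machine"]
judge_verdicts = ["human", "human", "machine", "human", "human"]  # judge errs twice

audience_verdicts = [
    ["human", "human", "machine", "human", "machine"],   # repeats one judge error
    ["human", "human", "human", "human", "human"],       # repeats both judge errors
    ["machine", "human", "machine", "human", "machine"], # no errors
]

# Transcripts the judge got wrong, and the share of audience members
# who repeated at least one of those errors.
judge_errors = {i for i, (t, v) in enumerate(zip(true_labels, judge_verdicts)) if t != v}
repeaters = sum(
    1 for verdicts in audience_verdicts
    if any(verdicts[i] != true_labels[i] for i in judge_errors)
)
print(f"judge error rate: {misidentification_rate(true_labels, judge_verdicts):.0%}")
print(f"audience repeating judge errors: {repeaters / len(audience_verdicts):.0%}")
```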


Author(s):  
Reyhan Aydoğan ◽  
Tim Baarslag ◽  
Enrico Gerding

Conflict resolution is essential to obtain cooperation in many scenarios, such as politics and business, as well as in our day-to-day life. The importance of conflict resolution has driven research in many fields, including anthropology, social science, psychology, mathematics, biology and, more recently, artificial intelligence. Computer science and artificial intelligence have, in turn, been inspired by theories and techniques from these disciplines, which has led to a variety of computational models and approaches, such as automated negotiation, group decision making, argumentation, preference aggregation, and human-machine interaction. To bring together the different research strands and disciplines in conflict resolution, the Workshop on Conflict Resolution in Decision Making (COREDEMA) was organized. This special issue benefited from the workshop series and consists of significantly extended and revised selected papers from the ECAI 2016 COREDEMA workshop, as well as completely new contributions.
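As a concrete illustration of one of the computational approaches listed above, here is a minimal sketch of preference aggregation using a Borda count; the agents, proposals, and preference orders are invented for illustration and are not drawn from the workshop papers.

```python
# Minimal sketch of preference aggregation, one family of computational
# approaches to conflict resolution. Borda count: with n options, an option
# scores (n - 1) points for a first place, (n - 2) for second, and so on;
# the highest total wins. Agents and preferences are illustrative only.

from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate a list of preference orders (best first) into one ranking."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for place, option in enumerate(ranking):
            scores[option] += n - 1 - place
    return sorted(scores, key=scores.get, reverse=True)

# Three agents with conflicting preferences over three proposals.
rankings = [
    ["proposal_a", "proposal_b", "proposal_c"],
    ["proposal_b", "proposal_c", "proposal_a"],
    ["proposal_b", "proposal_a", "proposal_c"],
]
print(borda_aggregate(rankings))  # ['proposal_b', 'proposal_a', 'proposal_c']
```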


2020 · Vol 07 (01) · pp. 15-24
Author(s):  
Paul Bello ◽  
Will Bridewell

If artificial agents are to be created such that they occupy space in our social and cultural milieu, then we should expect them to be targets of folk psychological explanation. That is to say, their behavior ought to be explicable in terms of beliefs, desires, obligations, and especially intentions. Herein, we focus on the concept of intentional action and especially its relationship to consciousness. After outlining some lessons learned from philosophy and psychology that give insight into the structure of intentional action, we find that attention plays a critical role in agency and, indeed, in the production of intentional action. We argue that the insights offered by the literature on agency and intentional action motivate a particular kind of computational cognitive architecture, one that has not been well explicated or computationally fleshed out among the community of AI researchers and computational cognitive scientists who work on cognitive systems. To give a sense of what such a system might look like, we present the ARCADIA attention-driven cognitive system as a first step toward an architecture to support the type of agency that rich human–machine interaction will undoubtedly demand.
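To illustrate the general idea of attention-driven agency in code, here is a toy perceive-attend-act loop in which only the attended percept can drive action. This is a minimal sketch of the concept, not ARCADIA's actual architecture; every component name and scoring rule below is invented.

```python
# Toy sketch of an attention-driven control loop: attention gates which
# percept is allowed to influence action on each cycle.
# This is NOT ARCADIA's implementation; all components are invented.

import random

def sense(environment):
    """Produce candidate percepts, each with a bottom-up salience score."""
    return [{"content": obj, "salience": random.random()} for obj in environment]

def attend(percepts, task_bias):
    """Select the focus of attention: salience weighted by task relevance."""
    return max(percepts, key=lambda p: p["salience"] + task_bias.get(p["content"], 0.0))

def act(focus):
    """Only the attended percept drives the agent's next action."""
    return f"inspect {focus['content']}"

environment = ["cup", "door", "person"]
task_bias = {"person": 1.0}  # top-down bias: the current task concerns people

for step in range(3):
    focus = attend(sense(environment), task_bias)
    print(step, act(focus))
```

The design point the sketch makes is the one the abstract emphasizes: agency here is not a direct mapping from stimuli to responses, because the attend step mediates between perception and action.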


Author(s):  
Eva Wiese ◽  
Tyler Shaw ◽  
Daniel Lofaro ◽  
Carryl Baldwin

When we interact with others, we make inferences about their internal states (i.e., intentions, emotions) and use this information to understand and predict their behavior. Reasoning about the internal states of others is referred to as mentalizing and presupposes that our social partners are believed to have a mind. Seeing mind in others increases trust, prosocial behavior, and feelings of social connection, and leads to improved joint performance. However, while human agents trigger mind perception by default, artificial agents are not automatically treated as intentional entities but need to be designed to elicit this attribution. The panel addresses this issue by discussing how mind attribution to robots and other automated agents can be elicited by design, what the effects of mind perception are on attitudes and performance in human-robot and human-machine interaction, and what behavioral and neuroscientific paradigms can be used to investigate these questions. Application areas covered include social robotics, automation, driver-vehicle interfaces, and others.

