A critical analysis of the representations of older adults in the field of human–robot interaction

AI & Society ◽  
2021 ◽  
Author(s):  
Dafna Burema

Abstract: This paper argues that there is a need to critically assess bias in the representations of older adults in the field of Human–Robot Interaction. This need stems from the recognition that technology development is a socially constructed process that has the potential to reinforce problematic understandings of older adults. Based on a qualitative content analysis of 96 academic publications, this paper shows that older adults are represented as: frail by default, independent by effort; silent and technologically illiterate; burdensome; and problematic for society. Few counternarratives in these documents challenge such essentialist representations. In these texts, the goal of social robots in elder care is to “enable” older adults to “better” themselves. The older body is seen as “fixable” with social robots, reinforcing an ageist and neoliberal narrative: older adults are reduced to potential care receivers in ways that shift care responsibilities away from the welfare state onto the individual.
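
The tallying step of such a qualitative content analysis can be illustrated programmatically. Below is a minimal, hypothetical Python sketch that counts how many publications carry each representation code; the code labels mirror the five representations named above, but the corpus, its assignments, and the counts are invented for illustration and are not the paper's data.

```python
from collections import Counter

# Hypothetical coded corpus: each publication maps to the representation
# codes assigned during qualitative coding. The five code labels mirror the
# categories reported in the paper; the assignments here are invented.
coded_corpus = {
    "pub_001": ["frail_by_default", "burdensome"],
    "pub_002": ["technologically_illiterate", "problematic_for_society"],
    "pub_003": ["frail_by_default", "independent_by_effort"],
    # ... one entry per publication in the 96-document sample
}

# Count in how many publications each code appears (presence, not frequency).
code_counts = Counter(
    code for codes in coded_corpus.values() for code in set(codes)
)

for code, n in code_counts.most_common():
    share = n / len(coded_corpus)
    print(f"{code:28s} {n:3d} publications ({share:.0%})")
```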

2021 ◽  
Vol 11 (21) ◽  
pp. 10136
Author(s):  
Anouk van Maris ◽  
Nancy Zook ◽  
Sanja Dogramadzi ◽  
Matthew Studley ◽  
Alan Winfield ◽  
...  

This work explored the use of human–robot interaction research to investigate robot ethics. A longitudinal human–robot interaction study was conducted with self-reported healthy older adults to determine whether the expression of artificial emotions by a social robot could result in emotional deception and emotional attachment. The findings from this study highlight that there currently appear to be no adequate tools or means to determine the ethical impact and concerns ensuing from long-term interactions between social robots and older adults. This raises the questions of whether we should continue the fundamental development of social robots if we cannot determine their potential negative impact, and whether we should shift our focus to the development of human–robot interaction assessment tools that provide more objective measures of ethical impact.


2020 ◽  
Vol 14 ◽  
Author(s):  
Katharina Kühne ◽  
Martin H. Fischer ◽  
Yuefang Zhou

Background: The increasing involvement of social robots in human lives raises the question of how humans perceive social robots. Little is known about human perception of synthesized voices.

Aim: To investigate which synthesized voice parameters predict the speaker's eeriness and voice likability; to determine whether individual listener characteristics (e.g., personality, attitude toward robots, age) influence synthesized voice evaluations; and to explore which paralinguistic features subjectively distinguish humans from robots/artificial agents.

Methods: 95 adults (62 females) listened to randomly presented audio clips of three categories: synthesized (Watson, IBM), humanoid (robot Sophia, Hanson Robotics), and human voices (five clips/category). Voices were rated on intelligibility, prosody, trustworthiness, confidence, enthusiasm, pleasantness, human-likeness, likability, and naturalness. Speakers were rated on appeal, credibility, human-likeness, and eeriness. Participants' personality traits, attitudes toward robots, and demographics were obtained.

Results: The human voice and human speaker characteristics received reliably higher scores on all dimensions except for eeriness. Synthesized voice ratings were positively related to participants' agreeableness and neuroticism. Females rated synthesized voices more positively on most dimensions. Surprisingly, interest in social robots and attitudes toward robots played almost no role in voice evaluation. Contrary to the expectations of an uncanny valley, voices and speakers rated as more human-like seemed less eerie to the participants. Moreover, the more humanlike a speaker's voice, the more it was liked, although this applied to only one of the synthesized voices. Finally, pleasantness and trustworthiness of the synthesized voice predicted the likability of the speaker's voice. Qualitative content analysis identified intonation, sound, emotion, and imageability/embodiment as diagnostic features.

Discussion: Humans clearly prefer human voices, but manipulating diagnostic speech features might increase acceptance of synthesized voices and thereby support human-robot interaction. There is limited evidence that human-likeness of a voice is negatively linked to the perceived eeriness of the speaker.
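
The reported finding that pleasantness and trustworthiness predict likability corresponds to a simple multiple regression. A minimal sketch with synthetic ratings follows; the variable names echo the abstract, but the data, effect sizes, and scales are invented for illustration, not the study's results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic per-listener ratings on 7-point scales (all values invented).
n = 95
pleasantness = rng.uniform(1, 7, n)
trustworthiness = rng.uniform(1, 7, n)
likability = (
    0.5 * pleasantness + 0.3 * trustworthiness + rng.normal(0, 0.5, n)
)

# Regress likability on the two predictors named in the abstract.
X = sm.add_constant(np.column_stack([pleasantness, trustworthiness]))
model = sm.OLS(likability, X).fit()
print(model.summary())
```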


Author(s):  
Mark Coeckelbergh

Abstract: Social robots are designed to facilitate interaction with humans through “social” behavior. As literature in the field of human–robot interaction shows, this sometimes leads to “bad” behavior towards the robot or “abuse” of the robot. Virtue ethics offers a helpful way to capture the intuition that although nobody is harmed when a robot is “mistreated”, there is still something wrong with this kind of behavior: it damages the moral character of the person engaging in it, especially when it is habitual. However, one of the limitations of current applications of virtue ethics to robots and technology is their focus on the individual and individual behavior and insufficient attention to the temporal and bodily aspects of virtue. After positioning its project in relation to the work of Shannon Vallor and Robert Sparrow, the present paper explores what it would mean to interpret and apply virtue ethics in a more social and relational way, one that takes into account the link between virtue and the body. In particular, it proposes (1) to use the notion of practice to conceptualize how the individual behavior, the virtue of the person, and the technology in question are related to their wider social-practical context and history, and (2) to use the notions of habit and performance to conceptualize the incorporation and performance of virtue. This involves drawing on the work of MacIntyre, revised through Bourdieu's notion of habit, in order to highlight the temporal, embodied, and performative aspects of virtue. The paper then shows what this means for thinking about the moral standing of social robots, for example for the ethics of sex robots and for evaluating abusive behaviors such as kicking robots. The paper concludes that this approach not only gives us a better account of what happens when people behave “badly” towards social robots, but also suggests a more comprehensive virtue ethics of technology that is fully relational, performance-oriented, and able not only to acknowledge but also to theorize the temporal and bodily dimensions of virtue.


Author(s):  
Vignesh Prasad ◽  
Ruth Stock-Homburg ◽  
Jan Peters

Abstract: For some years now, the use of social, anthropomorphic robots in various situations has been on the rise. These are robots developed to interact with humans and equipped with corresponding extremities. They already support human users in various industries, such as retail, gastronomy, hotels, education, and healthcare. During such Human-Robot Interaction (HRI) scenarios, physical touch plays a central role in the various applications of social robots, as interactive non-verbal behaviour is a key factor in making the interaction more natural. Shaking hands is a simple, natural interaction used commonly in many social contexts and is seen as a symbol of greeting, farewell, and congratulations. In this paper, we review the existing state of Human-Robot Handshaking research, categorise the works based on their focus areas, and draw out the major findings of these areas while analysing their pitfalls. We mainly find that some form of synchronisation exists during the different phases of the interaction. In addition, we find that factors like gaze, voice, and facial expressions can affect the perception of a robotic handshake, and that internal factors like personality and mood can affect the way handshaking behaviours are executed by humans. Based on these findings and insights, we finally discuss possible ways forward for research on such physically interactive behaviours.
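
The phased structure of a handshake that such surveys describe lends itself to a finite-state controller. The following is a minimal, hypothetical Python sketch: the phase names, sensor cues, and force threshold are illustrative assumptions, not taken from any surveyed system.

```python
from enum import Enum, auto

class Phase(Enum):
    IDLE = auto()
    REACH = auto()
    GRASP = auto()
    SHAKE = auto()
    RELEASE = auto()

GRIP_THRESHOLD_N = 2.0  # hypothetical contact-force threshold (newtons)

def next_phase(phase: Phase, hand_detected: bool, grip_force: float,
               shake_cycles: int) -> Phase:
    """Advance a handshake controller through its phases.

    Transitions are driven by simple sensor cues; a real controller would
    also synchronise shaking frequency and amplitude with the human
    partner, as the surveyed works report.
    """
    if phase is Phase.IDLE and hand_detected:
        return Phase.REACH
    if phase is Phase.REACH and grip_force >= GRIP_THRESHOLD_N:
        return Phase.GRASP
    if phase is Phase.GRASP:
        return Phase.SHAKE
    if phase is Phase.SHAKE and shake_cycles >= 3:
        return Phase.RELEASE
    if phase is Phase.RELEASE and grip_force < GRIP_THRESHOLD_N:
        return Phase.IDLE
    return phase  # no transition condition met; hold the current phase
```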


Author(s):  
Ruth Stock-Homburg

Abstract: Knowledge production within the interdisciplinary field of human–robot interaction (HRI) with social robots has accelerated, despite the continued fragmentation of the research domain. Together, these features make it hard to remain at the forefront of research or to assess the collective evidence pertaining to specific areas, such as the role of emotions in HRI. This systematic review of state-of-the-art research into humans' recognition of and responses to artificial emotions of social robots during HRI encompasses the years 2000–2020. In accordance with a stimulus–organism–response framework, the review advances robotic psychology by revealing current knowledge about (1) the generation of artificial robotic emotions (stimulus), (2) human recognition of robotic artificial emotions (organism), and (3) human responses to robotic emotions (response), as well as (4) other contingencies that affect emotions as moderators.


Author(s):  
Matthias Scheutz ◽  
Paul Schermerhorn

Effective decision-making under real-world conditions can be very difficult, as purely rational methods of decision-making are often not feasible or applicable. Psychologists have long hypothesized that humans are able to cope with time and resource limitations by employing affective evaluations rather than rational ones. In this chapter, we present DIARC, the Distributed Integrated Affect Reflection Cognition architecture for social robots intended for natural human-robot interaction, and demonstrate the utility of its human-inspired affect mechanisms for the selection of tasks and goals. Specifically, we show that DIARC incorporates affect mechanisms throughout the architecture, based on “evaluation signals” generated in each architectural component to obtain quick and efficient estimates of the state of that component, and illustrate the operation and utility of these mechanisms with examples from human-robot interaction experiments.
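
While DIARC itself is a large distributed architecture, the core idea of per-component “evaluation signals” biasing goal selection can be illustrated compactly. The Python sketch below is a hypothetical stand-in, not the actual DIARC implementation: each component reports a cheap scalar evaluation of its own state, and the aggregate affective estimate shifts goal selection between benefit-seeking and risk-avoidance.

```python
from dataclasses import dataclass

@dataclass
class Component:
    """A component that emits a scalar evaluation signal in [-1, 1]
    summarizing its own state (hypothetical stand-in for DIARC's
    per-component evaluation signals)."""
    name: str
    evaluation: float

@dataclass
class Goal:
    name: str
    expected_benefit: float  # rational utility estimate
    risk: float              # cost if things go badly

def affective_state(components: list[Component]) -> float:
    # Quick, deliberation-free aggregate estimate of system state.
    return sum(c.evaluation for c in components) / len(components)

def select_goal(goals: list[Goal], components: list[Component]) -> Goal:
    # Negative affect makes the agent risk-averse; positive affect
    # lets expected benefit dominate.
    mood = affective_state(components)
    return max(goals, key=lambda g: g.expected_benefit + mood * g.risk)

components = [Component("vision", -0.6), Component("navigation", -0.9)]
goals = [Goal("explore", expected_benefit=0.9, risk=0.8),
         Goal("recharge", expected_benefit=0.4, risk=0.1)]
print(select_goal(goals, components).name)  # strongly negative mood -> "recharge"
```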


2020 ◽  
Vol 32 (1) ◽  
pp. 7-7
Author(s):  
Masahiro Shiomi ◽  
Hidenobu Sumioka ◽  
Hiroshi Ishiguro

As social robot research advances, the interaction distance between people and robots is decreasing. Indeed, although we were once required to maintain a certain physical distance from traditional industrial robots for safety, we can now interact with social robots at such close range that we can touch them. The physical existence of social robots will be essential to realizing natural and acceptable interactions with people in daily environments. Because social robots function in our daily environments, we must design scenarios where robots interact closely with humans by considering various viewpoints. Interactions that involve touching robots strongly influence changes in a person's behavior. Therefore, robotics researchers and developers need to design such scenarios carefully. Based on these considerations, this special issue focuses on close human-robot interactions. This special issue on “Human-Robot Interaction in Close Distance” includes a review paper and 11 other interesting papers covering various topics such as social touch interactions, non-verbal behavior design for touch interactions, child-robot interactions including physical contact, conversations with physical interactions, motion copying systems, and mobile human-robot interactions. We thank all the authors and reviewers of the papers and hope this special issue will help readers better understand human-robot interaction at close distance.


Author(s):  
Aike C. Horstmann ◽  
Nicole C. Krämer

Abstract: Since social robots are rapidly advancing and thus increasingly entering people's everyday environments, interactions with robots are also progressing. For these interactions to be designed and executed successfully, this study draws on insights from attribution theory to explore the circumstances under which people attribute responsibility for a robot's actions to the robot. In an experimental online study with a 2 × 2 × 2 between-subjects design (N = 394), participants read a vignette describing the social robot Pepper either as an assistant or as a competitor, whose feedback during a subsequently executed quiz was either positive or negative and was described either as generated autonomously by the robot or as pre-programmed by programmers. Results showed that feedback believed to be autonomous leads to more agency, responsibility, and competence being attributed to the robot than feedback believed to be pre-programmed. Moreover, the more agency is ascribed to the robot, the better the evaluation of its sociability and of the interaction with it. However, only the valence of the feedback directly affects the evaluation of the robot's sociability and the interaction with it, which points to the occurrence of a fundamental attribution error.
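
A 2 × 2 × 2 between-subjects design of this kind maps directly onto a three-factor ANOVA. The Python sketch below uses synthetic data; the factor names follow the abstract, but the values, the planted effect, and the dependent measure's scale are invented for illustration, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# Synthetic stand-in for the N = 394 vignette study: three binary factors
# and an attributed-responsibility rating (all values invented).
n = 394
df = pd.DataFrame({
    "role": rng.choice(["assistant", "competitor"], n),
    "valence": rng.choice(["positive", "negative"], n),
    "origin": rng.choice(["autonomous", "pre_programmed"], n),
})
df["responsibility"] = (
    3.5
    + 0.8 * (df["origin"] == "autonomous")  # planted main effect of origin
    + rng.normal(0, 1, n)
)

# Full-factorial model: main effects plus all interactions.
model = smf.ols("responsibility ~ role * valence * origin", data=df).fit()
print(anova_lm(model, typ=2))
```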

