The Human Takes It All: Humanlike Synthesized Voices Are Perceived as Less Eerie and More Likable. Evidence From a Subjective Ratings Study

2020 ◽  
Vol 14 ◽  
Author(s):  
Katharina Kühne ◽  
Martin H. Fischer ◽  
Yuefang Zhou

Background: The increasing involvement of social robots in human lives raises the question of how humans perceive social robots. Little is known about human perception of synthesized voices. Aim: To investigate which synthesized voice parameters predict the speaker's eeriness and voice likability; to determine whether individual listener characteristics (e.g., personality, attitude toward robots, age) influence synthesized voice evaluations; and to explore which paralinguistic features subjectively distinguish humans from robots/artificial agents. Methods: 95 adults (62 females) listened to randomly presented audio clips of three categories: synthesized (Watson, IBM), humanoid (robot Sophia, Hanson Robotics), and human voices (five clips per category). Voices were rated on intelligibility, prosody, trustworthiness, confidence, enthusiasm, pleasantness, human-likeness, likability, and naturalness. Speakers were rated on appeal, credibility, human-likeness, and eeriness. Participants' personality traits, attitudes toward robots, and demographics were obtained. Results: The human voice and human speaker characteristics received reliably higher scores on all dimensions except eeriness. Synthesized voice ratings were positively related to participants' agreeableness and neuroticism. Females rated synthesized voices more positively on most dimensions. Surprisingly, interest in social robots and attitudes toward robots played almost no role in voice evaluation. Contrary to the expectations of an uncanny valley, the higher the ratings of human-likeness for both the voice and the speaker characteristics, the less eerie they seemed to the participants. Moreover, the more humanlike the speaker's voice, the more it was liked by the participants; this latter finding applied to only one of the synthesized voices. Finally, pleasantness and trustworthiness of the synthesized voice predicted the likability of the speaker's voice.
Qualitative content analysis identified intonation, sound, emotion, and imageability/embodiment as diagnostic features. Discussion: Humans clearly prefer human voices, but manipulating diagnostic speech features might increase acceptance of synthesized voices and thereby support human-robot interaction. There is limited evidence that human-likeness of a voice is negatively linked to the perceived eeriness of the speaker.

Electronics ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 267
Author(s):  
Fernando Alonso Martin ◽  
María Malfaz ◽  
Álvaro Castro-González ◽  
José Carlos Castillo ◽  
Miguel Ángel Salichs

The success of social robots is directly linked to their ability to interact with people. Humans possess both verbal and non-verbal communication skills, and both are therefore essential for social robots to achieve natural human–robot interaction. This work focuses on the former, since the majority of social robots implement an interaction system endowed with verbal capacities. To implement such a system, we must equip social robots with an artificial voice. In robotics, a Text to Speech (TTS) system is the most common speech synthesis technique. The performance of a speech synthesizer is mainly evaluated by its similarity to the human voice in terms of intelligibility and expressiveness. In this paper, we present a comparative study of eight off-the-shelf TTS systems used in social robots. In this study, 125 participants evaluated the performance of the following TTS systems: Google, Microsoft, Ivona, Loquendo, Espeak, Pico, AT&T, and Nuance. The evaluation was performed after watching videos in which a social robot communicates verbally using one TTS system. The participants completed a questionnaire to rate each TTS system on four features: intelligibility, expressiveness, artificiality, and suitability. Four research questions were posed to determine whether it is possible to rank the TTS systems on each evaluated feature or whether, on the contrary, there are no significant differences between them. Our study shows that participants found differences between the evaluated TTS systems in terms of intelligibility, expressiveness, and artificiality. The experiments also indicated a relationship between the physical appearance of the robots (embodiment) and the suitability of TTS systems.


AI & Society ◽  
2021 ◽  
Author(s):  
Dafna Burema

Abstract This paper argues that there is a need to critically assess bias in the representations of older adults in the field of Human–Robot Interaction. This need stems from the recognition that technology development is a socially constructed process that has the potential to reinforce problematic understandings of older adults. Based on a qualitative content analysis of 96 academic publications, this paper indicates that older adults are represented as: frail by default, independent by effort; silent and technologically illiterate; burdensome; and problematic for society. Within these documents, few counternarratives are present that depart from such essentialist representations. In these texts, the goal of social robots in elder care is to "enable" older adults to "better" themselves. The older body is seen as "fixable" with social robots, reinforcing an ageist and neoliberal narrative: older adults are reduced to potential care receivers in ways that shift care responsibilities away from the welfare state onto the individual.


Author(s):  
Vignesh Prasad ◽  
Ruth Stock-Homburg ◽  
Jan Peters

Abstract For some years now, the use of social, anthropomorphic robots in various situations has been on the rise. These are robots developed to interact with humans and equipped with corresponding extremities. They already support human users in various industries, such as retail, gastronomy, hotels, education, and healthcare. During such Human-Robot Interaction (HRI) scenarios, physical touch plays a central role in the various applications of social robots, as interactive non-verbal behaviour is a key factor in making the interaction more natural. Shaking hands is a simple, natural interaction commonly used in many social contexts and is seen as a symbol of greeting, farewell, and congratulations. In this paper, we take a look at the existing state of Human-Robot Handshaking research, categorise the works based on their focus areas, and draw out the major findings of these areas while analysing their pitfalls. We mainly see that some form of synchronisation exists during the different phases of the interaction. In addition, we find that factors such as gaze, voice, and facial expressions can affect the perception of a robotic handshake, and that internal factors such as personality and mood can affect the way in which handshaking behaviours are executed by humans. Based on these findings and insights, we discuss possible ways forward for research on such physically interactive behaviours.


Author(s):  
Ruth Stock-Homburg

Abstract Knowledge production within the interdisciplinary field of human–robot interaction (HRI) with social robots has accelerated, despite the continued fragmentation of the research domain. Together, these features make it hard to remain at the forefront of research or assess the collective evidence pertaining to specific areas, such as the role of emotions in HRI. This systematic review of state-of-the-art research into humans’ recognition and responses to artificial emotions of social robots during HRI encompasses the years 2000–2020. In accordance with a stimulus–organism–response framework, the review advances robotic psychology by revealing current knowledge about (1) the generation of artificial robotic emotions (stimulus), (2) human recognition of robotic artificial emotions (organism), and (3) human responses to robotic emotions (response), as well as (4) other contingencies that affect emotions as moderators.


2018 ◽  
Vol 9 (1) ◽  
pp. 168-182 ◽  
Author(s):  
Mina Marmpena ◽  
Angelica Lim ◽  
Torbjørn S. Dahl

Abstract Human-robot interaction in social robotics applications could be greatly enhanced by robotic behaviors that incorporate emotional body language. Using as our starting point a set of pre-designed, emotion-conveying animations created by professional animators for the Pepper robot, we seek to explore how humans perceive their affect content, and to increase their usability by annotating them with reliable labels of valence and arousal in a continuous interval space. We conducted an experiment with 20 participants who were presented with the animations and rated them in the two-dimensional affect space. An inter-rater reliability analysis was applied to support the aggregation of the ratings for deriving the final labels. The set of emotional body language animations with the labels of valence and arousal is available and can potentially be useful to other researchers as a ground truth for behavioral experiments on robotic expression of emotion, or for the automatic selection of robotic emotional behaviors with respect to valence and arousal. To further utilize the data we collected, we analyzed them with an exploratory approach and present some interesting trends regarding the human perception of Pepper’s emotional body language that might be worth further investigation.


Author(s):  
Matthias Scheutz ◽  
Paul Schermerhorn

Effective decision-making under real-world conditions can be very difficult as purely rational methods of decision-making are often not feasible or applicable. Psychologists have long hypothesized that humans are able to cope with time and resource limitations by employing affective evaluations rather than rational ones. In this chapter, we present the distributed integrated affect cognition and reflection architecture DIARC for social robots intended for natural human-robot interaction and demonstrate the utility of its human-inspired affect mechanisms for the selection of tasks and goals. Specifically, we show that DIARC incorporates affect mechanisms throughout the architecture, which are based on “evaluation signals” generated in each architectural component to obtain quick and efficient estimates of the state of the component, and illustrate the operation and utility of these mechanisms with examples from human-robot interaction experiments.


2020 ◽  
Vol 32 (1) ◽  
pp. 7-7
Author(s):  
Masahiro Shiomi ◽  
Hidenobu Sumioka ◽  
Hiroshi Ishiguro

As social robot research advances, the interaction distance between people and robots is decreasing. Indeed, although we were once required to maintain a certain physical distance from traditional industrial robots for safety, we can now interact with social robots at distances close enough to touch them. The physical existence of social robots will be essential to realizing natural and acceptable interactions with people in daily environments. Because social robots function in our daily environments, we must design scenarios where robots interact closely with humans, considering various viewpoints. Interactions that involve touching robots strongly influence changes in a person's behavior; therefore, robotics researchers and developers need to design such scenarios carefully. Based on these considerations, this special issue on “Human-Robot Interaction in Close Distance” focuses on close human-robot interactions. It includes a review paper and 11 other interesting papers covering various topics such as social touch interactions, non-verbal behavior design for touch interactions, child-robot interactions including physical contact, conversations with physical interactions, motion copying systems, and mobile human-robot interactions. We thank all the authors and reviewers of the papers and hope this special issue will help readers better understand human-robot interaction at close distance.


Author(s):  
Gianpaolo Gulletta ◽  
Wolfram Erlhagen ◽  
Estela Bicho

In the last decade, the objectives outlined by the needs of personal robotics have led to the rise of new biologically-inspired techniques for arm motion planning. This paper presents a literature review of the most recent research on the generation of human-like arm movements in humanoid and manipulation robotic systems. Search methods and inclusion criteria are described. The studies are analysed taking into consideration the sources of publication, the experimental settings, the type of movements, the technical approach, and the human motor principles that have been used to inspire and assess human-likeness. Results show a strong focus on the generation of single-arm reaching movements and on biomimetic-based methods. However, little attention has been paid to manipulation, obstacle-avoidance mechanisms, and dual-arm motion generation. For these reasons, human-like arm motion generation may not fully respect key features of human behaviour and neurology and may remain restricted to specific tasks of human-robot interaction. Limitations and challenges are discussed to provide meaningful directions for future investigations.


Author(s):  
Aike C. Horstmann ◽  
Nicole C. Krämer

Abstract Since social robots are rapidly advancing and thus increasingly entering people’s everyday environments, interactions with robots are also progressing. For these interactions to be designed and executed successfully, this study draws on insights from attribution theory to explore the circumstances under which people attribute responsibility for a robot’s actions to the robot. In an experimental online study with a 2 × 2 × 2 between-subjects design (N = 394), participants read a vignette describing the social robot Pepper either as an assistant or as a competitor; the robot’s feedback during a subsequently executed quiz was either positive or negative, and was described as either generated autonomously by the robot or pre-programmed by programmers. Results showed that feedback believed to be autonomous leads to more agency, responsibility, and competence being attributed to the robot than feedback believed to be pre-programmed. Moreover, the more agency is ascribed to the robot, the better the evaluation of its sociability and of the interaction with it. However, only the valence of the feedback directly affects the evaluation of the robot’s sociability and the interaction with it, which points to the occurrence of a fundamental attribution error.

