Better alone than in bad company

2019 ◽  
Vol 20 (3) ◽  
pp. 487-508 ◽  
Author(s):  
Silvia Rossi ◽  
Martina Ruocco

Using artificial emotions helps make human-robot interaction more personalised, natural, and thus more likeable. For humanoid robots with constrained facial expressiveness, the literature concentrates on expressing emotions through other non-verbal interaction channels. With multi-modal communication, however, it is important to understand the effect of combining such non-verbal cues, whereas the majority of prior work has addressed only the role of single channels in human recognition performance. Here, we present an attempt to analyse the effect of combining different animations expressing the same emotion or different ones. Results show that when an emotion is successfully expressed using a single channel, combining this channel with other animations, which may have lower recognition rates, appears to be less communicative.

Author(s):  
Ruth Stock-Homburg

Knowledge production within the interdisciplinary field of human–robot interaction (HRI) with social robots has accelerated, despite the continued fragmentation of the research domain. Together, these features make it hard to remain at the forefront of research or to assess the collective evidence on specific areas, such as the role of emotions in HRI. This systematic review of state-of-the-art research into humans' recognition of and responses to artificial emotions of social robots during HRI covers the years 2000–2020. In accordance with a stimulus–organism–response framework, the review advances robotic psychology by revealing current knowledge about (1) the generation of artificial robotic emotions (stimulus), (2) human recognition of robotic artificial emotions (organism), and (3) human responses to robotic emotions (response), as well as (4) other contingencies that moderate these emotional processes.


2021 ◽  
Author(s):  
Serena Marchesi ◽  
Nicolas Spatola ◽  
Agnieszka Wykowska

Evidence from cognitive psychology shows that cultural differences influence human social cognition, leading to differential activation of social cognitive mechanisms. A growing body of literature in Human-Robot Interaction investigates how culture shapes cognitive processes such as anthropomorphism or mind attribution when humans face artificial agents, such as robots. The present paper aims to disentangle the relationship between cultural values, anthropomorphism, and intentionality attribution to robots, in the context of intentional stance theory. We administered a battery of tests to 600 participants from various nations worldwide and modeled our data with a path model. Results showed a consistent direct influence of collectivism on anthropomorphism but not on the adoption of the intentional stance. We therefore explored this result further with a mediation analysis, which revealed anthropomorphism as a true mediator between collectivism and the adoption of the intentional stance. We conclude that our findings extend previous literature by showing that the adoption of the intentional stance towards humanoid robots depends on anthropomorphic attribution in the context of cultural values.
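The mediation logic described above (predictor → mediator → outcome) can be sketched with a simple regression-based analysis. The data below is synthetic and the variable names are illustrative stand-ins for the study's constructs; this is a minimal sketch of classic two-equation mediation, not the authors' actual path model or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600  # same sample size as the study; the data itself is synthetic

# Synthetic stand-ins for the constructs (assumed effect sizes for illustration):
# collectivism -> anthropomorphism -> adoption of the intentional stance
collectivism = rng.normal(size=n)
anthropomorphism = 0.5 * collectivism + rng.normal(size=n)
intentional_stance = 0.6 * anthropomorphism + rng.normal(size=n)

def ols_slope(x, y):
    """Slope of y ~ 1 + x via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

# Path a: predictor -> mediator
a = ols_slope(collectivism, anthropomorphism)

# Path b (mediator -> outcome) and c' (direct effect), controlling for each other
X = np.column_stack([np.ones(n), anthropomorphism, collectivism])
coef, *_ = np.linalg.lstsq(X, intentional_stance, rcond=None)
b, c_prime = coef[1], coef[2]

indirect = a * b  # mediated (indirect) effect
total = ols_slope(collectivism, intentional_stance)
print(f"indirect={indirect:.2f}, direct={c_prime:.2f}, total={total:.2f}")
```

Full mediation corresponds to the pattern the paper reports: a substantial indirect effect with a direct effect near zero once the mediator is controlled for.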


2018 ◽  
Vol 161 ◽  
pp. 01001 ◽  
Author(s):  
Karsten Berns ◽  
Zuhair Zafar

Human-machine interaction is a major challenge in the development of complex humanoid robots. In addition to verbal communication, the use of non-verbal cues such as hand, arm, and body gestures or facial expressions can improve the understanding of the robot's intention. Conversely, by perceiving such cues from a human in a typical interaction scenario, the humanoid robot can better adapt its interaction skills. In this work, the perception system of two social robots, ROMAN and ROBIN of the RRLAB of the TU Kaiserslautern, is presented in the context of human-robot interaction.


2009 ◽  
Vol 6 (3-4) ◽  
pp. 369-397 ◽  
Author(s):  
Kerstin Dautenhahn ◽  
Chrystopher L. Nehaniv ◽  
Michael L. Walters ◽  
Ben Robins ◽  
Hatice Kose-Bagci ◽  
...  

This paper provides a comprehensive introduction to the design of the minimally expressive robot KASPAR, which is particularly suitable for human–robot interaction studies. A low-cost design with off-the-shelf components has been used in a novel design inspired by a multi-disciplinary viewpoint, including comics design and Japanese Noh theatre. The design rationale of the robot and its technical features are described in detail. Three research studies that have used KASPAR extensively are then presented. Firstly, we present its application in robot-assisted play and therapy for children with autism. Secondly, we illustrate its use in human–robot interaction studies investigating the role of interaction kinesics and gestures. Lastly, we describe a study in the field of developmental robotics into computational architectures based on interaction histories for robot ontogeny. The three areas differ in how the robot is operated and in its role in social interaction scenarios. Each will be introduced briefly and examples of the results will be presented. Reflections on the specific design features of KASPAR that were important in these studies, and lessons learnt concerning the design of humanoid robots for social interaction, will also be discussed. An assessment of the robot in terms of the utility of the design for human–robot interaction experiments concludes the paper.


2014 ◽  
Vol 11 (01) ◽  
pp. 1450003 ◽  
Author(s):  
Hatice Kose ◽  
Neziha Akalin ◽  
Pinar Uluer

This paper investigates the role of interaction and communication kinesics in human–robot interaction. This study is part of a novel research project on sign language (SL) tutoring through interaction games with humanoid robots. The main goal is to motivate children with communication problems to understand and imitate the signs performed by the robot using basic upper-torso gestures and sound. We present an empirical, exploratory study investigating the effect of basic non-verbal gestures, consisting of hand movements and body and face gestures, expressed by a humanoid robot; having comprehended the word, participants give relevant feedback in SL. In this way the participant is both a passive observer and an active imitator throughout the learning process in different phases of the game. A five-fingered R3 robot platform and a three-fingered Nao H25 robot are employed in the games. Vision-, sound-, touch-, and motion-based cues are used for multimodal communication among the robot, child, and therapist/parent within the study. This paper presents the preliminary results of the proposed game, tested with adult participants. The aim is to evaluate participants' ability to learn SL from a robot and to compare different robot platforms within this setup.


2019 ◽  
Author(s):  
Kyveli Kompatsiari ◽  
Francesca Ciardo ◽  
Vadim Tikhanoff ◽  
Giorgio Metta ◽  
Agnieszka Wykowska

This paper reports a study in which we examined how users evaluated a humanoid robot depending on whether eye contact was established. In two experiments, we manipulated the robot's gaze: it either looked at the participants' eyes (mutual gaze) or toward a socially neutral position (neutral gaze). Across the two experiments, we altered how predictive the robot's gaze direction was of a subsequent target stimulus (in Exp. 1 the direction was non-predictive; in Exp. 2 it was counter-predictive). Subjective reports showed that participants were sensitive to eye contact. Moreover, participants were more engaged and ascribed higher intentionality to the robot in the mutual gaze condition relative to the neutral condition, independently of the predictiveness of the gaze cue. Our results suggest that embodied humanoid robots can establish eye contact, which in turn has a positive impact on the perceived socialness of the robot and on the quality of human-robot interaction (HRI). Establishing mutual gaze should therefore be considered in the design of robot behaviors for social HRI.


2020 ◽  
Vol 295 (28) ◽  
pp. 9421-9432
Author(s):  
Hannadige Sasimali Madusanka Soysa ◽  
Anuwat Aunkham ◽  
Albert Schulte ◽  
Wipa Suginta

Vibrio cholerae is a Gram-negative, facultatively anaerobic bacterial species that causes serious disease and can grow on various carbon sources, including chitin polysaccharides. In saltwater, its attachment to chitin surfaces not only serves as the initial step of nutrient recruitment but is also a crucial mechanism underlying cholera epidemics. In this study, we report the first characterization of a chitooligosaccharide-specific chitoporin, VcChiP, from the cell envelope of the V. cholerae type strain O1. We modeled the structure of VcChiP, revealing a trimeric cylinder that forms single channels in phospholipid bilayers. The membrane-reconstituted VcChiP channel was highly dynamic and voltage-induced. Substate openings O1′, O2′, and O3′, between the fully open states O1, O2, and O3, were polarity selective, with nonohmic conductance profiles. Results of liposome-swelling assays suggested that VcChiP can transport monosaccharides, as well as chitooligosaccharides, but not other oligosaccharides. Of note, an outer-membrane porin (omp)-deficient strain of Escherichia coli expressing heterologous VcChiP could grow on M9 minimal medium supplemented with small chitooligosaccharides. These results support a crucial role of chitoporin in the adaptive survival of bacteria on chitinous nutrients. Our findings also suggest a promising means of vaccine development based on surface-exposed outer-membrane proteins, and the design of novel anticholera agents based on chitooligosaccharide-mimicking analogs.


2020 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Jairo Pérez-Osorio ◽  
Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI'20 conference workshop "Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction" (Wykowska, Perez-Osorio & Kopp, 2020). Unfortunately, due to the rapid unfolding of the novel coronavirus pandemic at the beginning of the year, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and specifically the presence and quality of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also accentuate the importance of a multidisciplinary approach to advancing the understanding of the factors and consequences of social interactions with artificial agents.


Author(s):  
Giorgio Metta

This chapter outlines a number of research lines that, starting from the observation of nature, attempt to mimic human behavior in humanoid robots. Humanoid robotics is one of the most exciting proving grounds for the development of biologically inspired hardware and software—machines that try to recreate billions of years of evolution with some of the abilities and characteristics of living beings. Humanoids could be especially useful for their ability to “live” in human-populated environments, occupying the same physical space as people and using tools that have been designed for people. Natural human–robot interaction is also an important facet of humanoid research. Finally, learning and adapting from experience, the hallmark of human intelligence, may require some approximation to the human body in order to attain similar capacities to humans. This chapter focuses particularly on compliant actuation, soft robotics, biomimetic robot vision, robot touch, and brain-inspired motor control in the context of the iCub humanoid robot.


2021 ◽  
Vol 13 (2) ◽  
pp. 32
Author(s):  
Diego Reforgiato Recupero

In this paper we present a mix of technologies from the Deep Learning, Sentiment Analysis, and Semantic Web domains, tailored for e-learning, which we employ in four use cases validated in the field of Human-Robot Interaction. The approach has been designed using Zora, a humanoid robot that can be easily extended with new software behaviors. The goal is to make the robot able to engage users through natural language for different tasks. Using our software, the robot can (i) talk to the user and understand their sentiments through a dedicated Semantic Sentiment Analysis engine; (ii) answer open-dialog natural language utterances by means of a Generative Conversational Agent; (iii) perform action commands leveraging a defined Robot Action ontology and open-dialog natural language utterances; and (iv) detect which objects the user is handing over by using convolutional neural networks trained on a large collection of annotated objects. Each module can be extended with more data and information, and the overall architectural design is general, flexible, and scalable, so it can be expanded with other components, thus enriching the interaction with the human. Different applications within the e-learning domain are foreseen: the robot can either act as a trainer and autonomously perform physical actions (e.g., in rehabilitation centers), or it can interact with users (performing simple tests or even identifying emotions) according to the program developed by the teachers.
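The modular design described above, where an utterance is routed either to an action ontology or to dialog and sentiment engines, can be sketched as a simple dispatcher. All names below (the toy sentiment lexicon, `ACTION_ONTOLOGY`, `dispatch`) are illustrative stand-ins, not the authors' actual code or Zora's real API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RobotReply:
    text: str
    action: Optional[str] = None

# Toy placeholder for the Semantic Sentiment Analysis engine.
POSITIVE_WORDS = {"great", "good", "love", "thanks"}

def sentiment(utterance: str) -> str:
    """Classify an utterance as positive or neutral via a tiny lexicon."""
    return "positive" if set(utterance.lower().split()) & POSITIVE_WORDS else "neutral"

def chat_reply(utterance: str) -> str:
    """Toy placeholder for the Generative Conversational Agent."""
    return f"Tell me more about '{utterance}'."

# Minimal stand-in for the Robot Action ontology: keyword -> robot behavior.
ACTION_ONTOLOGY = {"wave": "WaveHand", "sit": "SitDown"}

def dispatch(utterance: str) -> RobotReply:
    """Route an utterance: action commands go to the ontology lookup,
    everything else to the dialog engine, annotated with sentiment."""
    lowered = utterance.lower()
    for keyword, action in ACTION_ONTOLOGY.items():
        if keyword in lowered:
            return RobotReply(text=f"Executing {action}.", action=action)
    mood = sentiment(utterance)
    return RobotReply(text=f"[{mood}] {chat_reply(utterance)}")

print(dispatch("Please wave at the class"))
print(dispatch("I love this lesson"))
```

The point of this shape is the extensibility the abstract claims: adding a capability (e.g., the CNN object-detection module) means adding one more routing branch without touching the existing engines.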

