Emotion Recognition for Human-Robot Interaction: Recent Advances and Future Perspectives

2020 ◽  
Vol 7 ◽  
Author(s):  
Matteo Spezialetti ◽  
Giuseppe Placidi ◽  
Silvia Rossi

A fascinating challenge in the field of human–robot interaction is the possibility to endow robots with emotional intelligence in order to make the interaction more intuitive, genuine, and natural. To achieve this, a critical point is the capability of the robot to infer and interpret human emotions. Emotion recognition has been widely explored in the broader fields of human–machine interaction and affective computing. Here, we report recent advances in emotion recognition, with particular regard to the human–robot interaction context. Our aim is to review the state of the art of currently adopted emotional models, interaction modalities, and classification strategies and offer our point of view on future developments and critical issues. We focus on facial expressions, body poses and kinematics, voice, brain activity, and peripheral physiological responses, also providing a list of available datasets containing data from these modalities.
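To ground the modalities and classification strategies the review surveys, the following sketch shows one common pattern: late fusion of per-modality classifiers. The feature sets, label set, and fusion scheme are illustrative assumptions, not any specific system from the review.

```python
# A minimal late-fusion sketch of multimodal emotion classification.
# Feature dimensions, labels, and the fusion rule are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # assumed label set

# Hypothetical per-modality feature matrices (n_samples x n_features),
# e.g. facial action-unit intensities and peripheral physiology statistics.
rng = np.random.default_rng(0)
X_face = rng.normal(size=(200, 17))
X_physio = rng.normal(size=(200, 8))
y = rng.integers(0, len(EMOTIONS), size=200)

# Train one classifier per modality.
clf_face = SVC(probability=True).fit(X_face, y)
clf_physio = RandomForestClassifier().fit(X_physio, y)

# Late fusion: average the per-modality class probabilities
# (evaluated on the training data here, purely for illustration).
proba = (clf_face.predict_proba(X_face) + clf_physio.predict_proba(X_physio)) / 2
pred = proba.argmax(axis=1)
print("predicted:", [EMOTIONS[i] for i in pred[:5]])
```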

2018 ◽  
Vol 161 ◽  
pp. 01001 ◽  
Author(s):  
Karsten Berns ◽  
Zuhair Zafar

Human-machine interaction is a major challenge in the development of complex humanoid robots. In addition to verbal communication, the use of non-verbal cues such as hand, arm, and body gestures or facial expressions can improve the understanding of the robot's intention. Conversely, by perceiving such cues from a human in a typical interaction scenario, the humanoid robot can better adapt its interaction skills. In this work, the perception system of two social robots, ROMAN and ROBIN of the RRLAB of the TU Kaiserslautern, is presented in the context of human-robot interaction.


2019 ◽  
Vol 8 (1) ◽  
pp. 34-44
Author(s):  
Maike Klein

Within both popular media and (some) scientific contexts, affective and ‘emotional’ machines are assumed to already exist. The aim of this paper is to draw attention to some of the key conceptual and theoretical issues raised by this ostensible affectivity. My investigation starts with three robotic encounters: a robot arm, the first (according to media) ‘emotional’ robot, Pepper, and Mako, a robotic cat. To make sense of affectivity in these encounters, I discuss the emotion-theoretical implications for affectivity in human-machine interaction. Which theories have been implemented in the creation of the encountered robots? Aware that no given robot strictly implements a single emotion theory, I focus on two commonly used emotion theories: Russell and Mehrabian’s Three-Factor Theory of Emotion (the computational models derived from that theory are known as PAD models) and Ekman’s Basic Emotion Theory. An alternative way to approach affectivity in artificial systems is the Relational Approach of Damiano et al., which emphasizes human-robot interaction in social robotics. In considering this alternative, I also raise questions about the possibility of affectivity in robot-robot relations.
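Since the paper centers on Russell and Mehrabian's Three-Factor Theory and the PAD models derived from it, a minimal sketch of a PAD representation may help. The prototype coordinates and the nearest-prototype labeling below are illustrative assumptions, not values or methods from the paper.

```python
# A minimal sketch of the PAD (pleasure-arousal-dominance) emotion space
# from Russell and Mehrabian's Three-Factor Theory. Prototype coordinates
# are illustrative assumptions.
from dataclasses import dataclass
import math

@dataclass
class PAD:
    pleasure: float   # -1 (displeasure) .. +1 (pleasure)
    arousal: float    # -1 (calm)        .. +1 (excited)
    dominance: float  # -1 (submissive)  .. +1 (dominant)

# Hypothetical prototype points for a few labeled states.
PROTOTYPES = {
    "joy":     PAD( 0.8,  0.5,  0.4),
    "anger":   PAD(-0.6,  0.6,  0.3),
    "fear":    PAD(-0.6,  0.6, -0.4),
    "sadness": PAD(-0.6, -0.4, -0.3),
}

def nearest_emotion(state: PAD) -> str:
    """Label a PAD coordinate with its closest prototype (Euclidean distance)."""
    def dist(p: PAD) -> float:
        return math.dist(
            (state.pleasure, state.arousal, state.dominance),
            (p.pleasure, p.arousal, p.dominance),
        )
    return min(PROTOTYPES, key=lambda name: dist(PROTOTYPES[name]))

print(nearest_emotion(PAD(0.7, 0.4, 0.2)))  # -> "joy"
```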


Author(s):  
Jutta Weber

Some people regard the personal mobile robot as a candidate for the next digital revolution, as it might become a ubiquitous tool and everyday partner of humans. This new “socio-emotional” robot is supposed to conduct dialogue, develop social competencies, and support users in everyday life. In this chapter, I sketch out the epistemological, ontological, and techno-material groundings of personal service robotics, which is based on new models of human-machine interaction such as caregiver-infant or pet-owner relations. I discuss the conversational paradigm in Human-Robot Interaction (HRI) with its problematic concepts of “pre-given” social mechanisms and uninformed users, as well as its new understanding of sociality as service.


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5976
Author(s):  
Inês Soares ◽  
Marcelo Petry ◽  
António Paulo Moreira

The world is living through the fourth industrial revolution, marked by the increasing intelligence and automation of manufacturing systems. Nevertheless, some tasks are too complex or too expensive to be fully automated; it would be more efficient if machines were able to work with the human, not only sharing the same workspace but also acting as useful collaborators. A possible solution to that problem lies in human–robot interaction systems, and in understanding the applications where they are worth implementing and the challenges they face. This work proposes the development of an industrial prototype of a human–machine interaction system based on Augmented Reality, whose objective is to enable an industrial operator without any programming experience to program a robot. The system itself is divided into two parts: the tracking system, which records the operator’s hand movement, and the translator system, which writes the program to be sent to the robot that will execute the task. To demonstrate the concept, the user drew geometric figures, and the robot was able to replicate the operator’s recorded path.
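As a rough illustration of the translator stage described above, the sketch below turns a recorded hand trajectory into a sequence of linear-move commands. The waypoint format and the MoveL-style command syntax are assumptions, since the abstract does not specify the target robot language.

```python
# A minimal sketch of a "translator" stage: recorded hand trajectory in,
# robot program text out. Command syntax and units are assumptions.
from typing import List, Tuple

Point = Tuple[float, float, float]  # x, y, z in metres

def simplify(path: List[Point], min_step: float = 0.01) -> List[Point]:
    """Drop waypoints closer than min_step to the last kept point."""
    kept = [path[0]]
    for p in path[1:]:
        if sum((a - b) ** 2 for a, b in zip(p, kept[-1])) ** 0.5 >= min_step:
            kept.append(p)
    return kept

def to_program(path: List[Point], speed: float = 0.1) -> str:
    """Emit one linear-move command per surviving waypoint."""
    lines = [f"MoveL x={x:.3f} y={y:.3f} z={z:.3f} v={speed}"
             for x, y, z in simplify(path)]
    return "\n".join(lines)

# Hypothetical trajectory segment from the hand-tracking stage.
recorded = [(0.30, 0.00, 0.20), (0.305, 0.001, 0.20), (0.35, 0.05, 0.20)]
print(to_program(recorded))
```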


2021 ◽  
Vol 12 ◽  
Author(s):  
Alexandra Weidemann ◽  
Nele Rußwinkel

Realizing a successful and collaborative interaction between humans and robots remains a big challenge. Emotional reactions of the user provide crucial information for a successful interaction; these reactions carry key factors for preventing errors and fatal bidirectional misunderstandings. In cases where human–machine interaction does not proceed as expected, negative emotions such as frustration can arise. Therefore, it is important to identify frustration in a human–machine interaction and to investigate its impact on other influencing factors such as dominance, sense of control, and task performance. This paper presents a study that investigates a close cooperative work situation between human and robot and explores the influence frustration has on the interaction. The task for the participants was to hand over colored balls to two different robot systems (an anthropomorphic robot and a robotic arm). The robot systems had to throw the balls into appropriate baskets. The coordination between human and robot was controlled by various gestures and words by means of trial and error. Participants were divided into two groups: a frustration (FRUST) group and a no-frustration (NOFRUST) group. Frustration was induced by the behavior of the robotic systems, which made errors during the ball handover. Subjective and objective methods were used. The sample size was N = 30, and the study was conducted in a between-subjects design. Results show clear differences in perceived frustration between the two condition groups, and the participants exhibited different interaction behaviors. Furthermore, frustration has a negative influence on interaction factors such as dominance and sense of control. The study provides important information concerning the influence of frustration on human–robot interaction (HRI) with respect to the requirements of a successful, natural, and social HRI. The results (qualitative and quantitative) are discussed with regard to how a successful and effortless interaction between human and robot can be realized and which relevant factors, such as the appearance of the robot and the influence of frustration on sense of control, have to be considered.


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6438
Author(s):  
Chiara Filippini ◽  
David Perpetuini ◽  
Daniela Cardone ◽  
Arcangelo Merla

An intriguing challenge in the human–robot interaction field is the prospect of endowing robots with emotional intelligence to make the interaction more genuine, intuitive, and natural. A crucial aspect in achieving this goal is the robot’s capability to infer and interpret human emotions. Thanks to its design and open programming platform, the NAO humanoid robot is one of the most widely used agents for human interaction. As in person-to-person communication, facial expressions are the privileged channel for recognizing the interlocutor’s emotional state. Although NAO is equipped with a facial expression recognition module, specific use cases may require additional features and affective computing capabilities that are not currently available. This study proposes a highly accurate convolutional-neural-network-based facial expression recognition model that further enhances the NAO robot’s awareness of human facial expressions and provides the robot with the capability to detect an interlocutor’s arousal level. Indeed, the model tested during human–robot interactions was 91% and 90% accurate in recognizing happy and sad facial expressions, respectively; 75% accurate in recognizing surprised and scared expressions; and less accurate in recognizing neutral and angry expressions. Finally, the model was successfully integrated into the NAO SDK, thus allowing for high-performing facial expression classification with an inference time of 0.34 ± 0.04 s.
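A minimal sketch of a CNN facial expression classifier of the kind the study describes is shown below. The architecture, 48x48 grey-scale input, and class list are assumptions for illustration and do not reproduce the authors' model or its reported accuracy.

```python
# A minimal CNN facial-expression classifier sketch (PyTorch). The layer
# layout, input size, and class set are assumptions, not the paper's model.
import torch
import torch.nn as nn

CLASSES = ["happy", "sad", "surprised", "scared", "neutral", "angry"]

class ExpressionCNN(nn.Module):
    def __init__(self, n_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Classify one dummy 48x48 grey-scale face crop.
model = ExpressionCNN().eval()
with torch.no_grad():
    logits = model(torch.randn(1, 1, 48, 48))
print("predicted:", CLASSES[logits.argmax(dim=1).item()])
```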


2019 ◽  
Vol 374 (1771) ◽  
pp. 20180033 ◽  
Author(s):  
Birgit Rauchbauer ◽  
Bruno Nazarian ◽  
Morgane Bourhis ◽  
Magalie Ochs ◽  
Laurent Prévot ◽  
...  

We present a novel functional magnetic resonance imaging paradigm for second-person neuroscience. The paradigm compares a human social interaction (human–human interaction, HHI) to an interaction with a conversational robot (human–robot interaction, HRI). The social interaction consists of 1-min blocks of live bidirectional discussion between the scanned participant and the human or robot agent. A final sample of 21 participants is included in the corpus, comprising physiological (blood oxygen level-dependent signal, respiration, and peripheral blood flow) and behavioural (recorded speech from all interlocutors, eye tracking from the scanned participant, and face recording of the human and robot agents) data. Here, we present the first analysis of this corpus, contrasting neural activity between HHI and HRI. We hypothesized that, independently of differences in behaviour between interactions with the human and robot agent, neural markers of mentalizing (temporoparietal junction (TPJ) and medial prefrontal cortex) and social motivation (hypothalamus and amygdala) would only be active in HHI. Results confirmed significantly increased response associated with HHI in the TPJ, hypothalamus and amygdala, but not in the medial prefrontal cortex. Future analysis of this corpus will include fine-grained characterization of verbal and non-verbal behaviours recorded during the interaction to investigate their neural correlates. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
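As a rough sketch of the HHI-versus-HRI contrast such a corpus enables, the snippet below fits a first-level GLM with nilearn and computes an HHI > HRI z-map. The synthetic image, mask, repetition time, and events table are placeholders, as the abstract does not detail the actual analysis pipeline.

```python
# A rough sketch of a block-design HHI-vs-HRI contrast with nilearn.
# The 4D image, mask, TR, and events below are placeholder assumptions.
import numpy as np
import nibabel as nib
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical block design: alternating 1-min discussion blocks.
events = pd.DataFrame({
    "onset":      [0, 60, 120, 180],
    "duration":   [60, 60, 60, 60],
    "trial_type": ["HHI", "HRI", "HHI", "HRI"],
})

# Synthetic stand-in for a preprocessed BOLD run (8x8x8 voxels, 120 volumes).
bold = nib.Nifti1Image(np.random.randn(8, 8, 8, 120).astype("float32"), np.eye(4))
mask = nib.Nifti1Image(np.ones((8, 8, 8), dtype="uint8"), np.eye(4))

model = FirstLevelModel(t_r=2.0, mask_img=mask)  # assumed repetition time
model = model.fit(bold, events=events)
z_map = model.compute_contrast("HHI - HRI")      # voxelwise HHI > HRI z-map
print(z_map.shape)
```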

