Robotic facial expression of anger in collaborative human–robot interaction

2019 ◽  
Vol 16 (1) ◽  
pp. 172988141881797 ◽  
Author(s):  
Mauricio E Reyes ◽  
Ivan V Meza ◽  
Luis A Pineda

The facial expression of anger can be useful for directing interaction between agents, especially in unclear and cluttered environments. Seeing an angry face triggers a process of analysis and diagnosis in the observer, which can affect the observer's behavior toward the one expressing the emotion. To study this effect in human–robot interaction, an expressive robotic face was designed and constructed, and its influence on human action and attention was analyzed in two collaborative tasks. Results of a digital survey, experimental interactions, and a questionnaire indicated that anger is the best-recognized universal facial expression, has a regulatory effect on human action, and draws human attention when an unclear condition arises during the task. An additional finding was that the prolonged presence of an angry face weakens its impact compared with positive expressions.

Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6438
Author(s):  
Chiara Filippini ◽  
David Perpetuini ◽  
Daniela Cardone ◽  
Arcangelo Merla

An intriguing challenge in the human–robot interaction field is the prospect of endowing robots with emotional intelligence to make the interaction more genuine, intuitive, and natural. A crucial aspect in achieving this goal is the robot’s capability to infer and interpret human emotions. Thanks to its design and open programming platform, the NAO humanoid robot is one of the most widely used agents for human interaction. As in person-to-person communication, facial expressions are the privileged channel for recognizing the interlocutor’s emotional expressions. Although NAO is equipped with a facial expression recognition module, specific use cases may require additional features and affective computing capabilities that are not currently available. This study proposes a highly accurate convolutional-neural-network-based facial expression recognition model that further enhances the NAO robot’s awareness of human facial expressions and provides the robot with the capability to detect an interlocutor’s arousal level. In tests during human–robot interactions, the model was 91% and 90% accurate in recognizing happy and sad facial expressions, respectively; 75% accurate in recognizing surprised and scared expressions; and less accurate in recognizing neutral and angry expressions. Finally, the model was successfully integrated into the NAO SDK, allowing for high-performing facial expression classification with an inference time of 0.34 ± 0.04 s.
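As a rough illustration of the kind of component described above, the sketch below shows a minimal CNN expression classifier in PyTorch. The architecture, the 48×48 grayscale input, and the class list are illustrative assumptions, not the authors' model or the NAO SDK interface.

```python
# Minimal sketch of a CNN facial-expression classifier of the kind the
# paper integrates into the NAO SDK. Architecture, input size, and the
# expression list are assumptions for illustration only.
import torch
import torch.nn as nn

EXPRESSIONS = ["happy", "sad", "surprised", "scared", "neutral", "angry"]

class FerCnn(nn.Module):
    def __init__(self, n_classes=len(EXPRESSIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, 48, 48) cropped grayscale faces
        return self.classifier(self.features(x))

model = FerCnn().eval()
with torch.no_grad():
    face = torch.rand(1, 1, 48, 48)          # stand-in for a detected face crop
    probs = torch.softmax(model(face), dim=1)
    print(EXPRESSIONS[int(probs.argmax())])
```

In a deployment like the one the paper describes, a trained model of this shape would sit behind the robot's camera pipeline, classifying each detected face crop within the reported sub-second budget.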


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 9848-9859 ◽  
Author(s):  
Jia Deng ◽  
Gaoyang Pang ◽  
Zhiyu Zhang ◽  
Zhibo Pang ◽  
Huayong Yang ◽  
...  

2020 ◽  
Vol 10 (17) ◽  
pp. 5757
Author(s):  
Elena Laudante ◽  
Alessandro Greco ◽  
Mario Caterino ◽  
Marcello Fera

In current industrial systems, automation is a key factor in manufacturing performance, affecting working times, accuracy of operations, and quality. In particular, introducing a robotic system into the working area should yield improvements such as reduced risks for human operators, better quality, and faster production processes. Even so, human action is still necessary for part of the subtasks, as in composite assembly processes. This study presents a case study on reorganizing the work carried out in a workstation where a composite fuselage panel is assembled, in order to demonstrate through simulation that some of the advantages listed above can also be achieved in the aerospace industry. The entire working process for composite fuselage panel assembly is simulated and analyzed to verify the applicability and effectiveness of human–robot interaction (HRI), focusing on working times and ergonomics while respecting the constraints imposed by the ISO 10218 and ISO TS 15066 standards. Results show the effectiveness of HRI both in assembly performance, by reducing working times, and in ergonomics, for which the simulation yields a very low risk index.
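To make the working-time argument concrete, here is a toy comparison of a purely manual task sequence against a collaborative layout in which robot and human subtasks run in parallel where the risk assessment allows. The task names and durations are invented for illustration and are not taken from the study's simulation.

```python
# Toy illustration of the time saving a collaborative cell can yield:
# overlapping robot and human subtasks means each pair costs only the
# slower of the two. All task names and durations are hypothetical.
manual = {"drilling": 40.0, "sealing": 25.0, "riveting": 35.0, "inspection": 15.0}

# Assumed HRI layout: the robot drills and rivets while the human seals
# and inspects; each (robot, human) pair runs concurrently.
hri_pairs = [("drilling", "sealing"), ("riveting", "inspection")]

t_manual = sum(manual.values())
t_hri = sum(max(manual[r], manual[h]) for r, h in hri_pairs)
print(f"manual: {t_manual:.0f} s, collaborative: {t_hri:.0f} s "
      f"({100 * (1 - t_hri / t_manual):.0f}% shorter)")
```

With these made-up numbers the collaborative layout cuts cycle time from 115 s to 75 s; the study's simulation performs the same kind of comparison on the real fuselage-panel process, with the added ergonomic risk assessment.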


Author(s):  
Zhen-Tao Liu ◽  
Si-Han Li ◽  
Wei-Hua Cao ◽  
Dan-Yun Li ◽  
Man Hao ◽  
...  

The efficiency of facial expression recognition (FER) is important for human-robot interaction. Detection of the facial region, extraction of discriminative facial expression features, and identification of facial expression categories all affect recognition accuracy and time efficiency. An FER framework is proposed in which 2D Gabor filters and local binary patterns (LBP) are combined to extract discriminative features from salient facial expression patches, and an extreme learning machine (ELM) is adopted to identify facial expression categories. The combination of 2D Gabor and LBP can not only describe multiscale and multidirectional textural features, but also capture small local details. FER experiments with ELM and a support vector machine (SVM) are performed on the Japanese Female Facial Expression database and the extended Cohn-Kanade database; both classifiers achieve an accuracy of more than 85%, and ELM is computationally more efficient than SVM. The proposed framework has been used in a multimodal emotional communication based human-robot interaction system, in which FER within 2 seconds enables real-time human-robot interaction.
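The sketch below outlines the named pipeline: Gabor magnitude responses plus an LBP histogram feed a single-hidden-layer ELM. The filter-bank parameters, the per-filter pooling, and the minimal ELM implementation are my assumptions; the paper's patch selection and exact settings are not reproduced here.

```python
# Sketch of a Gabor + LBP feature extractor with an extreme learning
# machine (ELM) classifier. Filter-bank parameters and the simple
# mean-magnitude pooling are illustrative assumptions.
import numpy as np
from skimage.filters import gabor
from skimage.feature import local_binary_pattern

def gabor_lbp_features(img, freqs=(0.1, 0.2),
                       thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    feats = []
    for f in freqs:
        for t in thetas:
            real, imag = gabor(img, frequency=f, theta=t)
            feats.append(np.hypot(real, imag).mean())  # multiscale, multidirection texture
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)  # small local details
    return np.concatenate([feats, hist])

class ELM:
    """Single-hidden-layer ELM: random input weights, analytic output weights."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ y_onehot  # least-squares output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)
```

Because the ELM's output weights are solved in closed form rather than by iterative training, it is cheap to fit and fast at inference, which is consistent with the efficiency advantage over SVM reported above.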


Author(s):  
Mauro Dragone ◽  
Joe Saunders ◽  
Kerstin Dautenhahn

Enabling robots to operate seamlessly as part of smart spaces is an important and extended challenge for robotics R&D and a key enabler for a range of advanced robotic applications, such as Ambient Assisted Living (AAL) and home automation. The integration of these technologies is currently being pursued from two largely distinct viewpoints. On the one hand, people-centred initiatives focus on improving user acceptance by tackling human-robot interaction (HRI) issues, often adopting a social robotics approach, and by giving the designer and, to a limited degree, the final user(s) control over personalization and product customisation features. On the other hand, technologically driven initiatives are building impersonal but intelligent systems that can pro-actively and autonomously adapt their operations to fit changing requirements and evolving users' needs, but which largely ignore and do not leverage human-robot interaction and may thus lead to poor user experience and acceptance. In order to inform the development of a new generation of smart robotic spaces, this paper analyses and compares these research strands with a view to proposing integrated solutions with both advanced HRI and online adaptation capabilities.


2022 ◽  
Vol 8 ◽  
Author(s):  
Niyati Rawal ◽  
Dorothea Koert ◽  
Cigdem Turan ◽  
Kristian Kersting ◽  
Jan Peters ◽  
...  

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers potential advantages in scaling to different robot types and various expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNet couples a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots' joint configurations are optimized for various expressions by backpropagating the loss between the predicted and intended expression through the classification network and the generator network. To improve transfer between human training images and images of different robots, we propose using extracted features in the classifier as well as in the generator network. Unlike most studies on facial expression generation, ExGenNet can produce multiple configurations for each facial expression and be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability of ExGenNet to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects could accurately recognize most of the generated facial expressions on both robots.
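The core optimization idea, backpropagating an expression loss through frozen classifier and generator networks down to the joint configuration itself, can be sketched in a few lines of PyTorch. Both networks below are stand-in modules, and the joint count, expression count, and hyperparameters are assumptions; the paper's actual architectures are not specified here.

```python
# Minimal sketch of an ExGenNet-style optimization loop: the joint
# configuration is the only trainable quantity; generator and classifier
# stay frozen. Both networks are placeholder stand-ins.
import torch
import torch.nn as nn

N_JOINTS, N_EXPR = 12, 6  # hypothetical robot with 12 face joints, 6 expressions

generator = nn.Sequential(nn.Linear(N_JOINTS, 64), nn.ReLU(),
                          nn.Linear(64, 32 * 32))    # joints -> simplified face image
classifier = nn.Sequential(nn.Linear(32 * 32, 64), nn.ReLU(),
                           nn.Linear(64, N_EXPR))    # image -> expression logits
for p in list(generator.parameters()) + list(classifier.parameters()):
    p.requires_grad_(False)                          # both networks stay frozen

def optimize_joints(target_expr, steps=200, lr=0.05):
    joints = torch.zeros(1, N_JOINTS, requires_grad=True)
    opt = torch.optim.Adam([joints], lr=lr)
    target = torch.tensor([target_expr])
    for _ in range(steps):
        opt.zero_grad()
        logits = classifier(generator(torch.tanh(joints)))  # tanh keeps joints bounded
        loss = nn.functional.cross_entropy(logits, target)
        loss.backward()                              # gradients flow only to the joints
        opt.step()
    return torch.tanh(joints).detach()

happy_config = optimize_joints(target_expr=0)        # e.g. index 0 = "happy"
```

Because the loss landscape can have multiple minima, restarting this loop from different initial joint values naturally yields the multiple configurations per expression that the abstract mentions.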

