Facial Expression Recognition
Recently Published Documents


TOTAL DOCUMENTS: 3365 (five years: 1184)
H-INDEX: 74 (five years: 16)

2022, Vol. 8
Author(s): Niyati Rawal, Dorothea Koert, Cigdem Turan, Kristian Kersting, Jan Peters, ...

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better to different robot types and a wider range of expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expression generation on humanoid robots. ExGenNet couples a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robot's joint configurations are optimized for a given expression by backpropagating the loss between the predicted and intended expression through the classifier and the generator. To improve transfer between human training images and images of different robots, we use extracted features in both the classifier and the generator network. Unlike most studies on facial expression generation, ExGenNet can produce multiple configurations for each facial expression and can be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet successfully generates sets of joint configurations for predefined facial expressions on both robots. Its ability to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects accurately recognized most of the generated expressions on both robots.
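The abstract's core idea, optimizing joint configurations by backpropagating a classification loss through a frozen generator and classifier, can be sketched as follows. This is a minimal illustration only: the real ExGenNet uses deep networks, whereas here both networks are replaced by random linear maps (`G`, `C`) so the gradients can be written by hand; all dimensions and the "happy" label index are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_joints, n_pix, n_expr = 6, 16, 4

# Frozen "networks" -- linear stand-ins for illustration only:
G = rng.normal(size=(n_pix, n_joints)) / np.sqrt(n_joints)  # generator: joints -> image features
C = rng.normal(size=(n_expr, n_pix)) / np.sqrt(n_pix)       # classifier: features -> expression logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def optimize_joints(target, steps=1000, lr=0.2):
    """Find a joint configuration whose generated image the classifier
    labels as `target`, by gradient descent on the cross-entropy loss."""
    q = 0.1 * rng.normal(size=n_joints)      # initial joint configuration
    y = np.eye(n_expr)[target]               # one-hot intended expression
    for _ in range(steps):
        p = softmax(C @ (G @ q))             # predicted expression distribution
        grad_q = G.T @ (C.T @ (p - y))       # loss backpropagated through classifier, then generator
        q -= lr * grad_q                     # update the joints, not the network weights
    return q

q_happy = optimize_joints(target=2)          # e.g. index 2 = "happy" (label indices are hypothetical)
probs = softmax(C @ (G @ q_happy))
```

Note that only the joint configuration `q` is updated; both networks stay frozen, which is what lets the same machinery produce multiple distinct configurations per expression from different initializations.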


Author(s): Huilin Ge, Zhiyu Zhu, Yuewei Dai, Biao Wang, Xuedong Wu

Facial expression plays an important role in communicating emotion. This paper proposes a robust method for recognizing facial expressions using a combination of appearance features. Traditional appearance-based methods divide the face image into a regular grid of patches for facial expression recognition. In contrast, we compute appearance features in specific regions by extracting facial components such as the eyes, nose, mouth, and forehead. The proposed approach has five stages: face detection and region-of-interest extraction; feature extraction; pattern analysis using a local descriptor; fusion of the appearance features; and classification using a Multiclass Support Vector Machine (MSVM). The proposed method is compared with earlier holistic representations for facial expression recognition and is found to outperform state-of-the-art methods.
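The region-based pipeline above can be sketched in a few lines. The abstract does not name its local descriptor, so this sketch assumes a basic local binary pattern (LBP), a common choice for appearance features; the region coordinates and image size are invented, and face detection is skipped by starting from an already-cropped face.

```python
import numpy as np

def lbp_histogram(patch):
    """256-bin histogram of 8-neighbour local binary pattern codes."""
    c = patch[1:-1, 1:-1]                    # interior pixels (each has all 8 neighbours)
    codes = np.zeros(c.shape, dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = patch[1 + dy:patch.shape[0] - 1 + dy, 1 + dx:patch.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(int) << bit    # set bit if neighbour >= centre
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()                 # normalized per-region descriptor

def fused_features(face, regions):
    """Fusion step: concatenate per-region appearance descriptors."""
    return np.concatenate([lbp_histogram(face[y0:y1, x0:x1])
                           for (y0, y1, x0, x1) in regions])

rng = np.random.default_rng(1)
face = rng.random((64, 64))                  # stand-in for a detected, aligned face crop
regions = [(10, 26, 8, 56),                  # eyes   (coordinates are hypothetical)
           (26, 46, 22, 42),                 # nose
           (44, 60, 14, 50)]                 # mouth
features = fused_features(face, regions)     # the fused vector fed to the MSVM
```

In practice the fused vector would then be passed to a trained multiclass SVM (e.g. `sklearn.svm.SVC`) to predict the expression label.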

