Spatiotemporal Features and Local Relationship Learning for Facial Action Unit Intensity Regression

Author(s):  
Chao Wei ◽  
Ke Lu ◽  
Wei Gan ◽  
Jian Xue
2009 ◽  
Vol 35 (2) ◽  
pp. 198-201 ◽  
Author(s):  
Lei WANG ◽  
Bei-Ji ZOU ◽  
Xiao-Ning PENG

Author(s):  
Dakai Ren ◽  
Xiangmin Wen ◽  
Jiazhong Chen ◽  
Yu Han ◽  
Shiqi Zhang

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4222
Author(s):  
Shushi Namba ◽  
Wataru Sato ◽  
Masaki Osumi ◽  
Koh Shimokawa

In the field of affective computing, achieving accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of the systems now available against a dynamic facial database remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, AFARtoolbox) that detect facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems detected the presence of AUs in the dynamic facial database at above-chance levels. Moreover, OpenFace and AFAR yielded higher area under the receiver operating characteristic curve (AUC) values than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and the static mode was superior to the dynamic mode for analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.
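
The per-AU comparison described above reduces to computing ROC-AUC from frame-level AU presence labels and each system's continuous detection scores. A minimal sketch of that evaluation step follows; the AU subset, label arrays, and scores here are synthetic stand-ins, not the study's data or pipeline.

```python
# Per-AU ROC-AUC comparison across detectors (illustrative sketch;
# labels, scores, and the AU list are synthetic, not the study's data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
aus = ["AU06", "AU12", "AU14"]  # example subset of FACS action units

# Frame-level ground-truth presence (0/1) per AU.
y_true = {au: rng.integers(0, 2, 500) for au in aus}

# Continuous detection scores from each system (placeholders).
systems = {
    "FaceReader": {au: rng.random(500) for au in aus},
    "OpenFace":   {au: rng.random(500) for au in aus},
}

for name, scores in systems.items():
    aucs = [roc_auc_score(y_true[au], scores[au]) for au in aus]
    print(f"{name}: mean AUC = {np.mean(aucs):.3f}")
```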


Author(s):  
Guanbin Li ◽  
Xin Zhu ◽  
Yirui Zeng ◽  
Qing Wang ◽  
Liang Lin

Facial action unit (AU) recognition is a crucial task for facial expression analysis and has attracted extensive attention in the fields of artificial intelligence and computer vision. Existing works have either focused on designing or learning complex regional feature representations, or delved into various types of AU relationship modeling. Albeit with varying degrees of progress, it is still arduous for existing methods to handle complex situations. In this paper, we investigate how to integrate semantic relationship propagation between AUs into a deep neural network framework to enhance the feature representation of facial regions, and propose an AU semantic relationship embedded representation learning (SRERL) framework. Specifically, by analyzing the symbiosis and mutual exclusion of AUs in various facial expressions, we organize the facial AUs as a structured knowledge graph and integrate a Gated Graph Neural Network (GGNN) into a multi-scale CNN framework to propagate node information through the graph and generate enhanced AU representations. Because the learned features involve both appearance characteristics and AU relationship reasoning, the proposed model is more robust and can cope with more challenging cases, e.g., illumination change and partial occlusion. Extensive experiments on two public benchmarks demonstrate that our method outperforms previous work and achieves state-of-the-art performance.
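
A minimal sketch of the core mechanism the abstract describes: per-AU node features propagated over a fixed relation graph with a gated (GRU-style) update, in the spirit of a Gated Graph Neural Network. This is not the authors' implementation; the class name, identity adjacency placeholder, feature dimension, and number of propagation steps are all illustrative assumptions.

```python
# GGNN-style propagation over an AU relation graph (illustrative sketch;
# adjacency, dimensions, and step count are assumptions, not SRERL's code).
import torch
import torch.nn as nn

class AUGraphPropagation(nn.Module):
    def __init__(self, num_aus=12, dim=64, steps=3):
        super().__init__()
        self.steps = steps
        # Fixed relation graph; in SRERL this would be built from AU
        # co-occurrence (symbiosis / mutual exclusion) statistics.
        # Identity matrix used here purely as a placeholder.
        self.register_buffer("adj", torch.eye(num_aus))
        self.msg = nn.Linear(dim, dim, bias=False)  # message transform
        self.gru = nn.GRUCell(dim, dim)             # gated node update

    def forward(self, node_feats):  # node_feats: (batch, num_aus, dim)
        b, n, d = node_feats.shape
        h = node_feats
        for _ in range(self.steps):
            # Aggregate transformed messages from neighboring AU nodes.
            m = torch.einsum("ij,bjd->bid", self.adj, self.msg(h))
            # GRUCell expects 2-D input; flatten batch and node dims.
            h = self.gru(m.reshape(b * n, d),
                         h.reshape(b * n, d)).reshape(b, n, d)
        return h  # enhanced per-AU representations

# Usage: per-AU regional features from a multi-scale CNN would be fed in.
feats = torch.randn(2, 12, 64)
print(AUGraphPropagation()(feats).shape)  # torch.Size([2, 12, 64])
```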


2018 ◽  
Vol 40 (11) ◽  
pp. 2583-2596 ◽  
Author(s):  
Wei Li ◽  
Farnaz Abtahi ◽  
Zhigang Zhu ◽  
Lijun Yin

Author(s):  
Habibullah Akbar ◽  
Sintia Dewi ◽  
Yuli Azmi Rozali ◽  
Lita Patricia Lunanta ◽  
Nizirwan Anwar ◽  
...  
