Micro-expression Recognition Based on Facial Graph Representation Learning and Facial Action Unit Fusion

Author(s):  
Ling Lei ◽  
Tong Chen ◽  
Shigang Li ◽  
Jianfeng Li

Author(s):  
Guanbin Li ◽  
Xin Zhu ◽  
Yirui Zeng ◽  
Qing Wang ◽  
Liang Lin

Facial action unit (AU) recognition is a crucial task for facial expression analysis and has attracted extensive attention in the fields of artificial intelligence and computer vision. Existing works have either focused on designing or learning complex regional feature representations, or delved into various types of AU relationship modeling. Despite varying degrees of progress, existing methods still struggle with complex situations. In this paper, we investigate how to integrate semantic relationship propagation between AUs into a deep neural network framework to enhance the feature representation of facial regions, and propose an AU semantic relationship embedded representation learning (SRERL) framework. Specifically, by analyzing the symbiosis and mutual exclusion of AUs in various facial expressions, we organize the facial AUs in the form of a structured knowledge graph and integrate a Gated Graph Neural Network (GGNN) into a multi-scale CNN framework to propagate node information through the graph and generate enhanced AU representations. As the learned features involve both appearance characteristics and AU relationship reasoning, the proposed model is more robust and can cope with more challenging cases, e.g., illumination change and partial occlusion. Extensive experiments on two public benchmarks demonstrate that our method outperforms previous work and achieves state-of-the-art performance.
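The GGNN-based propagation described above can be illustrated with a minimal sketch: per-AU feature vectors (e.g., pooled from multi-scale CNN regions) are treated as graph nodes, messages are passed along a fixed AU adjacency, and a gated (GRU-style) update refines each node state. The adjacency below is a hypothetical uniform placeholder; the paper derives the graph structure from AU symbiosis and mutual-exclusion statistics, and the layer sizes and step count are illustrative, not the authors' settings.

```python
import torch
import torch.nn as nn

class AUGraphPropagation(nn.Module):
    """Sketch of gated graph propagation over an AU relation graph."""

    def __init__(self, num_aus: int, dim: int, steps: int = 3):
        super().__init__()
        self.steps = steps
        self.msg = nn.Linear(dim, dim)    # message transform
        self.gru = nn.GRUCell(dim, dim)   # gated node-state update
        # Hypothetical uniform adjacency; the paper builds this from
        # AU co-occurrence (symbiosis) and mutual-exclusion analysis.
        self.register_buffer("adj", torch.full((num_aus, num_aus), 1.0 / num_aus))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, num_aus, dim) per-AU features from the multi-scale CNN
        b, n, d = h.shape
        for _ in range(self.steps):
            m = torch.einsum("ij,bjd->bid", self.adj, self.msg(h))  # aggregate neighbors
            h = self.gru(m.reshape(b * n, d), h.reshape(b * n, d)).reshape(b, n, d)
        return h  # relationship-enhanced AU representations
```

The enhanced node states would then feed per-AU classifiers, so each prediction reflects both regional appearance and the relational reasoning encoded in the graph.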


2020 ◽  
Vol 102 ◽  
pp. 107127 ◽  
Author(s):  
Nishant Sankaran ◽  
Deen Dayal Mohan ◽  
Nagashri N. Lakshminarayana ◽  
Srirangaraj Setlur ◽  
Venu Govindaraju

Author(s):  
Yingruo Fan ◽  
Zhaojiang Lin

Facial action unit (AU) intensity estimation aims to measure the intensity of different facial muscle movements. External knowledge, such as AU co-occurrence relationships, is typically leveraged to improve performance. However, AU characteristics may vary among individuals due to the different physiological structures of human faces. To this end, we propose a novel geometry-guided representation learning (G2RL) method for facial AU intensity estimation. Specifically, our backbone model is based on a heatmap regression framework, where the produced heatmaps reflect rich information associated with AU intensities and their spatial distributions. In addition, we incorporate external geometric knowledge into the backbone model to guide the training process via a learned projection matrix. Experimental results on two benchmark datasets demonstrate that our method is comparable to state-of-the-art approaches and validate the effectiveness of incorporating external geometric knowledge for facial AU intensity estimation.
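As a rough illustration of the heatmap-regression backbone and the learned projection, the sketch below decodes a feature map into one heatmap per AU, reads intensity off the peak response, and projects external geometric features (e.g., 68 facial landmarks, hence the hypothetical `geo_dim` of 136) into the AU space for use in an auxiliary guidance loss. This is an assumed reading of the abstract, not the authors' released G2RL architecture.

```python
import torch
import torch.nn as nn

class HeatmapIntensityHead(nn.Module):
    """Sketch: heatmap regression for AU intensity with geometric guidance."""

    def __init__(self, in_ch: int, num_aus: int, geo_dim: int = 136):
        super().__init__()
        self.decode = nn.Conv2d(in_ch, num_aus, kernel_size=1)  # one heatmap per AU
        self.project = nn.Linear(geo_dim, num_aus)              # learned projection matrix

    def forward(self, feat: torch.Tensor, geo: torch.Tensor = None):
        # feat: (batch, in_ch, H, W) backbone features; geo: (batch, geo_dim) landmarks
        heatmaps = self.decode(feat)
        intensity = heatmaps.amax(dim=(2, 3))  # peak response as the intensity estimate
        guide = self.project(geo) if geo is not None else None  # training-time guidance
        return intensity, heatmaps, guide
```

During training, a loss term tying the predictions to `guide` would inject the geometric knowledge; at test time the projection branch can simply be dropped.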


2021 ◽  
Author(s):  
Yingjie Chen ◽  
Diqi Chen ◽  
Yizhou Wang ◽  
Tao Wang ◽  
Yun Liang

2021 ◽  
pp. 1-17
Author(s):  
Shixin Cen ◽  
Yang Yu ◽  
Gang Yan ◽  
Ming Yu ◽  
Yanlei Kong

As a spontaneous facial expression, a micro-expression reveals the psychological responses of human beings. However, micro-expression recognition (MER) is highly susceptible to noise interference due to the short duration and low intensity of facial actions. Research on facial action coding systems explores the correlation between emotional states and facial actions, which provides more discriminative features. Therefore, based on the exploration of this correlation information, the goal of our work is to propose a spatiotemporal network that is robust to low-intensity muscle movements for the MER task. First, a multi-scale weighted module is proposed to encode the spatial global context, obtained by merging features of different resolutions preserved from the backbone network. Second, we propose a multi-task facial action learning module that uses the constraints of the correlation between muscle movements and micro-expressions to encode local action features. In addition, a clustering constraint term is introduced to restrict the feature distribution of similar actions, improving category separability in the feature space. Finally, the global context and local action features are stacked as high-quality spatial descriptions to predict micro-expressions by passing through a Convolutional Long Short-Term Memory (ConvLSTM) network. Comparative experiments on the SMIC, CASME-I, and CASME-II datasets show that the proposed method outperforms other mainstream methods.
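To make the final stage concrete, below is a minimal ConvLSTM cell in the standard formulation, together with a hypothetical `classify_sequence` helper that consumes per-frame spatial descriptors (the stacked global-context and local-action maps) and pools the last hidden state for prediction. The cell is the generic ConvLSTM recurrence, not the paper's exact module, and the helper's names and classification head are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: convolutional gates over (input, hidden)."""

    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        # x: (batch, in_ch, H, W); state: (hidden, cell), each (batch, hid_ch, H, W)
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

def classify_sequence(frame_feats, cell, head):
    # frame_feats: list of (batch, in_ch, H, W) stacked spatial descriptors per frame
    b, _, hgt, wid = frame_feats[0].shape
    h = frame_feats[0].new_zeros(b, cell.hid_ch, hgt, wid)
    c = torch.zeros_like(h)
    for x in frame_feats:                # unroll over the micro-expression clip
        h, c = cell(x, (h, c))
    return head(h.mean(dim=(2, 3)))      # pooled last state -> micro-expression logits
```

Here `head` would be, e.g., `nn.Linear(hid_ch, num_classes)`; the convolutional gates let the recurrence track where the low-intensity muscle movements occur, not just when.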


2016 ◽  
Vol 25 (8) ◽  
pp. 3931-3946 ◽  
Author(s):  
Kaili Zhao ◽  
Wen-Sheng Chu ◽  
Fernando De la Torre ◽  
Jeffrey F. Cohn ◽  
Honggang Zhang
