Micro-expression recognition based on facial action learning with muscle movement constraints

2021 ◽  
pp. 1-17
Author(s):  
Shixin Cen ◽  
Yang Yu ◽  
Gang Yan ◽  
Ming Yu ◽  
Yanlei Kong

As a spontaneous facial expression, a micro-expression reveals the psychological responses of human beings. However, micro-expression recognition (MER) is highly susceptible to noise interference due to the short duration and low intensity of facial actions. Research on facial action coding systems explores the correlation between emotional states and facial actions, which provides more discriminative features. Therefore, building on this correlation information, the goal of our work is to propose a spatiotemporal network that is robust to low-intensity muscle movements for the MER task. First, a multi-scale weighted module is proposed to encode the spatial global context, obtained by merging features of different resolutions preserved from the backbone network. Second, we propose a multi-task facial action learning module that uses the correlation between muscle movements and micro-expressions as a constraint to encode local action features. In addition, a clustering constraint term is introduced to restrict the feature distribution of similar actions and improve category separability in feature space. Finally, the global context and local action features are stacked as high-quality spatial descriptions and passed through a Convolutional Long Short-Term Memory (ConvLSTM) network to predict micro-expressions. Comparative experiments on the SMIC, CASME-I, and CASME-II datasets show that the proposed method outperforms other mainstream methods.
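The multi-scale weighted merge described above can be sketched roughly as follows. This is a minimal NumPy illustration, assuming nearest-neighbour upsampling and softmax scale weights; the paper's actual module, and how its weights are learned, is not specified in the abstract.

```python
import numpy as np

def upsample_nearest(feat, out_h, out_w):
    """Nearest-neighbour upsampling of a (H, W, C) feature map."""
    h, w, _ = feat.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return feat[rows][:, cols]

def multiscale_weighted_merge(feats, weights):
    """Merge feature maps preserved at different backbone resolutions
    into one global-context map: upsample every scale to the largest
    resolution and take a softmax-weighted sum (a hypothetical reading
    of the paper's multi-scale weighted module)."""
    out_h = max(f.shape[0] for f in feats)
    out_w = max(f.shape[1] for f in feats)
    w = np.exp(weights - np.max(weights))   # softmax over scale weights
    w = w / w.sum()
    return sum(wi * upsample_nearest(f, out_h, out_w)
               for wi, f in zip(w, feats))

# toy usage: three feature scales preserved from a backbone
feats = [np.random.rand(8, 8, 4), np.random.rand(4, 4, 4), np.random.rand(2, 2, 4)]
ctx = multiscale_weighted_merge(feats, np.array([1.0, 0.5, 0.2]))
print(ctx.shape)  # (8, 8, 4)
```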

Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4437
Author(s):  
Shixin Cen ◽  
Yang Yu ◽  
Gang Yan ◽  
Ming Yu ◽  
Qing Yang

As a spontaneous facial expression, a micro-expression can reveal the psychological responses of human beings. Micro-expression recognition is therefore widely studied for its potential in clinical diagnosis, psychological research, and security. However, it remains a formidable challenge due to the short duration and low intensity of the facial actions. In this paper, a sparse spatiotemporal descriptor for micro-expression recognition is developed using the Enhanced Local Cube Binary Pattern (Enhanced LCBP). The proposed Enhanced LCBP is composed of three complementary binary features: Spatial Difference Local Cube Binary Patterns (Spatial Difference LCBP), Temporal Direction Local Cube Binary Patterns (Temporal Direction LCBP), and Temporal Gradient Local Cube Binary Patterns (Temporal Gradient LCBP). Together, these provide binary features with spatiotemporal complementarity to capture subtle facial changes. In addition, because redundant information among the division grids weakens the descriptor's ability to distinguish micro-expressions, Multi-Regional Joint Sparse Learning is designed to perform feature selection over the division grids, paying more attention to the critical local regions. Finally, a Multi-kernel Support Vector Machine (SVM) fuses the selected features for the final classification. The proposed method achieves promising results on four spontaneous micro-expression datasets, and further analysis of the parameter evaluation and confusion matrices demonstrates its effectiveness.
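The mechanism behind a local cube binary pattern can be conveyed with a simplified sketch: each pixel is compared against its 3x3x3 spatiotemporal neighbourhood, the comparison bits form a code, and the codes are histogrammed over the clip. This collapses the paper's three complementary LCBP variants into one naive pattern, so it illustrates only the general idea, not the actual descriptor.

```python
import numpy as np

def cube_binary_pattern(clip, t, y, x):
    """Binary code for one voxel of a (T, H, W) clip: each of the 26
    neighbours in the 3x3x3 cube contributes a bit that is set when
    the neighbour is >= the centre value."""
    center = clip[t, y, x]
    code, bit = 0, 0
    for dt in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dt == dy == dx == 0:
                    continue
                code |= int(clip[t + dt, y + dy, x + dx] >= center) << bit
                bit += 1
    return code

def lcbp_histogram(clip, bins=64):
    """Histogram of cube codes over all interior voxels; folding the
    26-bit code into `bins` keeps the descriptor compact."""
    T, H, W = clip.shape
    hist = np.zeros(bins, dtype=np.int64)
    for t in range(1, T - 1):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                hist[cube_binary_pattern(clip, t, y, x) % bins] += 1
    return hist

clip = np.random.rand(5, 10, 10)
h = lcbp_histogram(clip)
print(h.sum())  # 192 interior voxels: 3 * 8 * 8
```

In the paper's setting such histograms would be computed per division grid and then pruned by the sparse-learning step before the multi-kernel SVM.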


Author(s):  
Trang Thanh Quynh Le ◽  
Thuong-Khanh Tran ◽  
Manjeet Rege

A facial micro-expression is a subtle and involuntary facial expression of short duration and low intensity through which hidden feelings can be disclosed. The field of micro-expression analysis has been receiving substantial attention due to its potential value in a wide variety of practical applications. A number of studies have proposed sophisticated hand-crafted feature representations to support automatic micro-expression recognition. This paper employs a dynamic image computation method for feature extraction, so that features can be learned on localized facial regions, along with deep convolutional networks to identify the micro-expressions presented in the extracted dynamic images. The proposed framework is simple compared with existing frameworks that use complex hand-crafted feature descriptors. For performance evaluation, the framework is tested on three publicly available databases, as well as on an integrated database in which the individual databases are merged into a single data pool. Strong results from the series of experiments show that the technique is promising for recognizing micro-expressions.
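Dynamic image computation compresses a clip into a single image whose pixels encode temporal ordering. A common closed-form uses linear rank-pooling weights; the sketch below uses the simplified weights alpha_t = 2t - T - 1 (the paper may use the fuller approximate-rank-pooling coefficients instead):

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a stack of frames into one 'dynamic image' using linear
    rank-pooling weights alpha_t = 2t - T - 1. The weights sum to zero,
    so a perfectly static clip maps to an all-zero image and only
    temporal change survives in the result."""
    frames = np.asarray(frames, dtype=float)  # shape (T, H, W)
    T = frames.shape[0]
    alphas = 2.0 * np.arange(1, T + 1) - T - 1
    return np.tensordot(alphas, frames, axes=1)
```

The resulting single-channel map can then be fed to an ordinary 2-D CNN, which is what makes this framework simpler than spatiotemporal hand-crafted descriptors.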


Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5553
Author(s):  
Yue Zhao ◽  
Jiancheng Xu

Human beings are particularly inclined to express real emotions through micro-expressions of subtle amplitude and short duration. Though people regularly recognize many distinct emotions, research has for the most part been limited to six basic categories: happiness, surprise, sadness, anger, fear, and disgust. As with normal (i.e., macro-) expressions, most current research into micro-expression recognition focuses on these six basic emotions. This paper describes an important group of micro-expressions, which we call compound emotion categories. Compound micro-expressions are constructed by combining two basic micro-expressions and reflect more complex mental states and richer human facial emotions. In this study, we first synthesized a Compound Micro-expression Database (CMED) based on existing spontaneous micro-expression datasets. The subtle features of micro-expressions make it difficult to observe their motion tracks and characteristics, so synthesizing compound micro-expression images faces many challenges and limitations. The proposed method first applied the Eulerian Video Magnification (EVM) method to enhance the facial motion features of basic micro-expressions for generating compound images. The consistent and differential facial muscle articulations (typically referred to as action units) associated with each emotion category were labeled to become the foundation for generating compound micro-expressions. Secondly, we extracted the apex frames of the CMED by 3D Fast Fourier Transform (3D-FFT). Moreover, the proposed method calculated the optical flow information between the onset frame and the apex frame to produce an optical flow feature map. Finally, we designed a shallow network to extract high-level features from these optical flow maps.
In this study, we combined four existing databases of spontaneous micro-expressions (CASME I, CASME II, CAS(ME)2, SAMM) to generate the CMED and test the validity of our network. The experiments show that the deep network framework designed in this study recognizes the emotional information of both basic and compound micro-expressions well.
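The apex-frame extraction step can be illustrated with a frequency-domain sketch that works one-dimensionally along time: band-pass each pixel's temporal spectrum and pick the frame carrying the most residual energy. This is a hypothetical simplification of the paper's 3D-FFT procedure, and the band limits `low`/`high` are illustrative choices.

```python
import numpy as np

def find_apex(clip, low=1, high=None):
    """Apex-frame index for a (T, H, W) clip: zero out the DC term and
    frequencies above `high` in each pixel's temporal FFT, invert, and
    return the frame with the largest remaining energy. A 1-D-along-time
    stand-in for a full 3D-FFT analysis."""
    T = clip.shape[0]
    if high is None:
        high = T // 2
    spec = np.fft.fft(clip, axis=0)
    mask = np.zeros(T)
    mask[low:high + 1] = 1              # keep band [low, high] ...
    mask[T - high:T - low + 1] = 1      # ... and its conjugate mirror
    filtered = np.fft.ifft(spec * mask[:, None, None], axis=0).real
    energy = (filtered ** 2).sum(axis=(1, 2))
    return int(np.argmax(energy))
```

The optical-flow map between the onset frame and the detected apex frame could then be computed with an off-the-shelf method such as OpenCV's Farnebäck implementation before being passed to the shallow network.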


In this paper we propose a compact CNN model for facial expression recognition. Expression recognition on low-quality images is much more challenging, and interesting, due to the presence of low-intensity expressions; these low-intensity expressions are difficult to distinguish at insufficient image resolution. Data collection for FER is expensive and time-consuming, and research indicates that images downloaded from the Internet are useful for modeling and training expression recognition. We use extra datasets to improve the training of facial expression recognition, each representing a specific data source. Moreover, to prevent subjective annotation, each dataset is labeled with a different approach to ensure annotation quality.

Recognizing the precise expression among the variety of expressions of different people is a hard problem. To address it, we propose an Emotion Detection Model to extract emotions from a given input image. This work mainly focuses on the psychological approach of the color circle-emotion relation [1] to find the accurate emotion in the input image. Initially, the whole image is preprocessed and the data is studied pixel by pixel; the combinations of the circles based on the combined data result in a new color, which is directly correlated to a particular emotion. Based on these psychological aspects, the output is of reasonable accuracy. The major application of our work is to predict a person's emotion from face images or video frames. This can also be applied to evaluate public opinion about a particular movie from video reaction posts on social media, and one of the diverse applications of our system is to understand students' learning from their emotions.

Human beings show their emotional states and intentions through facial expressions. Facial expressions are powerful and natural methods that convey the emotional status of humans. The approach used in this work successfully exploits temporal information and improves accuracy on public benchmarking databases. The basic facial expressions are happiness, fear, anger, disgust, sadness, and surprise [2]; contempt was subsequently added as one of the basic emotions. Having sufficient well-labeled training data with variation across populations and environments is important for the design of a deep expression recognition system. Behaviors, poses, facial expressions, actions, and speech are considered channels that convey human emotions, and much ongoing research explores the correlation between these channels and emotions. This paper highlights the development of a system that automatically recognizes facial expressions.


2020 ◽  
pp. 5-13
Author(s):  
Vishal Dubey ◽  
Bhavya Takkar ◽  
...  

Micro-expression is a form of nonverbal communication and, as a matter of fact, appears for only a minute fraction of a second. One cannot control a micro-expression, as it reveals our actual emotional state even when we try to hide or conceal our genuine emotions. Micro-expressions are so rapid that it is challenging for any human being to detect them with the bare eye. This subtle expression is spontaneous and involuntary, and it gives away the true emotional response. It happens when a person wants to conceal a specific emotion but the brain reacts appropriately to what that person is actually feeling, so the person displays their true feelings very briefly and later produces a false emotional response. Ordinary emotional expressions tend to last about 0.5-4.0 seconds, whereas a micro-expression can last less than half a second. Compared with regular facial expressions, micro-expressions make it far harder to hide one's response to a particular situation. They cannot be controlled because of the short time interval, but with a high-speed camera we can capture one's expressions and replay them at slow speed. Over the last ten years, researchers from all over the globe have been studying automatic micro-expression recognition in computer science, security, psychology, and many other fields. The objective of this paper is to provide insight into micro-expression analysis using a 3D CNN. Many micro-expression datasets have been released in the last decade; we performed our experiment on the SMIC micro-expression dataset and compared the results after applying two different activation functions.
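The core operation of a 3D CNN, and the kind of activation-function swap the experiment compares, can be sketched in a few lines of NumPy. The averaging kernel and the ReLU/ELU pair here are illustrative assumptions; the abstract does not state which two activation functions were actually tested.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid'-mode 3-D convolution (cross-correlation, as in CNN
    frameworks) of a (T, H, W) clip with a (kt, kh, kw) kernel."""
    kt, kh, kw = kernel.shape
    T, H, W = volume.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[t, y, x] = np.sum(volume[t:t + kt, y:y + kh, x:x + kw] * kernel)
    return out

def activation(x, kind="relu"):
    """Two candidate activations one might compare (hypothetical pair)."""
    if kind == "relu":
        return np.maximum(x, 0.0)
    if kind == "elu":
        return np.where(x > 0, x, np.exp(x) - 1.0)
    raise ValueError(kind)

clip = np.random.rand(6, 8, 8)
kernel = np.ones((3, 3, 3)) / 27.0   # toy averaging filter
fmap = activation(conv3d_valid(clip, kernel), "relu")
print(fmap.shape)  # (4, 6, 6)
```

Swapping the `kind` argument while holding the rest of the network fixed is the shape of the comparison the paper reports on SMIC.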

