Necessary Morphological Patches Extraction for Automatic Micro-Expression Recognition

2018 ◽  
Vol 8 (10) ◽  
pp. 1811 ◽  
Author(s):  
Yue Zhao ◽  
Jiancheng Xu

Micro-expressions are subtle and brief facial expressions that humans produce while trying to hide their true emotional states. In recent years, micro-expression recognition has attracted wide attention in the fields of psychology, mass media, and computer vision. The shortest micro-expression lasts only 1/25 s. Furthermore, unlike macro-expressions, micro-expressions have considerably lower intensity and involve incomplete contraction of the facial muscles. Because of these characteristics, automatic micro-expression detection and recognition pose great challenges in the field of computer vision. In this paper, we propose a novel automatic facial expression recognition framework based on necessary morphological patches (NMPs) to better detect and identify micro-expressions. A micro-expression is a subconscious facial muscle response that is not controlled by rational thought; it therefore involves only a few facial muscles and has local properties. NMPs are the facial regions that must be involved when a micro-expression occurs. NMPs are screened by weighting active facial patches rather than using the entire facial area holistically. Firstly, we manually define the active facial patches according to the facial landmark coordinates and the Facial Action Coding System (FACS). Secondly, we use an LBP-TOP descriptor to extract features in these patches and the Entropy-Weight method to select the NMPs. Finally, we obtain the weighted LBP-TOP features of these NMPs. We evaluate the method on two recent publicly available datasets that provide sufficient samples: CASME II and SMIC. Compared with many recent state-of-the-art approaches, our method achieves more promising recognition results.
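
The Entropy-Weight selection step mentioned above follows a standard formulation. A minimal sketch (function and score names are illustrative, not from the paper): each patch receives a per-sample score derived from its LBP-TOP features, the entropy of that score across samples is computed, and low-entropy (high-divergence) patches receive the largest weights.

```python
import numpy as np

def entropy_weights(scores):
    """Entropy-weight method: patches whose scores diverge more across
    samples (lower normalized entropy) receive larger weights.
    scores: (n_samples, n_patches) strictly positive matrix, e.g. a
    per-patch discriminability score derived from LBP-TOP features."""
    p = scores / scores.sum(axis=0, keepdims=True)   # column-normalise
    n = scores.shape[0]
    e = -(p * np.log(p)).sum(axis=0) / np.log(n)     # entropy in [0, 1]
    d = 1.0 - e                                      # degree of divergence
    return d / d.sum()                               # weights sum to 1

rng = np.random.default_rng(0)
w = entropy_weights(rng.random((50, 8)) + 0.1)       # 50 samples, 8 patches
assert np.isclose(w.sum(), 1.0) and (w >= 0).all()
```

Patches whose weights fall below a threshold would then be discarded, leaving the NMPs; the thresholding rule itself is not specified here.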

2018 ◽  
Vol 4 (10) ◽  
pp. 119 ◽  
Author(s):  
Adrian Davison ◽  
Walied Merghani ◽  
Moi Yap

Micro-expressions are brief, spontaneous facial expressions that appear on the face when a person conceals an emotion, making them different from normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow) and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement dataset). The best result achieves 86.35% accuracy when classifying the proposed five classes on CASME II using HOG 3D, outperforming the state-of-the-art five-class emotion-based classification on CASME II. The results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.
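
The LBP-TOP descriptor used in these experiments can be sketched as follows. This is a deliberate simplification, assuming one middle slice per plane and a basic 8-neighbour LBP, rather than the full multi-slice, block-wise implementation:

```python
import numpy as np

def lbp_hist(img):
    """Basic 8-neighbour LBP histogram (256 bins) of a 2D array."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit    # threshold against centre
    return np.bincount(code.ravel(), minlength=256)

def lbp_top(clip):
    """LBP-TOP: concatenate LBP histograms from the XY, XT and YT
    planes of a (T, H, W) video cube, sampling only the middle slice
    of each orientation (the full descriptor averages all slices)."""
    t, h, w = clip.shape
    xy = lbp_hist(clip[t // 2])            # spatial texture
    xt = lbp_hist(clip[:, h // 2, :])      # horizontal motion texture
    yt = lbp_hist(clip[:, :, w // 2])      # vertical motion texture
    return np.concatenate([xy, xt, yt]).astype(float)

feat = lbp_top(np.random.default_rng(1).random((20, 32, 32)))
assert feat.shape == (768,)
```

In practice the descriptor is computed per facial block and the block histograms are concatenated before classification.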


Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 497 ◽  
Author(s):  
Yue Zhao ◽  
Jiancheng Xu

A micro-expression is a spontaneous emotional expression that is not controlled by logic. It is both transient (short duration) and subtle (small intensity), so it is difficult to detect in people. Micro-expression detection is widely used in the fields of psychological analysis, criminal justice and human-computer interaction. Like traditional facial expressions, micro-expressions also involve local muscle movement. Psychologists have shown that micro-expressions have necessary morphological patches (NMPs), which are triggered by emotion. The objective of this paper is to sort and filter these NMPs and to extract features from them to train classifiers that recognize micro-expressions. Firstly, we use the optical flow method to compare the onset frame and the apex frame of the micro-expression sequences; in this way, we find the active facial patches. Secondly, to find the NMPs of micro-expressions, this study calculates local binary pattern from three orthogonal planes (LBP-TOP) operators and cascades them with optical flow histograms to form the fusion features of the active patches. Finally, a random forest feature selection (RFFS) algorithm is used to identify the NMPs, which are then classified via a support vector machine (SVM) classifier. We evaluated the proposed method on two popular publicly available databases: CASME II and SMIC. The results show that the NMPs are statistically determined and provide significant discriminative ability compared with holistic utilization of all facial regions.
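
The first step, locating active facial patches by comparing onset and apex frames, can be approximated as below. Note the hedge: the paper uses dense optical flow, while this sketch substitutes absolute frame difference as the motion measure, and the grid tiling and patch count are illustrative.

```python
import numpy as np

def active_patches(onset, apex, grid=6, top_k=8):
    """Rank facial patches by motion between onset and apex frames.
    Simplification: true optical flow (e.g. Farneback) is replaced by
    absolute frame difference as a motion proxy. Returns indices of
    the top_k most active patches on a grid x grid tiling."""
    motion = np.abs(apex.astype(float) - onset.astype(float))
    h, w = motion.shape
    ph, pw = h // grid, w // grid
    energy = np.array([
        motion[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw].mean()
        for i in range(grid) for j in range(grid)
    ])
    return np.argsort(energy)[::-1][:top_k]

onset = np.zeros((48, 48))
apex = onset.copy()
apex[8:16, 8:16] = 1.0            # simulate muscle movement in one patch
idx = active_patches(onset, apex)
assert idx[0] == 1 * 6 + 1        # patch (row 1, col 1) is most active
```

The fused LBP-TOP and flow-histogram features would then be extracted only from these patches before RFFS ranking.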


2020 ◽  
Vol 10 (14) ◽  
pp. 4959
Author(s):  
Reda Belaiche ◽  
Yu Liu ◽  
Cyrille Migniot ◽  
Dominique Ginhac ◽  
Fan Yang

Micro-Expression (ME) recognition is a hot topic in computer vision, as it offers a gateway to capturing and understanding daily human emotions. It is nonetheless a challenging problem because MEs are typically transient (lasting less than 200 ms) and subtle. Recent advances in machine learning enable new and effective methods to be adopted for solving diverse computer vision tasks. In particular, the use of deep learning techniques on large datasets outperforms approaches based on classical machine learning that rely on hand-crafted features. Even though available datasets for spontaneous MEs are scarce and much smaller, using off-the-shelf Convolutional Neural Networks (CNNs) still yields satisfactory classification results. However, these networks are demanding in terms of memory consumption and computational resources. This poses great challenges when deploying CNN-based solutions in applications such as driver monitoring and comprehension recognition in virtual classrooms, which demand fast and accurate recognition. As these networks were initially designed for tasks in other domains, they are over-parameterized and need to be optimized for ME recognition. In this paper, we propose a new network based on the well-known ResNet18, which we optimized for ME classification in two ways. Firstly, we reduced the depth of the network by removing residual layers. Secondly, we introduced a more compact representation of the optical flow used as input to the network. We present extensive experiments and demonstrate that the proposed network obtains accuracy comparable to state-of-the-art methods while significantly reducing the required memory. Our best classification accuracy was 60.17% on the challenging composite dataset containing five objective classes. Our method takes only 24.6 ms to classify an ME video clip, less than the duration of the shortest ME, which lasts 40 ms. Our CNN design is therefore suitable for real-time embedded applications with limited memory and computing resources.
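
The second optimization, a more compact optical-flow input, can be illustrated as follows. The magnitude/orientation encoding and the pooled size are assumptions for the sketch, not the authors' exact representation:

```python
import numpy as np

def compact_flow(flow, out=28):
    """Compact optical-flow input (a sketch of the general idea):
    a dense (H, W, 2) flow field is converted to magnitude and
    orientation channels and average-pooled to an out x out grid,
    shrinking the tensor fed to the network."""
    mag = np.hypot(flow[..., 0], flow[..., 1])       # flow magnitude
    ang = np.arctan2(flow[..., 1], flow[..., 0])     # flow orientation
    chans = np.stack([mag, np.cos(ang), np.sin(ang)], axis=0)
    c, h, w = chans.shape
    fh, fw = h // out, w // out
    # average-pool each channel to (out, out)
    pooled = (chans[:, :fh * out, :fw * out]
              .reshape(c, out, fh, out, fw).mean(axis=(2, 4)))
    return pooled                                     # shape (3, out, out)

x = compact_flow(np.random.default_rng(2).random((112, 112, 2)))
assert x.shape == (3, 28, 28)
```

A smaller input resolution directly reduces the activation memory of every subsequent convolutional layer, which is what makes the pruned ResNet18 embedded-friendly.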


Author(s):  
Xinyu Li ◽  
Guangshun Wei ◽  
Jie Wang ◽  
Yuanfeng Zhou

Abstract: Micro-expression recognition is an interdisciplinary study spanning psychology and computer science, and it has a wide range of applications (e.g., psychological and clinical diagnosis, emotional analysis, and criminal investigation). However, the subtle and diverse changes in facial muscles make it difficult for existing methods to extract effective features, which limits the improvement of micro-expression recognition accuracy. Therefore, we propose a multi-scale joint feature network based on optical flow images for micro-expression recognition. First, we generate an optical flow image that reflects subtle facial motion information. The optical flow image is then fed into the multi-scale joint network for feature extraction and classification. The proposed joint feature module (JFM) integrates features from different layers, which is beneficial for capturing micro-expression features of different amplitudes. To improve the recognition ability of the model, we also adopt a strategy of fusing the feature prediction results of the three JFMs with those of the backbone network. Our experimental results show that our method is superior to state-of-the-art methods on three benchmark datasets (SMIC, CASME II, and SAMM) and a combined dataset (3DB).
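
The idea of joining features from different layers can be sketched as below; the pool-and-concatenate structure is an assumed simplification of the paper's JFM, whose exact wiring is not given here:

```python
import numpy as np

def joint_feature(maps):
    """Joint-feature sketch: feature maps from different network
    depths are average-pooled to the coarsest spatial size and
    concatenated along channels, so motion cues of different
    amplitudes coexist in one descriptor. Each map is (C, S, S)."""
    target = min(m.shape[1] for m in maps)           # coarsest resolution
    fused = []
    for m in maps:
        c, s, _ = m.shape
        f = s // target                              # pooling factor
        fused.append(m[:, :f * target, :f * target]
                     .reshape(c, target, f, target, f).mean(axis=(2, 4)))
    return np.concatenate(fused, axis=0)

rng = np.random.default_rng(3)
layers = [rng.random((8, 32, 32)),    # shallow, fine-grained features
          rng.random((16, 16, 16)),   # mid-level features
          rng.random((32, 8, 8))]     # deep, semantic features
fused = joint_feature(layers)
assert fused.shape == (56, 8, 8)
```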


2020 ◽  
Vol 13 (6) ◽  
pp. 546-559
Author(s):  
Ulla Rosiani ◽  
◽  
Priska Choirina ◽  
Niyalatul Muna ◽  
Eko Mulyanto ◽  
...  

A micro-expression occurs when a person tries to hold back or hide an emotion, but the emotion still leaks out in one or two areas of the face, or as a short expression across the whole face. Lasting no more than 500 ms, micro-expressions can be difficult to recognize, and it is hard to detect where the leakage area is located. This study presents a new method to recognize and detect subtle motion in the facial component areas using a Phase Only Correlation algorithm with All Block Search (POC-ABS) to estimate the motion of all block areas. The proposed block matching method compares each block in two frames to determine whether there is movement. If two corresponding blocks are identical, no motion vector is produced; if they differ, the motion vector computed by the POC is reported. The motion vector serves as a motion feature that indicates whether movement occurred in the corresponding block. To further confirm the reliability of the proposed method, two different classifiers were used for micro-expression recognition on the CASME II dataset. The highest performance results are 94.3% for SVM and 95.6% for KNN. Finally, the algorithm detects motion leaks based on the ratio of the motion vectors. The left and right eyebrows are dominant when expressing disgust, sadness, and surprise, while the movements of the right and left eyes are most dominant for the happiness expression.
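
The Phase Only Correlation at the heart of POC-ABS has a standard form: normalize the cross-power spectrum of two blocks so that only phase remains, and read the translation off the peak of its inverse FFT. A minimal sketch (the per-block pairing and identical-block thresholding of ABS are omitted):

```python
import numpy as np

def poc_shift(block_a, block_b):
    """Phase Only Correlation between two equally sized blocks: the
    peak of the inverse FFT of the phase-normalised cross-power
    spectrum gives the (dy, dx) translation and a confidence peak."""
    fa, fb = np.fft.fft2(block_a), np.fft.fft2(block_b)
    cross = fa * np.conj(fb)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(r), r.shape)
    # map wrapped indices to signed shifts
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, r.shape)]
    return tuple(shift), r.max()

rng = np.random.default_rng(4)
a = rng.random((16, 16))
b = np.roll(a, (2, 3), axis=(0, 1))   # b is a circularly shifted by (2, 3)
(dy, dx), peak = poc_shift(b, a)
assert (dy, dx) == (2, 3) and peak > 0.9
```

A near-unity peak indicates identical blocks (no motion reported); a lower, displaced peak yields the motion vector for that block.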


Information ◽  
2020 ◽  
Vol 11 (8) ◽  
pp. 380
Author(s):  
Boyu Chen ◽  
Zhihao Zhang ◽  
Nian Liu ◽  
Yang Tan ◽  
Xinyu Liu ◽  
...  

A micro-expression is defined as an uncontrollable muscular movement shown on the face when a person is trying to conceal or repress their true emotions. Many researchers have applied deep learning frameworks to micro-expression recognition in recent years, but few have introduced the human visual attention mechanism into the task. In this study, we propose a three-dimensional (3D) spatiotemporal convolutional neural network with the convolutional block attention module (CBAM) for micro-expression recognition. First, image sequences are input to a medium-sized convolutional neural network (CNN) to extract visual features. The network then learns to allocate the feature weights in an adaptive manner with the help of the convolutional block attention module. The method was tested on spontaneous micro-expression databases (Chinese Academy of Sciences Micro-expression II (CASME II) and the Spontaneous Micro-expression Database (SMIC)). The experimental results show that the 3D CNN with the convolutional block attention module outperformed other algorithms in micro-expression recognition.
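
The channel-attention half of CBAM can be sketched as follows; the spatial-attention half is omitted, and w1/w2 are random stand-ins for the shared MLP weights that would be learned in training:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """CBAM-style channel attention (simplified): average- and
    max-pooled channel descriptors pass through a shared two-layer
    MLP, and the sigmoid of their sum rescales each channel of x.
    x: (C, H, W); w1: (C/r, C); w2: (C, C/r)."""
    avg = x.mean(axis=(1, 2))                        # (C,) average pool
    mx = x.max(axis=(1, 2))                          # (C,) max pool
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)     # ReLU hidden layer
    scale = sigmoid(mlp(avg) + mlp(mx))              # (C,) attention weights
    return x * scale[:, None, None]

rng = np.random.default_rng(5)
c, r = 16, 4                                         # channels, reduction ratio
x = rng.random((c, 12, 12))
w1 = rng.standard_normal((c // r, c))
w2 = rng.standard_normal((c, c // r))
y = channel_attention(x, w1, w2)
assert y.shape == x.shape
```

In the paper's setting the module operates on 3D spatiotemporal feature maps; the pooling would then also cover the temporal axis.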


2015 ◽  
Vol 3 (1) ◽  
Author(s):  
Friska G. Batoteng ◽  
Taufiq F. Pasiak ◽  
Shane H. R. Ticoalu

Abstract: Facial expression recognition is one way to recognize emotions that has not received much attention. The muscles that form facial expressions, known as the musculi facialis, move the face and produce the six basic human emotional expressions: happy, sad, angry, fearful, disgusted and surprised. Human facial expressions can be measured using FACS (Facial Action Coding System). This study aims to determine which facial muscles are most frequently and most rarely used, and to determine the emotional expressions of Jokowi, a presidential candidate, through assessment of the facial muscles using FACS. This is a retrospective descriptive study. The research sample comprises 30 photos covering all of Jokowi's facial expressions at the first presidential debate in 2014. The samples were taken from a video of the debate, converted to photos, and then analyzed using FACS. The research showed that the most used action unit and facial muscle is AU 1, which works on the musculus frontalis pars medialis (14.75%). The muscles appearing least often in Jokowi's facial expressions were the musculus orbicularis oculi pars palpebralis and, in AU 24, the musculus orbicularis oris (0.82%). The dominant facial expression seen in Jokowi was the sad expression (36.67%). Keywords: musculi facialis, facial expression, expression of emotion, FACS


2020 ◽  
Vol 6 (12) ◽  
pp. 130
Author(s):  
Adamu Muhammad Buhari ◽  
Chee-Pun Ooi ◽  
Vishnu Monn Baskaran ◽  
Raphaël C. W. Phan ◽  
KokSheik Wong ◽  
...  

Several studies on micro-expression recognition have contributed mainly to accuracy improvement. However, computational complexity has received comparatively less attention, which increases the cost of micro-expression recognition in real-time applications. In addition, the majority of existing approaches require at least two frames (i.e., the onset and apex frames) to compute features for every sample. This paper puts forward new facial graph features based on 68-point landmarks using the Facial Action Coding System (FACS). The proposed feature extraction technique (FACS-based graph features) utilizes facial landmark points to compute a graph for each Action Unit (AU), where the measured distance and gradient of every segment within an AU graph are presented as features. Moreover, the proposed technique performs ME recognition from a single input frame. Results indicate that the proposed FACS-based graph features achieve up to 87.33% recognition accuracy with an F1-score of 0.87 using leave-one-subject-out cross-validation on the SAMM dataset. Besides, the proposed technique computes features at a speed of 2 ms per sample on a Xeon E5-2650 machine.
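
The distance-and-gradient features over AU graphs can be sketched as below. The edge list is a hypothetical stand-in, since the paper's actual AU graphs are defined over specific FACS landmark groups not reproduced here:

```python
import numpy as np

def graph_features(landmarks, edges):
    """For each segment (i, j) of an AU graph over 68 landmarks, emit
    the Euclidean length and the gradient (slope angle) of the
    segment, giving two features per edge."""
    feats = []
    for i, j in edges:
        dx, dy = landmarks[j] - landmarks[i]
        feats += [np.hypot(dx, dy), np.arctan2(dy, dx)]
    return np.array(feats)

rng = np.random.default_rng(6)
pts = rng.random((68, 2)) * 100             # mock 68-point landmark set
au4_edges = [(17, 21), (22, 26), (21, 22)]  # hypothetical brow-region graph
f = graph_features(pts, au4_edges)
assert f.shape == (6,)                      # 2 features per edge
```

Because the features depend only on one frame's landmark geometry, no onset/apex pair is needed, which is what enables single-frame recognition.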


Author(s):  
Jacey-Lynn Minoi ◽  
Duncan Gillies

The aim of this chapter is to identify the face areas containing high facial expression information, which may be useful for facial expression analysis, face and facial expression recognition, and synthesis. In the study of facial expression analysis, landmarks are usually placed on well-defined craniofacial features. In this experiment, the authors selected a set of landmarks based on craniofacial anthropometry and associated each landmark with facial muscles and the Facial Action Coding System (FACS) framework, which makes it possible to locate landmarks in less palpable areas that exhibit high facial expression mobility. The selected landmarks are statistically analysed in terms of facial muscle motion based on FACS. Human faces channel verbal and non-verbal communication (speech, facial expression of emotions, gestures, and other human communicative actions), so these cues may be significant in the identification of expressions such as pain, agony, anger, and happiness. Here, the authors describe the potential of computer-based models of three-dimensional (3D) facial expression analysis and non-verbal communication recognition to assist in biometric recognition and clinical diagnosis.


Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5553
Author(s):  
Yue Zhao ◽  
Jiancheng Xu

Human beings are particularly inclined to express real emotions through micro-expressions of subtle amplitude and short duration. Though people regularly recognize many distinct emotions, for the most part research studies have been limited to six basic categories: happiness, surprise, sadness, anger, fear, and disgust. As with normal expressions (i.e., macro-expressions), most current research into micro-expression recognition focuses on these six basic emotions. This paper describes an important group of micro-expressions, which we call compound emotion categories. Compound micro-expressions are constructed by combining two basic micro-expressions and reflect more complex mental states and richer human facial emotions. In this study, we first synthesized a Compound Micro-expression Database (CMED) based on existing spontaneous micro-expression datasets. The subtle features of micro-expressions make it difficult to observe their motion tracks and characteristics, so there are many challenges and limitations in synthesizing compound micro-expression images. The proposed method first applies the Eulerian Video Magnification (EVM) method to enhance the facial motion features of basic micro-expressions for generating compound images. The consistent and differential facial muscle articulations (typically referred to as action units) associated with each emotion category were labeled to form the foundation for generating compound micro-expressions. Secondly, we extracted the apex frames of the CMED by the 3D Fast Fourier Transform (3D-FFT). Moreover, the proposed method calculates the optical flow information between the onset frame and the apex frame to produce an optical flow feature map. Finally, we designed a shallow network to extract high-level features from these optical flow maps. In this study, we combined four existing spontaneous micro-expression databases (CASME I, CASME II, CAS(ME)2, SAMM) to generate the CMED and test the validity of our network. The results show that the deep network framework designed in this study recognizes the emotional information of both basic and compound micro-expressions well.
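
Apex-frame spotting, which the paper performs with a 3D-FFT, can be approximated for illustration by a simpler frame-difference heuristic (this is explicitly not the paper's method, just the common baseline it refines):

```python
import numpy as np

def apex_index(clip):
    """Apex-frame spotting sketch: the apex is approximated as the
    frame whose difference from the onset (first) frame has maximal
    energy. clip: (T, H, W) grayscale sequence."""
    onset = clip[0].astype(float)
    energy = [np.abs(f - onset).sum() for f in clip.astype(float)]
    return int(np.argmax(energy))

t, h, w = 12, 24, 24
clip = np.zeros((t, h, w))
for i in range(t):                     # motion rises then falls, peaking at frame 6
    amp = 1.0 - abs(i - 6) / 6.0
    clip[i, 8:16, 8:16] = amp
assert apex_index(clip) == 6
```

The onset-to-apex pair found this way is exactly what feeds the optical-flow feature map described above.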

