CASME II
Recently Published Documents


TOTAL DOCUMENTS: 10 (five years: 7)
H-INDEX: 3 (five years: 1)

Author(s): Xinyu Li, Guangshun Wei, Jie Wang, Yuanfeng Zhou

Abstract: Micro-expression recognition lies at the intersection of psychology and computer science and has a wide range of applications (e.g., psychological and clinical diagnosis, emotional analysis, and criminal investigation). However, the subtle and diverse changes in facial muscles make it difficult for existing methods to extract effective features, which limits improvements in micro-expression recognition accuracy. We therefore propose a multi-scale joint feature network based on optical flow images for micro-expression recognition. First, we generate an optical flow image that reflects subtle facial motion information. The optical flow image is then fed into the multi-scale joint network for feature extraction and classification. The proposed joint feature module (JFM) integrates features from different layers, which helps capture micro-expression features of different amplitudes. To improve the recognition ability of the model, we also adopt a strategy of fusing the feature prediction results of the three JFMs with those of the backbone network. Experimental results show that our method outperforms state-of-the-art methods on three benchmark datasets (SMIC, CASME II, and SAMM) and a combined dataset (3DB).
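The first step described above, turning an onset/apex frame pair into an optical-flow image, can be sketched as follows. This is a minimal illustration assuming OpenCV's Farneback estimator; the paper does not publish its exact flow settings, and the function name `flow_image` is hypothetical.

```python
# Hypothetical sketch: building a 3-channel optical-flow image from an
# onset frame and an apex frame. Parameters are illustrative defaults.
import cv2
import numpy as np

def flow_image(onset_path: str, apex_path: str) -> np.ndarray:
    onset = cv2.imread(onset_path, cv2.IMREAD_GRAYSCALE)
    apex = cv2.imread(apex_path, cv2.IMREAD_GRAYSCALE)
    # Dense Farneback flow stands in for whatever estimator the authors used.
    flow = cv2.calcOpticalFlowFarneback(onset, apex, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Stack horizontal flow, vertical flow, and magnitude as an RGB-like image.
    img = np.stack([flow[..., 0], flow[..., 1], mag], axis=-1)
    img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return img
```

The resulting image can then be fed to the multi-scale network in place of raw frames, which is the design choice the abstract motivates.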


2021, Vol 16 (1), pp. 95-101
Author(s): Dibakar Raj Pant, Rolisha Sthapit

Facial expressions are produced by the actions of facial muscles in different regions of the face. They are of two types, macro-expressions and micro-expressions, with the latter of particular interest in computer vision. Analysis of micro-expressions, categorized as disgust, happiness, anger, sadness, surprise, contempt, and fear, is challenging because of their very fast and subtle facial movements. This article presents one machine learning method, the Haar cascade, and two deep learning methods, a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN), for micro-facial expression recognition. First, a Haar Cascade Classifier is used to detect the face as a pre-processing step. Secondly, the detected faces are passed through a series of Convolutional Neural Network (CNN) layers for feature extraction. Thirdly, a Recurrent Neural Network (RNN) classifies the micro facial expressions. Two datasets are used for training and testing the proposed method: the Chinese Academy of Sciences Micro-Expression II (CASME II) and the Spontaneous Actions and Micro-Movements (SAMM) databases. Test accuracies of 84.76% on SAMM and 87% on CASME II are obtained. In addition, the distinction between micro facial expressions and non-micro facial expressions is analyzed with an ROC curve.
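A compact sketch of the described pipeline follows, assuming OpenCV for the Haar detector and PyTorch for the CNN-RNN; layer sizes, the 7-way output head, and the class name `CnnRnn` are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: Haar-cascade face crops fed through a small CNN, whose
# per-frame features are classified over time by an LSTM.
import cv2
import torch
import torch.nn as nn

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(gray_frame):
    boxes = face_cascade.detectMultiScale(gray_frame, 1.1, 5)
    if len(boxes) == 0:
        return None
    x, y, w, h = boxes[0]
    return cv2.resize(gray_frame[y:y + h, x:x + w], (64, 64))

class CnnRnn(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten())                       # -> 32 * 16 * 16 features per frame
        self.rnn = nn.LSTM(32 * 16 * 16, 128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, clips):                   # clips: (batch, time, 1, 64, 64)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)
        return self.head(h[-1])

# Example shape check: CnnRnn()(torch.randn(2, 30, 1, 64, 64)) -> (2, 7) logits
```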


2020, Vol 13 (6), pp. 546-559
Author(s): Ulla Rosiani, Priska Choirina, Niyalatul Muna, Eko Mulyanto, ...

A micro-expression occurs when a person tries to hold back or hide an emotion, yet the emotion still leaks out, either in one or two areas of the face or as a very short expression across the whole face. Lasting no more than 500 ms, micro-expressions are difficult to recognize, and it is hard to detect where the leakage is located. This study presents a new method to recognize and detect subtle motion in the facial component areas using a Phase Only Correlation algorithm with All Block Search (POC-ABS) to estimate the motion of all block areas. This block matching method compares each block in two frames to determine whether there is movement: if the two blocks are identical, no motion vector is produced, whereas if they are non-identical, the POC motion vector is recorded. The motion vector, used as a motion feature, indicates whether there is movement in the corresponding block. To further confirm the reliability of the proposed method, two different classifiers were applied to micro-expression recognition on the CASME II dataset; the highest results are 94.3 percent for SVM and 95.6 percent for KNN. Finally, the algorithm detects motion leakage based on the ratio of the motion vectors: the left and right eyebrows are dominant when expressing disgust, sadness, and surprise, while the right and left eye regions are most dominant for happiness.
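The phase-only correlation step can be illustrated with a short NumPy sketch. This is a generic POC block-matching illustration under assumed block size and threshold values; it is not the authors' implementation.

```python
# Illustrative phase-only correlation (POC) between co-located blocks of two
# frames; a correlation peak away from the origin suggests motion in that block.
import numpy as np

def poc_shift(block_a: np.ndarray, block_b: np.ndarray):
    """Return the (dy, dx) displacement that maximises the POC surface."""
    Fa = np.fft.fft2(block_a.astype(float))
    Fb = np.fft.fft2(block_b.astype(float))
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-9               # keep phase information only
    poc = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(poc), poc.shape)
    # Wrap indices so that shifts can be negative.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, poc.shape)]
    return tuple(shift)

def block_motion(frame_a, frame_b, block=16, thresh=0.5):
    """Scan all non-overlapping blocks; keep those whose shift exceeds thresh."""
    vectors = {}
    h, w = frame_a.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            dy, dx = poc_shift(frame_a[y:y + block, x:x + block],
                               frame_b[y:y + block, x:x + block])
            if (dy ** 2 + dx ** 2) ** 0.5 > thresh:
                vectors[(y, x)] = (dy, dx)
    return vectors
```

Blocks with near-zero shift are treated as static, which mirrors the identical/non-identical block distinction the abstract describes.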


2020, Vol 7 (1), pp. 73-78
Author(s): Ulla Delfana Rosiani, Priska Choirina

Research on micro-expression movements requires a careful pre-processing stage, because it involves observing very subtle movements of very short duration. At this stage, detection and tracking of the face region must always be precise so that motion observation within the face region is accurate. Several sample videos in micro-expression datasets show facial movements accompanied by small head movements, and movements outside the face region affect the performance of a micro-expression recognition system. This study detects and tracks the face location throughout the micro-expression movement. The Viola-Jones method is used for face detection and the Kanade-Lucas-Tomasi (KLT) method is used to track feature points. Once the face is detected precisely, a point on the tip of the nose is tracked, and the face region is reconstructed from this tracked nose tip so that it remains correctly positioned even when the head moves. The test data used in this study are from the CASME II dataset. The experiments show that detection and tracking of the face region can be performed precisely and accurately on all test data. With a face region that is always tracked precisely, the subsequent stages of micro-expression recognition are expected to work well.
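A rough sketch of this pre-processing, using OpenCV's Viola-Jones detector and pyramidal Lucas-Kanade tracking, is shown below. The video file name, the nose-tip estimate from the face box, and the re-anchoring rule are assumptions for illustration, not the study's exact settings.

```python
# Sketch: detect the face once with Viola-Jones, then track a nose-tip point
# with KLT and re-anchor the face box to it on every frame.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("casme2_sample.avi")        # hypothetical file name
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

x, y, w, h = cascade.detectMultiScale(prev_gray, 1.1, 5)[0]
nose = np.array([[[x + w / 2.0, y + 0.6 * h]]], dtype=np.float32)  # rough nose tip

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nose, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, nose, None)
    if status[0][0] == 1:
        nx, ny = nose[0, 0]
        # Re-anchor the face box to the tracked nose tip so it follows head motion;
        # face_box would be handed to the recognition stage.
        face_box = (int(nx - w / 2), int(ny - 0.6 * h), int(w), int(h))
    prev_gray = gray
```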


Information, 2020, Vol 11 (8), pp. 380
Author(s): Boyu Chen, Zhihao Zhang, Nian Liu, Yang Tan, Xinyu Liu, ...

A micro-expression is defined as an uncontrollable muscular movement that appears on a person's face when they are trying to conceal or repress their true emotions. Many researchers have applied deep learning frameworks to micro-expression recognition in recent years, but few have introduced the human visual attention mechanism into this task. In this study, we propose a three-dimensional (3D) spatiotemporal convolutional neural network with a convolutional block attention module (CBAM) for micro-expression recognition. First, image sequences are fed into a medium-sized convolutional neural network (CNN) to extract visual features; the network then learns to allocate feature weights adaptively with the help of the convolutional block attention module. The method was tested on spontaneous micro-expression databases (Chinese Academy of Sciences Micro-expression II (CASME II) and the Spontaneous Micro-expression Database (SMIC)). The experimental results show that the 3D CNN with the convolutional block attention module outperformed other algorithms in micro-expression recognition.
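The combination of a 3D CNN with CBAM-style attention can be sketched compactly in PyTorch. The block below is a shape-level illustration under assumed channel counts and a 5-way output; the paper's network is deeper, and the class names are hypothetical.

```python
# Condensed sketch: a small 3D CNN backbone followed by CBAM-style channel
# and spatial attention over (B, C, T, H, W) feature maps.
import torch
import torch.nn as nn

class Cbam3d(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv3d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                        # x: (B, C, T, H, W)
        b, c = x.shape[:2]
        avg = x.mean(dim=(2, 3, 4))              # channel attention from pooled stats
        mx = x.amax(dim=(2, 3, 4))
        ch = torch.sigmoid(self.mlp(avg) + self.mlp(mx)).view(b, c, 1, 1, 1)
        x = x * ch
        sp = torch.cat([x.mean(dim=1, keepdim=True),
                        x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(sp))

class MicroExprNet(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            Cbam3d(64))
        self.head = nn.Linear(64, n_classes)

    def forward(self, clips):                    # clips: (B, 1, T, H, W)
        feats = self.backbone(clips).mean(dim=(2, 3, 4))   # global average pool
        return self.head(feats)
```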


Sensors, 2019, Vol 19 (24), pp. 5553
Author(s): Yue Zhao, Jiancheng Xu

Human beings are particularly inclined to express real emotions through micro-expressions of subtle amplitude and short duration. Though people regularly recognize many distinct emotions, research has for the most part been limited to six basic categories: happiness, surprise, sadness, anger, fear, and disgust. As with normal expressions (i.e., macro-expressions), most current research on micro-expression recognition focuses on these six basic emotions. This paper describes an important group of micro-expressions, which we call compound emotion categories. Compound micro-expressions are constructed by combining two basic micro-expressions and reflect more complex mental states and a richer range of human facial emotions. In this study, we first synthesized a Compound Micro-expression Database (CMED) based on existing spontaneous micro-expression datasets. The subtle features of micro-expressions make their motion tracks and characteristics difficult to observe, so synthesizing compound micro-expression images poses many challenges and limitations. The proposed method first applies the Eulerian Video Magnification (EVM) method to enhance the facial motion features of basic micro-expressions before generating compound images. The consistent and differential facial muscle articulations (typically referred to as action units) associated with each emotion category are labeled and serve as the foundation for generating compound micro-expressions. Secondly, we extract the apex frames of CMED by a 3D Fast Fourier Transform (3D-FFT). The method then computes the optical flow between the onset frame and the apex frame to produce an optical flow feature map. Finally, we design a shallow network to extract high-level features from these optical flow maps. In this study, we combined four existing spontaneous micro-expression databases (CASME I, CASME II, CAS(ME)2, SAMM) to generate the CMED and test the validity of our network. The results show that the designed deep network framework can recognize the emotional information of both basic and compound micro-expressions.
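Two intermediate steps, apex spotting and the onset-to-apex optical-flow map, can be illustrated with a simplified sketch. The paper spots the apex with a 3D fast Fourier transform; the version below uses a plain frame-difference criterion purely as a stand-in, and the Farneback flow call is an assumed choice of estimator.

```python
# Simplified stand-in: pick the frame most changed from the onset frame as
# the apex, then compute an onset-to-apex optical-flow feature map.
import cv2
import numpy as np

def spot_apex(frames):
    """frames: list of grayscale images; return the index of the most changed frame."""
    onset = frames[0].astype(float)
    diffs = [np.abs(f.astype(float) - onset).mean() for f in frames]
    return int(np.argmax(diffs))

def onset_apex_flow(frames):
    apex = frames[spot_apex(frames)]
    flow = cv2.calcOpticalFlowFarneback(frames[0], apex, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return np.stack([mag, ang], axis=-1)        # per-pixel flow magnitude and angle
```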


Symmetry, 2019, Vol 11 (4), pp. 497
Author(s): Yue Zhao, Jiancheng Xu

A micro-expression is a spontaneous emotional representation that is not controlled by logic. It is both transitory (short duration) and subtle (small intensity), so it is difficult to detect. Micro-expression detection is widely used in psychological analysis, criminal justice, and human-computer interaction. Like traditional facial expressions, micro-expressions involve local muscle movement, and psychologists have shown that micro-expressions have necessary morphological patches (NMPs) that are triggered by emotion. The objective of this paper is to sort and filter these NMPs and extract features from them to train classifiers that recognize micro-expressions. Firstly, we use the optical flow method to compare the onset frame and the apex frame of each micro-expression sequence, which reveals the facial active patches. Secondly, to find the NMPs, this study calculates local binary patterns from three orthogonal planes (LBP-TOP) and cascades them with optical flow histograms to form fusion features for the active patches. Finally, a random forest feature selection (RFFS) algorithm identifies the NMPs, which are then classified with a support vector machine (SVM). We evaluated the proposed method on two popular publicly available databases, CASME II and SMIC. Results show that the statistically determined NMPs provide significant discriminative ability compared with holistic use of all facial regions.
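The patch-level LBP-TOP features feeding an SVM can be sketched as follows, assuming scikit-image and scikit-learn. Patch extraction, histogram sizes, and the commented training line are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: LBP histograms from the XY, XT and YT planes of a patch
# sequence, concatenated into an LBP-TOP feature and classified with an SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_hist(img, bins=59):
    codes = local_binary_pattern(img, P=8, R=1, method="nri_uniform")
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
    return hist

def lbp_top(volume):
    """volume: (T, H, W) patch sequence; histograms from three orthogonal planes."""
    t, h, w = volume.shape
    xy = lbp_hist(volume[t // 2])           # spatial texture at the middle frame
    xt = lbp_hist(volume[:, h // 2, :])     # horizontal motion plane
    yt = lbp_hist(volume[:, :, w // 2])     # vertical motion plane
    return np.concatenate([xy, xt, yt])

# train_patches: list of (T, H, W) arrays cut from active facial patches,
# train_labels: emotion labels -- both assumed to be prepared elsewhere.
# clf = SVC(kernel="linear").fit([lbp_top(p) for p in train_patches], train_labels)
```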


2018, Vol 4 (10), pp. 119
Author(s): Adrian Davison, Walied Merghani, Moi Yap

Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different from normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We show that classifying expressions by Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow), and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement). The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the state-of-the-art 5-class emotion-based classification on CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.
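The core re-labelling idea, grouping samples by their FACS Action Units rather than by self-reported emotion, can be illustrated with a toy function. The AU groups below are invented for the example; the paper defines its own objective groupings.

```python
# Toy illustration: map a FACS annotation string to an AU-based class label.
def au_class(au_string: str) -> str:
    """au_string like 'AU6+AU12' or '4+7' from a FACS-coded dataset annotation."""
    aus = {a.strip().upper().lstrip("AU")
           for a in au_string.replace("+", ",").split(",")}
    if {"6", "12"} & aus:
        return "class_1"        # e.g. cheek raiser / lip corner puller
    if {"1", "2", "5"} & aus:
        return "class_2"        # e.g. brow raisers / upper lid raiser
    if "4" in aus:
        return "class_3"        # brow lowerer
    return "class_other"
```

Training labels produced this way are objective in the sense that they depend only on coded muscle movements, which is the bias-removal argument the abstract makes.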


2018, Vol 8 (10), pp. 1811
Author(s): Yue Zhao, Jiancheng Xu

Micro-expressions are usually subtle and brief facial expressions that humans use to hide their true emotional states. In recent years, micro-expression recognition has attracted wide attention in psychology, mass media, and computer vision. The shortest micro-expression lasts only 1/25 s, and, unlike macro-expressions, micro-expressions have considerably lower intensity and involve inadequate contraction of the facial muscles. Because of these characteristics, automatic micro-expression detection and recognition are great challenges in computer vision. In this paper, we propose a novel automatic facial expression recognition framework based on necessary morphological patches (NMPs) to better detect and identify micro-expressions. A micro-expression is a subconscious facial muscle response that is not controlled by rational thought; it therefore involves only a few facial muscles and has local properties. NMPs are the facial regions that must be involved when a micro-expression occurs. NMPs are screened by weighting the facial active patches rather than using the entire facial area holistically. Firstly, we manually define the active facial patches according to facial landmark coordinates and the Facial Action Coding System (FACS). Secondly, we use an LBP-TOP descriptor to extract features in these patches and the Entropy-Weight method to select the NMPs. Finally, we obtain the weighted LBP-TOP features of these NMPs. We test on two recent publicly available datasets, CASME II and SMIC, which provide sufficient samples. Compared with many recent state-of-the-art approaches, our method achieves more promising recognition results.
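The entropy-weight step used to rank facial patches can be illustrated with a compact sketch: patches whose activation statistics vary more informatively across samples receive higher weights. The construction of the score matrix is assumed, not taken from the paper.

```python
# Compact sketch of the entropy-weight method for ranking candidate patches.
import numpy as np

def entropy_weights(scores: np.ndarray) -> np.ndarray:
    """scores: (n_samples, n_patches) non-negative activation statistics."""
    p = scores / (scores.sum(axis=0, keepdims=True) + 1e-12)
    n = scores.shape[0]
    entropy = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(n)
    diversity = 1.0 - entropy                  # low entropy -> more discriminative
    return diversity / diversity.sum()

# Example: keep the top-k patches as candidate NMPs (patch_scores and k assumed).
# w = entropy_weights(patch_scores); nmp_idx = np.argsort(w)[::-1][:k]
```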


PLoS ONE, 2014, Vol 9 (1), pp. e86041
Author(s): Wen-Jing Yan, Xiaobai Li, Su-Jing Wang, Guoying Zhao, Yong-Jin Liu, ...
