Objective Classes for Micro-Facial Expression Recognition

2018 ◽  
Vol 4 (10) ◽  
pp. 119 ◽  
Author(s):  
Adrian Davison ◽  
Walied Merghani ◽  
Moi Yap

Micro-expressions are brief, spontaneous facial expressions that appear when a person conceals an emotion, making them different from normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow) and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement). The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the state-of-the-art 5-class emotion-based classification on CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.
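The paper's central move, relabelling samples by Action Units instead of self-reported emotion, can be pictured as a simple relabelling step. The AU-to-class mapping and sample records below are purely illustrative; the actual proposed classes are defined in the paper.

```python
# Illustrative relabelling of micro-expression samples by Action Units (AUs)
# instead of self-reported emotion. The AU groupings here are hypothetical,
# not the classes proposed in the paper.
AU_CLASS_MAP = {
    "AU6+AU12": "class_1",   # cheek raiser + lip corner puller
    "AU4": "class_2",        # brow lowerer
    "AU1+AU2": "class_3",    # inner + outer brow raiser
}

def relabel(samples):
    """Replace each sample's self-reported emotion label with an AU-derived class."""
    return [
        {**s, "label": AU_CLASS_MAP.get(s["aus"], "other")}
        for s in samples
    ]

samples = [
    {"id": 1, "aus": "AU6+AU12", "label": "happiness"},   # self-report kept only as input
    {"id": 2, "aus": "AU4", "label": "repression"},
]
print(relabel(samples))
```

Training then proceeds on the AU-derived labels, so two coders who disagree about the felt emotion but agree on the observed muscle movements produce the same class.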

2018 ◽  
Vol 8 (10) ◽  
pp. 1811 ◽  
Author(s):  
Yue Zhao ◽  
Jiancheng Xu

Micro-expressions are usually subtle and brief facial expressions that humans use to hide their true emotional states. In recent years, micro-expression recognition has attracted wide attention in psychology, mass media, and computer vision. The shortest micro-expression lasts only 1/25 s. Furthermore, unlike macro-expressions, micro-expressions have considerably lower intensity and inadequate contraction of the facial muscles. Because of these characteristics, automatic micro-expression detection and recognition are great challenges in computer vision. In this paper, we propose a novel automatic facial expression recognition framework based on necessary morphological patches (NMPs) to better detect and identify micro-expressions. A micro-expression is a subconscious facial muscle response that is not controlled by the rational thought of the brain; it therefore involves only a few facial muscles and has local properties. NMPs are the facial regions that must be involved when a micro-expression occurs. The NMPs were screened by weighting the facial active patches instead of using the entire facial area holistically. Firstly, we manually define the active facial patches according to the facial landmark coordinates and the Facial Action Coding System (FACS). Secondly, we use an LBP-TOP descriptor to extract features in these patches and the Entropy-Weight method to select the NMPs. Finally, we obtain the weighted LBP-TOP features of these NMPs. We test on two recent publicly available datasets, CASME II and SMIC, which provide sufficient samples. Compared with many recent state-of-the-art approaches, our method achieves more promising recognition results.
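The Entropy-Weight selection step can be sketched as follows; the scoring scheme and array shapes are illustrative assumptions, not the paper's exact formulation. Patches whose responses vary strongly across samples have low information entropy and therefore receive high weights.

```python
import numpy as np

def entropy_weights(scores):
    """Entropy-Weight method (sketch): patches whose responses diverge
    more across samples get lower entropy and hence higher weight.
    scores: (n_samples, n_patches) array of non-negative responses."""
    p = scores / scores.sum(axis=0, keepdims=True)      # per-patch distribution
    p = np.clip(p, 1e-12, None)
    n = scores.shape[0]
    entropy = -(p * np.log(p)).sum(axis=0) / np.log(n)  # normalised to [0, 1]
    d = 1.0 - entropy                                   # degree of divergence
    return d / d.sum()                                  # weights sum to 1

rng = np.random.default_rng(0)
scores = rng.random((50, 6)) + 0.5   # six hypothetical candidate patches
scores[:, 2] = 1.0                   # an uninformative, constant patch
scores[0, 0] = 100.0                 # a patch with one strong activation
w = entropy_weights(scores)
print(np.argmax(w), np.argmin(w))    # informative patch up, constant patch down
```

The constant patch yields a uniform distribution (maximum entropy, weight near zero), while the patch with a strong isolated response gets the largest weight, which is the intuition behind keeping only the NMPs.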


2011 ◽  
pp. 255-317 ◽  
Author(s):  
Daijin Kim ◽  
Jaewon Sung

The facial expression has long been of interest to psychology, since Darwin published The Expression of the Emotions in Man and Animals (Darwin, C., 1899). Psychologists have worked to reveal the role and mechanism of the facial expression. One of Darwin's great discoveries is that prototypical facial expressions exist across multiple cultures, which provided the theoretical background for vision researchers who tried to classify the categories of prototypical facial expressions from images. The six representative facial expressions are fear, happiness, sadness, surprise, anger, and disgust (Mase, 1991; Yacoob and Davis, 1994). On the other hand, the real facial expressions we meet in daily life consist of many distinct, subtly different signals. Further research on facial expressions required an objective method to describe and measure the distinct activity of facial muscles. The facial action coding system (FACS), proposed by Hager and Ekman (1978), defines 46 distinct action units (AUs), each of which describes the activity of a distinct muscle or muscle group. The development of this objective description method also influenced vision researchers, who tried to detect the emergence of each AU (Tian et al., 2001).


Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 497 ◽  
Author(s):  
Yue Zhao ◽  
Jiancheng Xu

Micro-expression is a spontaneous emotional representation that is not controlled by logic. A micro-expression is both transitory (short duration) and subtle (small intensity), so it is difficult to detect in people. Micro-expression detection is widely used in psychological analysis, criminal justice and human-computer interaction. Like traditional facial expressions, micro-expressions involve local muscle movement. Psychologists have shown that micro-expressions have necessary morphological patches (NMPs), which are triggered by emotion. The objective of this paper is to sort and filter these NMPs and extract features from them to train classifiers to recognize micro-expressions. Firstly, we use the optical flow method to compare the onset frame and the apex frame of each micro-expression sequence, which lets us find active facial patches. Secondly, to find the NMPs of micro-expressions, we calculate local binary pattern from three orthogonal planes (LBP-TOP) operators and cascade them with optical flow histograms to form the fusion features of the active patches. Finally, a random forest feature selection (RFFS) algorithm is used to identify the NMPs, which are then characterized via a support vector machine (SVM) classifier. We evaluated the proposed method on two popular publicly available databases: CASME II and SMIC. Results show that the NMPs are statistically determined and contribute significant discriminant ability, compared with holistic utilization of all facial regions.
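The first step, comparing the onset and apex frames to locate active facial patches, can be sketched with a crude per-patch motion score. A real implementation would use dense optical flow; here a mean absolute intensity difference stands in as a simplified proxy, and the grid size is an assumption.

```python
import numpy as np

def active_patches(onset, apex, grid=4, top_k=3):
    """Crude stand-in for optical-flow-based patch activity: score each
    cell of a grid by mean absolute intensity change between the onset
    and apex frames, and return the most active cells."""
    h, w = onset.shape
    ph, pw = h // grid, w // grid
    scores = {}
    for i in range(grid):
        for j in range(grid):
            a = onset[i*ph:(i+1)*ph, j*pw:(j+1)*pw].astype(float)
            b = apex[i*ph:(i+1)*ph, j*pw:(j+1)*pw].astype(float)
            scores[(i, j)] = np.abs(b - a).mean()
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

onset = np.zeros((64, 64))
apex = onset.copy()
apex[0:16, 0:16] += 50.0        # simulated brow movement in the top-left cell
print(active_patches(onset, apex, top_k=1))
```

The selected cells would then be the candidate patches whose LBP-TOP and optical-flow-histogram features are fused in the next step.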


2020 ◽  
pp. 59-69
Author(s):  
Walid Mahmod ◽  
Jane Stephan ◽  
Anmar Razzak

Automatic analysis of facial expressions is rapidly becoming an area of intense interest in the computer vision and artificial intelligence research communities. In this paper, an approach is presented for facial expression recognition of the six basic prototype expressions (joy, surprise, anger, sadness, fear, and disgust) based on the Facial Action Coding System (FACS). The approach utilizes a combination of transforms (the Walidlet hybrid transform), consisting of the Fast Fourier Transform, the Radon transform and the multiwavelet transform, for feature extraction. A Kohonen Self-Organizing Feature Map (SOFM) is then used to cluster patterns based on the features obtained from the hybrid transform. The results show that the method has very good accuracy in facial expression recognition, and it has several further promising features. The approach provides a new method of feature extraction that overcomes problems of illumination and of faces that vary considerably between individuals due to age, ethnicity, gender and cosmetics; it also does not require precise normalization or lighting equalization. An average clustering accuracy of 94.8% is achieved for the six basic expressions, with different databases used to test the method.
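A heavily simplified sketch of hybrid-transform feature extraction: low-frequency FFT magnitudes plus two axis-aligned Radon projections (a full Radon transform samples many angles, and the multiwavelet stage is omitted entirely). Normalising the feature vector illustrates one way such features can be made less sensitive to global illumination; all sizes here are assumptions.

```python
import numpy as np

def hybrid_features(img):
    """Toy hybrid-transform feature vector: coarse FFT magnitudes plus
    Radon projections at 0 and 90 degrees only. The real method uses a
    full Radon transform and a multiwavelet stage as well."""
    f = np.abs(np.fft.fft2(img))
    low_freq = f[:4, :4].ravel()                  # coarse spectral energy
    proj0 = img.sum(axis=0)                       # Radon projection at 0 degrees
    proj90 = img.sum(axis=1)                      # Radon projection at 90 degrees
    feat = np.concatenate([low_freq, proj0, proj90])
    return feat / (np.linalg.norm(feat) + 1e-12)  # illumination-robust scaling

rng = np.random.default_rng(0)
img = rng.random((32, 32))      # stand-in for a cropped face image
feat = hybrid_features(img)
print(feat.shape)               # 16 spectral + 32 + 32 projection values
```

Feature vectors of this kind would then be fed to the SOFM, which clusters them without requiring labelled training pairs.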


2018 ◽  
Vol 7 (3.20) ◽  
pp. 284
Author(s):  
Hamimah Ujir ◽  
Irwandi Hipiny ◽  
D N.F. Awang Iskandar

Most work in quantifying facial deformation is based on the action units (AUs) provided by the Facial Action Coding System (FACS), which describes facial expressions in terms of forty-six component movements. An AU corresponds to the movement of an individual facial muscle. This paper presents a rule-based approach to classifying AUs that depends on certain facial features. The work covers deformation of facial features based on posed Happy and Sad expressions obtained from the BU-4DFE database. Different studies refer to different combinations of AUs that form the Happy and Sad expressions. According to the FACS rules outlined in this work, an AU has more than one facial property that needs to be observed. An intensity comparison and analysis of the AUs involved in the Sad and Happy expressions are presented. Additionally, dynamic analysis of the AUs is performed to determine the temporal segments of the expressions, i.e., the durations of the onset, apex and offset phases. Our findings show that AU15 (for the Sad expression) and AU12 (for the Happy expression) exhibit consistent facial feature deformation across all properties during the expression period. For AU1 and AU4, however, the properties' intensities differ during the expression period.
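A rule with multiple observable properties might look like the following sketch; the property names and thresholds are hypothetical, not the rules defined in the paper.

```python
# Hypothetical rule-based AU checks: each AU requires several facial-feature
# properties to hold at once. Property names and thresholds are illustrative.
def detect_au12(features):
    """AU12 (lip corner puller, typical of Happy): lip corners move up
    and the mouth widens relative to the neutral frame."""
    return (features["lip_corner_dy"] < -2.0       # upward displacement (pixels)
            and features["mouth_width_ratio"] > 1.05)

def detect_au15(features):
    """AU15 (lip corner depressor, typical of Sad): lip corners move down."""
    return features["lip_corner_dy"] > 2.0

frame = {"lip_corner_dy": -3.1, "mouth_width_ratio": 1.12}
print(detect_au12(frame), detect_au15(frame))
```

Evaluating such rules frame by frame also yields the temporal segmentation: the onset is the span where the rule first starts to hold, the apex where its measurements peak, and the offset where it ceases to hold.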


Author(s):  
Xinyu Li ◽  
Guangshun Wei ◽  
Jie Wang ◽  
Yuanfeng Zhou

Micro-expression recognition is a substantive cross-disciplinary study of psychology and computer science, and it has a wide range of applications (e.g., psychological and clinical diagnosis, emotional analysis, criminal investigation). However, the subtle and diverse changes in facial muscles make it difficult for existing methods to extract effective features, which limits the improvement of micro-expression recognition accuracy. Therefore, we propose a multi-scale joint feature network based on optical flow images for micro-expression recognition. First, we generate an optical flow image that reflects subtle facial motion information. The optical flow image is then fed into the multi-scale joint network for feature extraction and classification. The proposed joint feature module (JFM) integrates features from different layers, which is beneficial for capturing micro-expression features of different amplitudes. To improve the recognition ability of the model, we also adopt a strategy of fusing the feature prediction results of the three JFMs with the backbone network. Our experimental results show that our method is superior to state-of-the-art methods on three benchmark datasets (SMIC, CASME II, and SAMM) and a combined dataset (3DB).
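The idea behind the joint feature module, combining features from different network depths so that both small- and large-amplitude motion cues survive, can be illustrated with a toy numpy fusion; the actual JFM operates on convolutional feature maps inside the network, and all shapes here are assumptions.

```python
import numpy as np

def joint_feature(shallow, deep):
    """Toy joint feature module (JFM): pool features from two network
    depths to a common length and concatenate them, so fine-grained and
    coarse motion cues both contribute to the final descriptor."""
    def pool(x, size):
        # average-pool a 1-D feature map down to `size` bins
        return x.reshape(size, -1).mean(axis=1)
    return np.concatenate([pool(shallow, 8), pool(deep, 8)])

shallow = np.arange(32, dtype=float)   # e.g. an early, high-resolution layer
deep = np.arange(16, dtype=float)      # e.g. a later, coarser layer
fused = joint_feature(shallow, deep)
print(fused.shape)
```

In the paper's design, three such fused descriptors are additionally combined with the backbone's own prediction, a form of late fusion across scales.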


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245117
Author(s):  
Catia Correia-Caeiro ◽  
Kathryn Holmes ◽  
Takako Miyabe-Nishiwaki

Facial expressions are complex and subtle signals, central to communication and emotion in social mammals. Traditionally, facial expressions have been classified as a whole, disregarding small but relevant differences in displays. Even with the same morphological configuration, different information can be conveyed depending on the species. Owing to hardwired face processing in the human brain, humans are quick to attribute emotion but have difficulty registering facial movement units. The well-known human FACS (Facial Action Coding System) is the gold standard for objectively measuring facial expressions, and it can be adapted, through anatomical investigation and functional homologies, for cross-species systematic comparisons. Here we aimed to develop a FACS for Japanese macaques, following established FACS methodology: first, we considered the species' muscular facial plan; second, we ascertained functional homologies with other primate species; and finally, we categorised each independent facial movement into Action Units (AUs). Owing to similarities in the facial musculature of rhesus and Japanese macaques, the MaqFACS (previously developed for rhesus macaques) was used as a basis for extending the FACS tool to Japanese macaques, while highlighting the morphological and appearance-change differences between the two species. We documented 19 AUs, 15 Action Descriptors (ADs) and 3 Ear Action Units (EAUs) in Japanese macaques, with all movements of MaqFACS found in Japanese macaques. New movements were also observed, indicating a slightly larger repertoire than in rhesus or Barbary macaques. The MaqFACS extension for Japanese macaques reported here, used together with the MaqFACS, comprises a valuable objective tool for the systematic and standardised analysis of facial expressions in Japanese macaques.
The MaqFACS extension for Japanese macaques will now allow the investigation of the evolution of communication and emotion in primates, as well as contribute to improving the welfare of individuals, particularly in captivity and laboratory settings.


2020 ◽  
Vol 13 (6) ◽  
pp. 546-559
Author(s):  
Ulla Rosiani ◽  
Priska Choirina ◽  
Niyalatul Muna ◽  
Eko Mulyanto ◽  
...  

A micro-expression is a brief expression that leaks when a person tries to hold back or hide an emotion; the leakage occurs in one or two areas of the face, or as a short expression across the whole face. Lasting no more than 500 ms, micro-expressions can be difficult to recognize, and it is hard to detect where the leakage area is located. This study presents a new method to recognize and detect subtle motion in the facial component areas using a Phase Only Correlation algorithm with All Block Search (POC-ABS) to estimate the motion of all block areas. The block-matching method compares each block in two frames to determine whether there is movement. If the two blocks are identical, no motion vector value is produced; if they are non-identical, the POC motion vector value is produced. The motion vector, serving as a motion feature, estimates whether there is movement in the corresponding block. To further confirm the reliability of the proposed method, two different classifiers were used for micro-expression recognition on the CASME II dataset. The highest performance results are 94.3 percent for SVM and 95.6 percent for KNN. Finally, the algorithm detects leaks of motion based on the ratio of the motion vectors. The left and right eyebrows are dominant when expressing disgust, sadness, and surprise, while the movements of the right and left eyes are most dominant for the happiness expression.
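Phase-only correlation itself is a standard FFT-domain technique: normalising the cross-power spectrum to unit magnitude keeps only phase, and the inverse transform then peaks at the translation between two blocks. A minimal sketch, with the block contents purely illustrative:

```python
import numpy as np

def poc(block_a, block_b):
    """Phase-only correlation between two equally sized blocks: the peak
    of the POC surface gives the translation of block_a relative to
    block_b; a non-zero peak location indicates motion."""
    fa, fb = np.fft.fft2(block_a), np.fft.fft2(block_b)
    cross = fa * np.conj(fb)
    r = np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
    peak = np.unravel_index(np.argmax(r), r.shape)
    # wrap peak coordinates into signed shifts
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, r.shape)]
    return tuple(shift)

a = np.zeros((16, 16)); a[4:8, 4:8] = 1.0
b = np.roll(a, shift=(2, 3), axis=(0, 1))   # block moved down 2, right 3
print(poc(b, a))
```

In an all-block search, this comparison would be repeated for every block in the frame pair; identical blocks return a zero shift, while non-identical blocks return the motion vector used as the feature.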


2021 ◽  
Vol 2066 (1) ◽  
pp. 012023
Author(s):  
Qun Xia ◽  
Xiaofeng Ding

Abstract The 21st century is the era of big data: all aspects of society, from facial expressions to national defense, generate massive amounts of data. Facial expression recognition, a technology spawned by the big data era, has broad application prospects and is widely used in intelligent transportation, assisted medical care, distance education, interactive games and public safety. In recent years it has attracted growing scholarly attention and has become another research hotspot in computer vision and machine learning. The purpose of this article is to study facial micro-expression recognition algorithms based on big data, which can better handle the small changes involved in face recognition and the associated complex data processing. This paper first summarizes the basic theory of big data and its core technologies, analyzes the shortcomings of the current state of facial micro-expression research in China, and finally discusses research on facial micro-expression recognition algorithms based on big data. The article takes related companies' research on facial micro-expression recognition as its survey object and analyzes it through literature review, questionnaire survey, mathematical statistics and other research methods. Experimental results show that the lower the reduced dimensionality, the shorter the classification time; when the features are reduced to 45 dimensions, the facial expression recognition rate is highest.
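The reported dimensionality-reduction trade-off can be illustrated with a generic PCA projection; the 45-dimension figure comes from the article, while the feature sizes and the use of PCA specifically are assumptions of this sketch.

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors X (n_samples, n_features) onto their top-k
    principal components; a smaller k means cheaper downstream
    classification, at the risk of discarding discriminative variance."""
    Xc = X - X.mean(axis=0)
    # principal directions via SVD of the centred data, for stability
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

rng = np.random.default_rng(1)
X = rng.random((100, 128))      # stand-in for 128-D expression features
Z = pca_reduce(X, 45)           # reduce to 45 dimensions, as in the article
print(Z.shape)
```

Sweeping k and timing the classifier on Z is how one would reproduce the reported curve of classification time and recognition rate against dimensionality.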

