Sparse Spatiotemporal Descriptor for Micro-Expression Recognition Using Enhanced Local Cube Binary Pattern

Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4437
Author(s):  
Shixin Cen ◽  
Yang Yu ◽  
Gang Yan ◽  
Ming Yu ◽  
Qing Yang

As a spontaneous facial expression, a micro-expression can reveal the psychological responses of human beings. Thus, micro-expression recognition is widely studied for its potential applications in clinical diagnosis, psychological research, and security. However, micro-expression recognition remains a formidable challenge due to the short duration and low intensity of the facial actions. In this paper, a sparse spatiotemporal descriptor for micro-expression recognition is developed using the Enhanced Local Cube Binary Pattern (Enhanced LCBP). The proposed Enhanced LCBP is composed of three complementary binary features: Spatial Difference Local Cube Binary Patterns (Spatial Difference LCBP), Temporal Direction Local Cube Binary Patterns (Temporal Direction LCBP), and Temporal Gradient Local Cube Binary Patterns (Temporal Gradient LCBP). Together, these provide binary features with spatiotemporal complementarity for capturing subtle facial changes. In addition, because redundant information among the division grids weakens the descriptor's ability to distinguish micro-expressions, Multi-Regional Joint Sparse Learning is designed to perform feature selection over the grids, focusing attention on the critical local regions. Finally, a Multi-kernel Support Vector Machine (SVM) is employed to fuse the selected features for the final classification. The proposed method achieves promising results on four spontaneous micro-expression datasets, and parameter evaluation and confusion-matrix analysis further confirm its effectiveness.
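The core idea of a cube-based binary pattern can be illustrated with a minimal sketch. The snippet below is an assumed simplification rather than the paper's exact Enhanced LCBP (it omits the spatial-difference, temporal-direction, and temporal-gradient variants and the multi-kernel fusion): it simply thresholds the 26 neighbors of a 3x3x3 spatiotemporal cube against its center voxel; histograms of such codes over the division grids would then form a descriptor.

```python
import numpy as np

def local_cube_binary_pattern(volume, t, y, x):
    """Binary code for one voxel of a (frames, height, width) grayscale volume.

    Illustrative simplification: threshold the 26 neighbours of a 3x3x3
    spatiotemporal cube against its centre voxel. The paper's Enhanced LCBP
    instead combines three complementary codes (spatial difference, temporal
    direction, temporal gradient).
    """
    center = volume[t, y, x]
    code, bit = 0, 0
    for dt in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dt == dy == dx == 0:
                    continue
                code |= int(volume[t + dt, y + dy, x + dx] >= center) << bit
                bit += 1
    return code

clip = np.random.randint(0, 256, size=(20, 64, 64))
print(local_cube_binary_pattern(clip, 5, 10, 10))  # 26-bit code for one voxel
```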

2019 ◽  
Vol 29 (01) ◽  
pp. 2050006 ◽  
Author(s):  
Qiuyu Li ◽  
Jun Yu ◽  
Toru Kurihara ◽  
Haiyan Zhang ◽  
Shu Zhan

A micro-expression is a brief facial movement that cannot be consciously controlled; it indicates that a person is deliberately hiding his or her true emotion. Micro-expression recognition has various potential applications in public security and clinical medicine. Research has focused on automatic micro-expression recognition because micro-expressions are difficult for people to recognize unaided. This research proposes a novel algorithm for automatic micro-expression recognition that combines a deep multi-task convolutional network for detecting facial landmarks with a fused deep convolutional network for estimating the optical flow features of the micro-expression. First, the deep multi-task convolutional network is employed to detect facial landmarks, together with manifold-related tasks, in order to divide the facial region. A fused convolutional network is then applied to extract optical flow features from the facial regions that contain muscle changes when the micro-expression appears. Because each video clip has many frames, the original optical flow features of the whole clip have a high dimensionality and redundant information, so this research revises the optical flow features to reduce the redundant dimensions. Finally, the revised optical flow features refine the feature information, and a support vector machine classifier is adopted to recognize the micro-expression. The main contributions of this work are combining the deep multi-task learning network with the fused optical flow network for micro-expression recognition and revising the optical flow features to reduce the redundant dimensions. Experiments on two spontaneous micro-expression databases show that the method achieves competitive performance in micro-expression recognition.
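As a rough illustration of the flow-feature idea, the sketch below uses OpenCV's classical Farneback optical flow as a stand-in for the paper's fused deep flow network, pools the flow on a coarse grid, and averages over time as a crude form of the dimension reduction described above; the function name, grid size, and pooling choices are assumptions.

```python
import cv2
import numpy as np

def clip_flow_feature(gray_frames, grid=(6, 6)):
    """Fixed-length optical-flow feature for one clip (illustrative stand-in).

    gray_frames: list of 8-bit grayscale frames of equal size.
    Farneback flow replaces the paper's fused deep flow network; grid pooling
    plus temporal averaging replaces its redundancy-reduction step.
    """
    per_frame = []
    for prev, curr in zip(gray_frames[:-1], gray_frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = flow.shape[:2]
        gh, gw = h // grid[0], w // grid[1]
        pooled = [flow[i*gh:(i+1)*gh, j*gw:(j+1)*gw].mean(axis=(0, 1))
                  for i in range(grid[0]) for j in range(grid[1])]
        per_frame.append(np.concatenate(pooled))
    return np.mean(per_frame, axis=0)   # one vector per clip, fed to the SVM
```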


Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 497 ◽  
Author(s):  
Yue Zhao ◽  
Jiancheng Xu

A micro-expression is a spontaneous emotional expression that is not consciously controlled. It is both transitory (short in duration) and subtle (small in intensity), so it is difficult for people to detect. Micro-expression detection is widely used in psychological analysis, criminal justice, and human-computer interaction. Like traditional facial expressions, micro-expressions involve local muscle movements, and psychologists have shown that micro-expressions have necessary morphological patches (NMPs) that are triggered by emotion. The objective of this paper is to sort and filter these NMPs and to extract features from them in order to train classifiers to recognize micro-expressions. Firstly, we use the optical flow method to compare the onset frame and the apex frame of the micro-expression sequences, which reveals the facial active patches. Secondly, to find the NMPs of micro-expressions, this study calculates local binary pattern from three orthogonal planes (LBP-TOP) operators and cascades them with optical flow histograms to form the fused features of the active patches. Finally, a random forest feature selection (RFFS) algorithm is used to identify the NMPs, which are then characterized via a support vector machine (SVM) classifier. We evaluated the proposed method on two popular publicly available databases: CASME II and SMIC. Results show that the NMPs are statistically determined and provide significant discriminative ability, in contrast to holistic use of all facial regions.
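A minimal sketch of the selection-plus-classification stage, assuming the fused LBP-TOP and optical-flow-histogram features are already computed per patch: scikit-learn's importance-based SelectFromModel approximates the RFFS step described above, followed by an SVM.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X: fused LBP-TOP + optical-flow-histogram features per active patch
# (assumed precomputed); y: micro-expression labels. Hyperparameters are
# placeholders, not the paper's settings.
nmp_classifier = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),
    SVC(kernel="linear", C=1.0),
)
# nmp_classifier.fit(X_train, y_train)
# accuracy = nmp_classifier.score(X_test, y_test)
```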


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-16
Author(s):  
Afan Hasan ◽  
Oya Kalıpsız ◽  
Selim Akyokuş

Although the vast majority of fundamental analysts believe that technical analysts' estimates, and the technical indicators used in these analyses, are ineffective, recent research has revealed that both professionals and individual traders use technical indicators. Correctly estimating the direction of a financial market is very challenging, primarily due to the nonlinear nature of financial time series. Deep learning and machine learning methods, on the other hand, have achieved very successful results in many areas that challenge human beings. In this study, technical indicators were integrated into deep learning and machine learning methods, and trader behavior was modeled, in order to increase the accuracy of forecasting the direction of the financial market. A set of technical indicators, chosen for their use in technical analysis, was examined as input features to predict the oncoming (one-period-ahead) direction of the Istanbul Stock Exchange (BIST100) national index. To predict the direction of the index, Deep Neural Network (DNN), Support Vector Machine (SVM), Random Forest (RF), and Logistic Regression (LR) classification techniques are used. The performance of these models is evaluated using various metrics such as the confusion matrix, compound return, and maximum drawdown.
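For concreteness, here is a minimal sketch of how such indicator features and a one-period-ahead direction label might be built from a closing-price series; the indicator set, periods, and model hyperparameters below are illustrative assumptions, not the study's exact configuration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def rsi(close, period=14):
    """Relative Strength Index, a common technical-analysis input feature."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(period).mean()
    loss = (-delta.clip(upper=0)).rolling(period).mean()
    return 100 - 100 / (1 + gain / loss)

def build_dataset(close):
    """Indicator features plus the one-period-ahead direction label."""
    df = pd.DataFrame({"close": close})
    df["sma_10"] = close.rolling(10).mean()
    df["momentum_10"] = close - close.shift(10)
    df["rsi_14"] = rsi(close)
    df["direction"] = (close.shift(-1) > close).astype(int)  # 1 = index rises
    return df.dropna()

# Classifiers of the kind compared in the study (the DNN is omitted for brevity).
models = {"LR": LogisticRegression(max_iter=1000),
          "RF": RandomForestClassifier(n_estimators=300),
          "SVM": SVC()}
```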


2020 ◽  
Vol 79 (41-42) ◽  
pp. 31451-31465
Author(s):  
Hang Pan ◽  
Lun Xie ◽  
Zeping Lv ◽  
Juan Li ◽  
Zhiliang Wang

Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5553
Author(s):  
Yue Zhao ◽  
Jiancheng Xu

Human beings are particularly inclined to express real emotions through micro-expressions, which have subtle amplitude and short duration. Though people regularly recognize many distinct emotions, research studies have for the most part been limited to six basic categories: happiness, surprise, sadness, anger, fear, and disgust. As with normal expressions (i.e., macro-expressions), most current research into micro-expression recognition focuses on these six basic emotions. This paper describes an important group of micro-expressions, which we call compound emotion categories. Compound micro-expressions are constructed by combining two basic micro-expressions and reflect more complex mental states and a richer range of human facial emotions. In this study, we first synthesized a Compound Micro-expression Database (CMED) based on existing spontaneous micro-expression datasets. The subtle features of micro-expressions make it difficult to observe their motion tracks and characteristics, so synthesizing compound micro-expression images involves many challenges and limitations. The proposed method first applies the Eulerian Video Magnification (EVM) method to enhance the facial motion features of basic micro-expressions when generating compound images. The consistent and differential facial muscle articulations (typically referred to as action units) associated with each emotion category were labeled as the foundation for generating compound micro-expressions. Secondly, we extracted the apex frames of the CMED by 3D Fast Fourier Transform (3D-FFT). The proposed method then calculated the optical flow between the onset frame and the apex frame to produce an optical flow feature map. Finally, we designed a shallow network to extract high-level features from these optical flow maps. In this study, we combined four existing spontaneous micro-expression databases (CASME I, CASME II, CAS(ME)2, SAMM) to generate the CMED and test the validity of our network. The results show that the deep network framework designed in this study can recognize the emotional information of both basic and compound micro-expressions well.
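As a rough illustration of frequency-based apex spotting, the sketch below suppresses slow temporal-frequency components of a clip with an FFT along the time axis and picks the frame with the most remaining high-frequency energy; this is an assumed simplification, not the paper's exact 3D-FFT procedure.

```python
import numpy as np

def apex_frame_index(clip, low_cut=1):
    """Pick an apex frame by temporal high-frequency energy (illustrative).

    clip: grayscale array of shape (frames, height, width).
    """
    spectrum = np.fft.fft(clip.astype(float), axis=0)
    spectrum[:low_cut + 1] = 0      # drop DC and the slowest components
    spectrum[-low_cut:] = 0         # and their conjugate counterparts
    residual = np.fft.ifft(spectrum, axis=0).real
    energy = (residual ** 2).sum(axis=(1, 2))   # per-frame motion energy
    return int(np.argmax(energy))
```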


2021 ◽  
pp. 1-17
Author(s):  
Shixin Cen ◽  
Yang Yu ◽  
Gang Yan ◽  
Ming Yu ◽  
Yanlei Kong

As a spontaneous facial expression, a micro-expression reveals the psychological responses of human beings. However, micro-expression recognition (MER) is highly susceptible to noise interference due to the short duration and low intensity of facial actions. Research on facial action coding systems explores the correlation between emotional states and facial actions, which provides more discriminative features. Therefore, building on this correlation information, the goal of our work is to propose a spatiotemporal network that is robust to low-intensity muscle movements for the MER task. Firstly, a multi-scale weighted module is proposed to encode the spatial global context, which is obtained by merging features of different resolutions preserved from the backbone network. Secondly, we propose a multi-task facial action learning module that uses the constraints of the correlation between muscle movements and micro-expressions to encode local action features. In addition, a clustering constraint term is introduced to restrict the feature distribution of similar actions and improve category separability in feature space. Finally, the global context and local action features are stacked as high-quality spatial descriptions and passed through a Convolutional Long Short-Term Memory (ConvLSTM) network to predict micro-expressions. Comparative experiments on the SMIC, CASME-I, and CASME-II datasets show that the proposed method outperforms other mainstream methods.
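A minimal Keras sketch of the final temporal stage only: per-frame stacked spatial descriptions (global context plus local action features) are passed through a ConvLSTM layer and classified. The input shape, layer sizes, and class count are assumptions for illustration, not the paper's configuration.

```python
import tensorflow as tf

NUM_CLASSES = 3   # e.g. positive / negative / surprise (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16, 28, 28, 64)),        # (time, H, W, channels)
    tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```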


Bones protect many organs in the human body, such as the lungs, brain, heart, and other internal organs. Bone fracture is a common problem in human beings and may occur due to high pressure applied to the bone as a result of an accident or other causes. The X-ray (radiograph) is a noninvasive medical examination that helps doctors diagnose medical conditions; X-rays are the oldest and most frequently used form of medical imaging. Medical image processing attempts to enhance the bone fracture diagnosis process by creating an automated system that can go through a large database of X-ray images and identify the required diagnosis faster and with higher accuracy than the conventional diagnosis process. In this paper, the lower leg bone (tibia) fracture is studied and several novel features are extracted using various image processing techniques. The purpose of this research is to use the newly investigated features to classify X-ray bone images as fractured or non-fractured, and to make the system more usable and accessible through a Graphical User Interface (GUI). The tibia bone fracture detection system was developed in three main steps: preprocessing; feature extraction using wavelet analysis, gradient analysis, principal component analysis (PCA), and edge detection; and classification using a Support Vector Machine (SVM). The results were evaluated using the four confusion-matrix outcomes: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). The classification process was repeated with one feature group at a time, and the resulting accuracies ranged between 70% and 80%.
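A brief sketch of the classification and evaluation stage, assuming the wavelet, gradient, and edge features have already been extracted into a feature matrix: PCA reduces the features and an SVM separates fractured from non-fractured images, with the four confusion-matrix outcomes recovered afterwards.

```python
from sklearn.decomposition import PCA
from sklearn.metrics import confusion_matrix
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X: feature vectors from the tibia X-ray images (assumed precomputed);
# y: labels, 1 = fractured, 0 = non-fractured. Component count is a placeholder.
clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
# clf.fit(X_train, y_train)
# tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
# accuracy = (tp + tn) / (tp + tn + fp + fn)
```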


2020 ◽  
Vol 15 ◽  
Author(s):  
Shuwen Zhang ◽  
Qiang Su ◽  
Qin Chen

Abstract: Major animal diseases pose a great threat to animal husbandry and to human beings. With the deepening of globalization and the abundance of data resources, the prediction and analysis of animal diseases using big data are becoming increasingly important. The focus of machine learning is to enable computers to learn from data and to use the learned experience for analysis and prediction. Firstly, this paper introduces the animal epidemic situation and machine learning. It then briefly reviews the application of machine learning to animal disease analysis and prediction. Machine learning is mainly divided into supervised learning and unsupervised learning. Supervised learning includes support vector machines, naive Bayes, decision trees, random forests, logistic regression, artificial neural networks, deep learning, and AdaBoost. Unsupervised learning includes the expectation-maximization algorithm, principal component analysis, hierarchical clustering, and MaxEnt. Through the discussion in this paper, readers can gain a clearer concept of machine learning and an understanding of its application prospects for animal diseases.


Author(s):  
Niha Kamal Basha ◽  
Aisha Banu Wahab

Absence seizure is a type of brain disorder in which the subject experiences sudden lapses in attention, corresponding to sudden changes in brain activity. This type of disorder is most widely found in children (5-18 years). The Electroencephalogram (EEG) signals are captured with a long-term monitoring system and analyzed individually. In this paper, a Convolutional Neural Network is proposed to strengthen the monitoring system through automatic detection of absence seizures: after pre-processing, single-channel EEG seizure features such as power, the log sum of the wavelet transform, cross-correlation, and the mean phase variance of each frame in a window are extracted and classified as normal or absence seizure. The training data are collected from normal and absence-seizure subjects in the form of electroencephalograms. The objective is to perform automatic detection of absence seizures using a single-channel electroencephalogram signal as input. These data are used to train the proposed Convolutional Neural Network to extract features and classify absence seizures. The Convolutional Neural Network consists of three layers: 1) a convolutional layer, which extracts the features in the form of a vector; 2) a pooling layer, which reduces the dimensionality of the convolutional layer's output; and 3) a fully connected layer, in which the softmax activation function is used to produce the probability distribution over the output classes. This paper describes the automatic detection of absence seizures in detail and provides a comparative analysis of classification between the Support Vector Machine and the Convolutional Neural Network. The proposed approach outperforms the Support Vector Machine by 80% in the automatic detection of absence seizures, as validated using the confusion matrix.
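A hedged Keras sketch of the three-layer architecture described above, applied to one single-channel EEG window; the window length, filter counts, and kernel sizes are assumptions rather than the paper's exact settings.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1024, 1)),                        # one EEG channel window
    tf.keras.layers.Conv1D(16, kernel_size=7, activation="relu"),  # feature extraction
    tf.keras.layers.MaxPooling1D(pool_size=4),                     # dimensionality reduction
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),                # normal vs. absence seizure
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```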

