3-Dimensional facial expression recognition in human using multi-points warping

2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Olalekan Agbolade ◽  
Azree Nazri ◽  
Razali Yaakob ◽  
Abdul Azim Ghani ◽  
Yoke Kqueen Cheah

Abstract Background Expression in H. sapiens plays a remarkable role in social communication. The identification of these expressions by human beings is relatively easy and accurate. However, achieving the same result in 3D by machine remains a challenge in computer vision, owing to the current difficulties of 3D facial data acquisition, such as the lack of homology and the complex mathematical analysis required for facial point digitization. This study proposes facial expression recognition in humans using multi-points warping for 3D facial landmarks, building a template mesh as a reference object. This template mesh is then applied to each target mesh in the Stirling/ESRC and Bosphorus datasets. The semi-landmarks are allowed to slide along tangents to the curves and surfaces until the bending energy between the template and a target form is minimal, and localization error is assessed using Procrustes ANOVA. Features are selected using Principal Component Analysis (PCA), and classification is performed using Linear Discriminant Analysis (LDA). Results The localization error is validated on the two datasets with superior performance over state-of-the-art methods, and variation in expression is visualized using Principal Components (PCs). The deformations show the various expression regions of the faces. The results indicate that the Sad expression has the lowest recognition accuracy on both datasets. The classifier achieved recognition accuracies of 99.58% and 99.32% on Stirling/ESRC and Bosphorus, respectively. Conclusion The results demonstrate that the method is robust and in agreement with state-of-the-art results.
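
As a rough illustration of the PCA-plus-LDA stage described above, the following scikit-learn sketch reduces flattened 3D landmark coordinates with PCA and classifies them with LDA; the placeholder data, array shapes and the 95% variance threshold are assumptions, not the paper's settings.

```python
# Minimal sketch of a PCA -> LDA pipeline (not the authors' code).
# `landmarks` is assumed to be an (n_samples, n_points * 3) array of
# flattened 3D semi-landmark coordinates; `labels` holds expression classes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
landmarks = rng.normal(size=(200, 500 * 3))   # placeholder data
labels = rng.integers(0, 7, size=200)         # 7 basic expressions

X_train, X_test, y_train, y_test = train_test_split(
    landmarks, labels, test_size=0.2, random_state=0)

pca = PCA(n_components=0.95)                  # keep PCs explaining 95% variance
X_train_pc = pca.fit_transform(X_train)
X_test_pc = pca.transform(X_test)

lda = LinearDiscriminantAnalysis()
lda.fit(X_train_pc, y_train)
print("accuracy:", lda.score(X_test_pc, y_test))
```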

2017 ◽  
Vol 77 (1) ◽  
pp. 917-937 ◽  
Author(s):  
Muhammad Hameed Siddiqi ◽  
Maqbool Ali ◽  
Mohamed Elsayed Abdelrahman Eldib ◽  
Asfandyar Khan ◽  
Oresti Banos ◽  
...  

Electronics ◽  
2019 ◽  
Vol 8 (4) ◽  
pp. 385 ◽  
Author(s):  
Ying Chen ◽  
Zhihao Zhang ◽  
Lei Zhong ◽  
Tong Chen ◽  
Juxiang Chen ◽  
...  

Near-infrared (NIR) facial expression recognition is resistant to illumination change. In this paper, we propose a three-stream three-dimensional convolutional neural network with a squeeze-and-excitation (SE) block for NIR facial expression recognition. Each stream is fed with a different local region, namely the eyes, nose, or mouth. By using an SE block, the network automatically allocates weights to the different local features to further improve recognition accuracy. Experimental results on the Oulu-CASIA NIR facial expression database show that the proposed method achieves a higher recognition rate than several state-of-the-art algorithms.
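
For readers unfamiliar with SE blocks, the PyTorch sketch below shows a generic squeeze-and-excitation block for 3D feature maps of the kind a three-stream 3D CNN would produce; the channel count and reduction ratio are illustrative assumptions, not the paper's configuration.

```python
# A minimal squeeze-and-excitation (SE) block for 3D feature maps, following
# the generic SE design; layer sizes here are illustrative.
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool3d(1)          # global average pool
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                               # per-channel weights
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                    # reweight channels

x = torch.randn(2, 64, 8, 28, 28)   # (batch, channels, frames, H, W)
print(SEBlock3D(64)(x).shape)
```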


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Qing Lin ◽  
Ruili He ◽  
Peihe Jiang

State-of-the-art facial expression recognition methods outperform human beings, thanks especially to the success of convolutional neural networks (CNNs). However, most existing works focus on analyzing adult faces and ignore two important problems: how can we recognize facial expressions from a baby's face image, and how difficult is it? In this paper, we first introduce a new face image database, named BabyExp, which contains 12,000 images of babies younger than two years old, each labeled with one of three facial expressions (happy, sad, or normal). To the best of our knowledge, the proposed dataset is the first baby face dataset for analyzing baby face images; it is complementary to existing adult face datasets and can shed some light on baby face analysis. We also propose a feature-guided CNN method with a new loss function, called distance loss, to optimize interclass distance. To facilitate further research, we provide an expression recognition benchmark on the BabyExp dataset. Experimental results show that the proposed network achieves a recognition accuracy of 87.90% on BabyExp.
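
The paper's exact distance loss is not reproduced here, but the hedged PyTorch sketch below shows one plausible way to optimize interclass distance: pushing apart per-class feature centroids within a batch. The margin value and feature dimensions are assumptions.

```python
# Hedged sketch of a "distance loss" encouraging interclass separation by
# penalizing pairs of class centroids closer than a margin; one plausible
# variant, not the paper's exact formulation.
import torch

def distance_loss(features: torch.Tensor, labels: torch.Tensor,
                  margin: float = 10.0) -> torch.Tensor:
    classes = labels.unique()
    centers = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    loss = features.new_zeros(())
    n_pairs = 0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            d = torch.norm(centers[i] - centers[j])
            loss = loss + torch.clamp(margin - d, min=0.0)  # hinge on distance
            n_pairs += 1
    return loss / max(n_pairs, 1)

feats = torch.randn(32, 128, requires_grad=True)   # stand-in CNN features
labs = torch.randint(0, 3, (32,))                  # happy / sad / normal
print(distance_loss(feats, labs))
```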


Symmetry ◽  
2019 ◽  
Vol 11 (1) ◽  
pp. 52 ◽  
Author(s):  
Xianzhang Pan ◽  
Wenping Guo ◽  
Xiaoying Guo ◽  
Wenshu Li ◽  
Junjie Xu ◽  
...  

The proposed method has 30 streams: 15 spatial streams and 15 temporal streams, with each spatial stream paired with a temporal stream; this pairing relates the work to the concept of symmetry. Classifying video-based facial expressions is difficult owing to the gap between visual descriptors and emotions. To bridge this gap, a new video descriptor for facial expression recognition is presented that aggregates spatial and temporal convolutional features across the entire extent of a video. The designed framework integrates the 30 streams with a trainable spatial-temporal feature aggregation layer and is end-to-end trainable for video-based facial expression recognition. It can therefore effectively avoid overfitting to the limited emotional video datasets, and the trainable aggregation strategy learns a better representation of an entire video. Different schemas for pooling spatial-temporal features are investigated, and the spatial and temporal streams are best aggregated by the proposed method. Extensive experiments on two public databases, BAUM-1s and eNTERFACE05, show that this framework has promising performance and outperforms state-of-the-art strategies.
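
As a rough sketch of a trainable aggregation layer over many streams, the PyTorch snippet below pools 30 per-stream feature vectors with learned softmax weights before classification; the feature dimension, class count and the specific weighting scheme are illustrative assumptions rather than the paper's actual layer.

```python
# Minimal sketch of trainable stream aggregation: 30 per-stream feature
# vectors combined by learned softmax weights, then classified.
import torch
import torch.nn as nn

class StreamAggregation(nn.Module):
    def __init__(self, n_streams: int = 30, feat_dim: int = 256,
                 n_classes: int = 7):
        super().__init__()
        self.stream_logits = nn.Parameter(torch.zeros(n_streams))  # learned
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, stream_feats: torch.Tensor) -> torch.Tensor:
        # stream_feats: (batch, n_streams, feat_dim)
        w = torch.softmax(self.stream_logits, dim=0)   # weights sum to 1
        pooled = (stream_feats * w.view(1, -1, 1)).sum(dim=1)
        return self.classifier(pooled)

feats = torch.randn(4, 30, 256)          # 15 spatial + 15 temporal streams
print(StreamAggregation()(feats).shape)  # -> (4, 7)
```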


2018 ◽  
Vol 27 (08) ◽  
pp. 1850121 ◽  
Author(s):  
Zhe Sun ◽  
Zheng-Ping Hu ◽  
Raymond Chiong ◽  
Meng Wang ◽  
Wei He

Recent research has demonstrated the effectiveness of deep subspace learning networks, including the principal component analysis network (PCANet) and linear discriminant analysis network (LDANet), since they can extract high-level features and better represent the abstract semantics of given data. However, their representations do not consider the nonlinear relationships within data, which limits the use of features under nonlinear metrics. In this paper, we propose a novel architecture combining kernel collaborative representation with deep subspace learning based on the PCANet and LDANet for facial expression recognition. First, the PCANet and LDANet are employed to learn abstract features. These features are then mapped into a kernel space to effectively capture their nonlinear similarities. Finally, we develop a simple yet effective classification method with squared ℓ2-regularization, which improves recognition accuracy and reduces time complexity. Comprehensive experimental results on the JAFFE, CK+, KDEF and CMU Multi-PIE datasets confirm that our proposed approach is superior not just in accuracy but also in robustness against block occlusion and varying parameter configurations.
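
The NumPy sketch below illustrates kernel collaborative representation classification with squared ℓ2-regularization in the spirit described above; the RBF kernel choice, the regularization constant and the random stand-in features (in place of PCANet/LDANet outputs) are all assumptions.

```python
# Hedged sketch of kernel collaborative representation classification (CRC):
# solve a ridge-regularized representation in kernel space, then assign the
# class with the smallest class-wise reconstruction residual.
import numpy as np

def rbf_kernel(A, B, gamma=1e-3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_crc_predict(X, y_train, query, lam=1e-2):
    K = rbf_kernel(X, X)
    k_q = rbf_kernel(X, query[None, :])[:, 0]
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), k_q)  # squared l2 reg.
    residuals = {}
    for c in np.unique(y_train):
        a = np.where(y_train == c, alpha, 0.0)   # keep class-c coefficients
        # ||phi(q) - Phi a||^2 up to the constant k(q, q) term
        residuals[c] = a @ K @ a - 2.0 * a @ k_q
    return min(residuals, key=residuals.get)

X = np.random.randn(60, 128)                     # stand-in deep features
y = np.repeat(np.arange(3), 20)
print(kernel_crc_predict(X, y, X[5] + 0.01))     # should recover class 0
```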


2021 ◽  
Vol 7 ◽  
pp. e736
Author(s):  
Olufisayo Ekundayo ◽  
Serestina Viriri

Facial Expression Recognition (FER) has gained considerable attention in affective computing due to its vast range of applications. Diverse approaches and methods have been explored for robust FER, but only a few works have considered the intensity of the emotion embedded in an expression. Even the available studies on expression intensity estimation only assign a nominal/regression value or classify emotion into a range of intervals: most present only the intensity estimation, while others propose methods that predict emotion and intensity in separate channels. These multiclass approaches and their extensions do not conform to the human heuristic manner of recognizing an emotion and estimating its intensity together. This work presents a Multilabel Convolutional Neural Network (ML-CNN)-based model that simultaneously recognizes an emotion and provides ordinal metrics as an estimate of its intensity. The proposed ML-CNN is enhanced with an aggregation of the Binary Cross-Entropy (BCE) loss and Island Loss (IL) functions to minimize intraclass and interclass variations, and the model is pre-trained with the Visual Geometry Group network (VGG-16) to control overfitting. In experiments conducted on the Binghamton University 3D Facial Expression (BU-3DFE) and extended Cohn-Kanade (CK+) datasets, we evaluate ML-CNN's performance in terms of accuracy and loss, and we carry out a comparative study of our model against popular multilabel algorithms using standard multilabel metrics. The ML-CNN model simultaneously predicts emotion and intensity using ordinal metrics, and shows appreciable, superior performance over four standard multilabel algorithms: Classifier Chain (CC), distinct Random K-label set (RAKEL), Multilabel K-Nearest Neighbour (MLKNN) and Multilabel ARAM (MLARAM).
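
As a hedged sketch of a multilabel head of this kind, the snippet below attaches sigmoid outputs for emotions plus cumulative ordinal intensity bits on top of stand-in backbone features and trains with BCE; the label layout (6 emotions, 4 intensity levels) is an illustrative assumption, and the Island Loss term is omitted for brevity.

```python
# Hedged sketch of a multilabel output head: one sigmoid per emotion plus
# cumulative ("ordinal") intensity bits, trained with BCE. The Island Loss
# term aggregated in the paper is not reproduced here.
import torch
import torch.nn as nn

N_EMOTIONS, N_LEVELS = 6, 4        # illustrative label layout

head = nn.Linear(512, N_EMOTIONS + N_LEVELS)   # atop a VGG-16-style backbone
bce = nn.BCEWithLogitsLoss()

features = torch.randn(8, 512)                 # stand-in backbone features
# Ordinal encoding: intensity level 3 of 4 -> [1, 1, 1, 0]
targets = torch.zeros(8, N_EMOTIONS + N_LEVELS)
targets[:, 0] = 1.0                            # emotion: e.g. "happy"
targets[:, N_EMOTIONS:N_EMOTIONS + 3] = 1.0    # intensity: level 3

loss = bce(head(features), targets)
print(loss)
```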


Algorithms ◽  
2019 ◽  
Vol 12 (11) ◽  
pp. 227 ◽  
Author(s):  
Yingying Wang ◽  
Yibin Li ◽  
Yong Song ◽  
Xuewen Rong

In recent years, with the development of artificial intelligence and human-computer interaction, more attention has been paid to the recognition and analysis of facial expressions. Despite great success, many unsatisfying problems remain because facial expressions are subtle and complex, so facial expression recognition is still a challenging problem. In most papers, the entire face image is chosen as the input. Yet in daily life, people can perceive another person's current emotion from just a few facial components (such as the eyes, mouth and nose), while other areas of the face (such as hair, skin tone and ears) play a smaller role in determining emotion. If the entire face image is used as the only input, the system will produce some unnecessary information and miss some important information during feature extraction. To solve this problem, this paper proposes a method that combines multiple sub-regions and the entire face image by weighting, which captures more of the important feature information and thereby improves recognition accuracy. The proposed method was evaluated on four well-known publicly available facial expression databases: JAFFE, CK+, FER2013 and SFEW. It showed better performance than most state-of-the-art methods.
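
A minimal PyTorch sketch of the weighting idea follows, assuming four region-specific feature branches (whole face, eyes, nose, mouth) whose outputs are scaled and concatenated before classification; the weight values and dimensions are illustrative, not the paper's.

```python
# Hedged sketch: combine sub-region features with whole-face features by
# weighting, then classify. Region weights here are assumed, not learned
# or taken from the paper.
import torch
import torch.nn as nn

feat_dim, n_classes = 128, 7
regions = ["face", "eyes", "nose", "mouth"]
weights = {"face": 0.4, "eyes": 0.25, "nose": 0.1, "mouth": 0.25}  # assumed

# Stand-in per-region features from four region-specific CNN branches.
region_feats = {r: torch.randn(2, feat_dim) for r in regions}

# Weight each region's features, then concatenate and classify.
fused = torch.cat([weights[r] * region_feats[r] for r in regions], dim=1)
classifier = nn.Linear(feat_dim * len(regions), n_classes)
print(classifier(fused).shape)    # -> (2, 7)
```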


Author(s):  
Yiming Wang ◽  
Xinghui Dong ◽  
Gongfa Li ◽  
Junyu Dong ◽  
Hui Yu

Abstract Facial expression recognition has seen rapid development in recent years due to its wide range of applications, such as human-computer interaction, health care, and social robots. Although significant progress has been made in this field, it is still challenging to recognize facial expressions under occlusions and large head poses. To address these issues, this paper presents a cascade regression-based face frontalization (CRFF) method, which aims to immediately reconstruct a clean, frontal and expression-aware face from an in-the-wild facial image. In the first stage, a frontal facial shape is predicted by a cascade regression model that learns the pairwise spatial relation between a non-frontal face shape and its frontal counterpart. Unlike most existing shape prediction methods, which use single-step regression, the cascade model is a multi-step regressor that gradually aligns a non-frontal shape to its frontal view. We employ several different regressors and make an ensemble decision to boost prediction performance. For facial texture reconstruction, active appearance model instantiation is employed to warp the input face to the predicted frontal shape and generate a clean face. To remove occlusions, we train this generative model on manually selected clean-face sets, which ensures that a clean face is generated regardless of whether the input face is occluded. Unlike existing face reconstruction methods, which are computationally expensive, the proposed method works in real time, making it suitable for dynamic analysis of facial expression. Experimental validation shows that the ensemble cascade model improves frontal shape prediction accuracy by an average of 5% and that the proposed method achieves superior performance on both static and dynamic recognition of facial expressions over state-of-the-art approaches. The results demonstrate that the proposed method achieves expression-preserving frontalization and de-occlusion, and improves facial expression recognition performance.
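
The scikit-learn sketch below illustrates the general cascade regression idea on synthetic shapes: each stage regresses the residual from the current shape estimate toward the frontal target, and the stages are applied sequentially at test time. The stage count, ridge regressor choice and the data are assumptions, not the paper's setup.

```python
# Hedged sketch of cascade shape regression: stage t fits a regressor from
# the current shape estimate to the residual toward the frontal target.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_nonfrontal = rng.normal(size=(300, 68 * 2))   # flattened 68-point shapes
Y_frontal = (X_nonfrontal @ rng.normal(scale=0.1, size=(136, 136))
             + rng.normal(size=(300, 136)))     # synthetic frontal targets

stages, current = [], X_nonfrontal.copy()
for _ in range(3):                              # three cascade stages
    reg = Ridge(alpha=1.0).fit(current, Y_frontal - current)
    stages.append(reg)
    current = current + reg.predict(current)    # gradual alignment

def frontalize(shape):
    s = shape.copy()
    for reg in stages:                          # apply stages sequentially
        s = s + reg.predict(s[None, :])[0]
    return s

print(np.linalg.norm(frontalize(X_nonfrontal[0]) - Y_frontal[0]))
```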


2019 ◽  
Vol 8 (4) ◽  
pp. 9782-9787

Facial expression recognition is an important task for machines: recognizing different expressive alterations in an individual. Emotions have a strong relationship with our behavior; human emotions are discrete reactions to internal or external events that carry meaning. Automatic emotion detection is the process of understanding an individual's expressive state to identify their intentions from facial expression, which is also a noteworthy part of non-verbal communication. In this paper we propose a framework that combines discriminative features discovered using Convolutional Neural Networks (CNNs) to enhance the performance and accuracy of facial expression recognition. We use the pre-trained Inception V3 CNN architecture, concatenate an intermediate layer with the final layer, and pass the result through a fully connected layer to perform classification. We evaluate on the JAFFE (Japanese Female Facial Expression) dataset, and experimental results show that the proposed method performs better and improves recognition accuracy.
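
A hedged Keras sketch of the described fusion follows, assuming the intermediate branch is taken from Inception V3's mixed7 layer (the paper does not specify which layer): pooled intermediate and final features are concatenated and passed through a fully connected layer for 7-way classification.

```python
# Hedged sketch: concatenate pooled features from an intermediate Inception V3
# layer ("mixed7", an assumption) with the final pooled features, then classify
# through a fully connected layer. 7 classes match JAFFE's expression set.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
mid = layers.GlobalAveragePooling2D()(base.get_layer("mixed7").output)
top = layers.GlobalAveragePooling2D()(base.output)
x = layers.Concatenate()([mid, top])
x = layers.Dense(256, activation="relu")(x)        # fully connected layer
out = layers.Dense(7, activation="softmax")(x)     # 7 JAFFE expressions

model = Model(base.input, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```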

