Dynamic Facial Expression Recognition Using Sparse Reserved Projection Algorithm for Low Illumination Images

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Hui Li

In this paper, a novel approach to facial expression recognition based on sparse reserved projection is proposed. The locality preserving projection (LPP) algorithm is used to reduce the dimensionality of the face image data while preserving the local near-neighbor relationships among face images. A sparse representation method is used to address partial occlusion of the face and illumination imbalance. Through sparse reconstruction, both the sparse reconstruction information and the local neighborhood information of an expression are retained, so that more effective and discriminative internal features can be extracted from the original expression data, and the resulting projection is relatively stable. Recognition results on the CK+ expression database show that this method effectively improves the facial expression recognition rate.
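As a rough illustration of the dimensionality-reduction step, the sketch below implements a basic locality preserving projection (LPP) in Python. The k-nearest-neighbour graph, heat-kernel weights, and regularization constant are standard textbook choices, not details taken from the paper.

```python
# A minimal LPP sketch, assuming flattened face images as row vectors.
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=30, k=5, t=1.0):
    """X: (n_samples, n_features) flattened face images."""
    n = X.shape[0]
    # Pairwise squared distances for the k-nearest-neighbour graph.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]   # skip self at column 0
    for i in range(n):
        W[i, knn[i]] = np.exp(-d2[i, knn[i]] / t)   # heat-kernel weights
    W = np.maximum(W, W.T)                     # symmetrise the graph
    D = np.diag(W.sum(1))
    L = D - W                                  # graph Laplacian
    # Generalized eigenproblem X^T L X a = lambda X^T D X a;
    # the smallest eigenvectors preserve local neighbourhoods.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])   # regularise for stability
    vals, vecs = eigh(A, B)
    return vecs[:, :n_components]              # projection matrix

X = np.random.rand(40, 64)                     # stand-in for flattened faces
P = lpp(X)
print((X @ P).shape)                           # (40, 30)
```

In the paper's method the sparse reconstruction term is added on top of this locality-preserving objective; the sketch shows only the LPP part.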

Author(s):  
Michael Thiruthuvanathan ◽  
Balachandran Krishnan

Recognizing facial features to detect emotions has always been an interesting research topic in the fields of computer vision and cognitive emotional analysis. In this research, a model to detect and classify emotions using deep convolutional neural networks (DCNNs) is explored. The model classifies the primary emotions (anger, disgust, fear, happiness, sadness, surprise, and neutral) using a progressive learning approach for a facial expression recognition (FER) system. The proposed model (EmoNet) is built on a linear growing-shrinking filter scheme that extracts robust features for learning and yields improved classification accuracy. EmoNet incorporates progressive resizing (PR) of images, which improves learning from the emotional datasets by adding more image data for training and validation and raised the model's accuracy by 5%. Cross-validation was carried out on the model, preparing it for testing on new data. EmoNet shows improved accuracy, precision, and recall owing to the progressive learning framework, hyperparameter tuning, image augmentation, and control of generalization and bias on the images. These results are compared with existing emotional-analysis models on the datasets most prominently available for research. Together, the methods, image data, and fine-tuned model achieve accuracies of 83.6%, 78.4%, 98.1%, and 99.5% on FER2013, IMFDB, CK+, and JAFFE, respectively, for an overall accuracy of 90% across the four datasets.
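The progressive-resizing idea can be sketched as follows: the same CNN weights are trained first on small images and then continue training on larger ones. The layer sizes, image resolutions, and training loop below are illustrative assumptions, not the authors' EmoNet architecture.

```python
# Hedged sketch of progressive resizing with a tiny CNN (PyTorch).
import torch
import torch.nn as nn

class SmallEmotionCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)    # makes the head size-agnostic
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.head(x)

model = SmallEmotionCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Progressive resizing: the adaptive pooling lets the same weights consume
# 48x48 images in stage one and 96x96 images in stage two.
for size in (48, 96):
    x = torch.randn(8, 1, size, size)          # stand-in batch
    y = torch.randint(0, 7, (8,))
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(f"stage {size}px, loss {loss.item():.3f}")
```

The size-agnostic pooling head is one common way to let a single network accept growing resolutions; the paper does not specify how EmoNet handles this.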


2014 ◽  
Vol 543-547 ◽  
pp. 2350-2353
Author(s):  
Xiao Yan Wan

In order to extract the expression features of critically ill patients and realize intelligent computer-assisted nursing, an improved facial expression recognition method based on the active appearance model (AAM) is proposed. A support vector machine (SVM) is adopted for expression classification, an AAM-based face recognition model structure is designed, and an attribute reduction algorithm from rough set theory combined with affine transformation is introduced to remove invalid and redundant feature points. The expressions of critically ill patients are then classified and recognized by the SVM. Face image poses are adjusted, improving the method's adaptability to the varied poses of critical patients; the new method thus overcomes, to a certain extent, the effect of patient pose on the recognition rate, raising the highest average recognition rate by about 7%. Intelligent monitoring and nursing care of critically ill patients are realized through computer vision, enhancing nursing quality and helping ensure timely treatment.
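A minimal sketch of the classification stage is shown below, with scikit-learn's mutual-information feature selector standing in for the paper's rough-set attribute reduction (an assumption) and an RBF SVM as the classifier; the landmark data are random stand-ins.

```python
# Sketch: SVM expression classification after discarding low-value features.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 68 * 2))   # stand-in for AAM landmark coordinates
y = rng.integers(0, 4, size=120)     # four illustrative expression classes

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=40),   # drop redundant feature points
    SVC(kernel="rbf", C=1.0),                 # expression classifier
)
clf.fit(X[:100], y[:100])
print("held-out accuracy:", clf.score(X[100:], y[100:]))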


Author(s):  
Gopal Krishan Prajapat ◽  
Rakesh Kumar

Facial feature extraction and recognition play a prominent role in human non-verbal interaction; together with pose, speech, behaviour, and actions, facial expression is one of the crucial channels for conveying information about a human being's intentions and emotions. In this article, an extended local binary pattern (ELBP) is used for feature extraction and principal component analysis (PCA) for dimensionality reduction. The projections of the sample and model images are computed and compared by Euclidean distance. The combination of extended local binary pattern and PCA (ELBP+PCA) improves recognition accuracy and also reduces evaluation complexity. The evaluation of the proposed facial expression recognition approach focuses on the recognition rate, and a series of tests is performed to validate the algorithms and compare their accuracy on the JAFFE and Extended Cohn-Kanade databases.
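A minimal sketch of this pipeline follows, assuming skimage's uniform LBP as a stand-in for the extended LBP variant: LBP histograms are projected by PCA and matched to the nearest model image by Euclidean distance.

```python
# Sketch of LBP features + PCA + Euclidean nearest-neighbour matching.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def lbp_hist(img, P=8, R=1):
    # Quantise to uint8 first; LBP expects integer intensities.
    codes = local_binary_pattern((img * 255).astype(np.uint8), P, R,
                                 method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(1)
gallery = rng.random((30, 48, 48))                  # stand-in "model" images
probe = gallery[3] + 0.01 * rng.random((48, 48))    # noisy copy of image 3

feats = np.array([lbp_hist(im) for im in gallery])
pca = PCA(n_components=8).fit(feats)
proj = pca.transform(feats)
q = pca.transform(lbp_hist(probe)[None, :])

nearest = np.argmin(np.linalg.norm(proj - q, axis=1))   # Euclidean matching
print("best match:", nearest)                           # expected: 3
```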


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Yifeng Zhao ◽  
Deyun Chen

Aiming at the problem of facial expression recognition under unconstrained conditions, a facial expression recognition method based on an improved capsule network model is proposed. First, the illumination of the expression image is normalized using an improved Weber-face method, and facial key points are detected with a Gaussian process regression tree. Then, a 3D morphable model (3DMM) is introduced: a 3D face shape consistent with the face in the image is obtained by iterative estimation, further improving the image quality of face pose standardization. Because the convolution features used in facial expression recognition must be trained from scratch, as many different samples as possible are added during training. Finally, this paper combines traditional deep learning techniques with the capsule configuration, adds an attention layer after the primary capsule layer of the capsule network, and proposes an improved capsule structure model suited to expression recognition. Experimental results on the JAFFE and BU-3DFE datasets show that the recognition rate reaches 96.66% and 80.64%, respectively.
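The attention-after-primary-capsules idea might look like the sketch below: each capsule vector is squashed and then reweighted by a learned per-capsule attention score. All dimensions are illustrative assumptions; this is not the authors' exact model.

```python
# Hedged sketch of a primary-capsule layer followed by an attention layer.
import torch
import torch.nn as nn

def squash(s, dim=-1, eps=1e-8):
    # Capsule nonlinearity: short vectors shrink toward zero,
    # long vectors approach unit length.
    n2 = (s ** 2).sum(dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

class PrimaryCapsWithAttention(nn.Module):
    def __init__(self, in_ch=1, caps_dim=8, n_caps_maps=4):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, n_caps_maps * caps_dim, 9, stride=2)
        self.caps_dim = caps_dim
        self.attn = nn.Linear(caps_dim, 1)      # per-capsule attention score

    def forward(self, x):
        u = self.conv(x)                        # (B, maps*dim, H, W)
        u = u.view(x.shape[0], -1, self.caps_dim)   # (B, n_capsules, dim)
        u = squash(u)
        a = torch.softmax(self.attn(u), dim=1)  # attention over capsules
        return u * a                            # reweighted capsule outputs

caps = PrimaryCapsWithAttention()
out = caps(torch.randn(2, 1, 28, 28))
print(out.shape)    # (2, n_capsules, 8)
```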


Author(s):  
YU-YI LIAO ◽  
JZAU-SHENG LIN ◽  
SHEN-CHUAN TAI

In this paper, a facial expression recognition system based on a cerebellar model articulation controller with clustering memory (CMAC-CM) is presented. First, facial expression features are automatically preprocessed and extracted from still images in the JAFFE database, which contains frontal-view face images. Next, a block of low-frequency DCT coefficients, obtained by subtracting a neutral image from a given expression image, is rearranged into input vectors and fed into the CMAC-CM, which rapidly produces output via nonlinear mapping with a look-up table in the training and recognition phases. Finally, the experiments evaluate recognition rates for various low-frequency coefficient block sizes and weight-memory cluster sizes. A mean recognition rate of 92.86% is achieved on the testing images, and the CMAC-CM takes 0.028 seconds per image in the testing phase.
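The feature-extraction step is straightforward to sketch: subtract the neutral image, take the 2-D DCT, and keep a low-frequency block as the input vector. The 8x8 block size below is an assumed example, not the paper's tuned setting.

```python
# Sketch of the DCT difference-image features described above.
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(2)
neutral = rng.random((64, 64))                      # stand-in neutral face
expression = neutral + 0.1 * rng.random((64, 64))   # stand-in expression image

diff = expression - neutral            # keep only the expression change
coeffs = dctn(diff, norm="ortho")      # 2-D discrete cosine transform
block = coeffs[:8, :8]                 # low-frequency 8x8 block
feature = block.flatten()              # input vector for the CMAC-CM
print(feature.shape)                   # (64,)
```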


2011 ◽  
Vol 121-126 ◽  
pp. 617-621 ◽  
Author(s):  
Chang Yi Kao ◽  
Chin Shyurng Fahn

During the development of the facial expression classification procedure, we evaluated three machine learning methods and combined adaptive boosting algorithms (ABAs) with classification and regression trees (CARTs), which select weak classifiers and integrate them into a strong classifier automatically. We present a highly automatic facial expression recognition system in which a face detection procedure first detects and locates human faces in image sequences acquired in real environments; no characteristic blocks need to be labelled or chosen in advance. In the face detection procedure, geometrical properties are applied to eliminate skin-color regions that do not belong to human faces. In the facial feature extraction procedure, binarization and edge detection are performed only on the proper ranges of the eyes, mouth, and eyebrows to obtain 16 landmarks of a human face, from which 16 characteristic distances representing an expression are produced. The facial expression classification procedure employs an ABA to recognize six kinds of expressions. The performance of the system is very satisfactory, with a recognition rate of more than 90%.
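The ABA-with-CART combination corresponds to boosting over shallow decision trees. A minimal sketch with scikit-learn (1.2 or later, which uses the `estimator` keyword) on random stand-ins for the 16 characteristic distances:

```python
# Sketch: AdaBoost over CART weak learners for expression classification.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 16))       # 16 characteristic distances per face
y = rng.integers(0, 6, size=300)     # six expression classes

clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=2),   # CART weak classifier
    n_estimators=100,
)
clf.fit(X[:250], y[:250])
print("held-out accuracy:", clf.score(X[250:], y[250:]))
```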


2015 ◽  
Vol 742 ◽  
pp. 257-260 ◽  
Author(s):  
Li Sai Li ◽  
Zi Lu Ying ◽  
Bin Bin Huang

This paper proposed a new algorithm for facial expression recognition (FER) based on the fusion of Gabor texture features and the centre binary pattern (CBP). First, Gabor texture features were extracted from every expression image using Gabor wavelet filters at five scales and eight orientations. Then, CBP features were extracted from the Gabor feature images, and the AdaBoost algorithm was used to select the final features from the CBP feature images. Finally, expression recognition was performed on the final features by the sparse representation-based classification (SRC) method. Experimental results on the Japanese Female Facial Expression (JAFFE) database demonstrated that the new algorithm achieves a much higher recognition rate than traditional algorithms.
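The first stage, a 5-scale by 8-orientation Gabor bank, can be sketched with skimage as below. The frequency spacing is an assumption, and the CBP, AdaBoost, and SRC stages are omitted; this shows only the extraction of the Gabor feature images.

```python
# Sketch of a 5x8 Gabor filter bank producing magnitude feature images.
import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(4)
face = rng.random((48, 48))            # stand-in expression image

responses = []
for scale in range(5):                 # five scales
    freq = 0.4 / (2 ** (scale / 2))    # assumed frequency spacing
    for k in range(8):                 # eight orientations
        real, imag = gabor(face, frequency=freq, theta=k * np.pi / 8)
        responses.append(np.hypot(real, imag))   # magnitude response

features = np.stack(responses)         # (40, 48, 48) Gabor feature images
print(features.shape)
```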

