Facial Expression Identification using Regularized Supervised Distance Preserving Projection

2021 ◽  
Vol 69 (2) ◽  
pp. 70-75
Author(s):  
Sohana Jahan ◽  
Moriyam Akter ◽  
Sifta Yeasmin ◽  
Farhana Ahmed Simi

With the rapid development of computer vision and artificial intelligence, facial expression recognition has become one of the most reliable and key technologies of advanced human-computer interaction. Nowadays, there has been growing interest in improving expression recognition techniques. In most cases, an automatic recognition system's efficiency depends on the facial expression features it is given: even the best classifier may fail to achieve a good recognition rate if inadequate features are provided. Therefore, feature extraction is a crucial step of the facial expression recognition process. In this paper, we have used Regularized Supervised Distance Preserving Projection for extracting the best features of the images. Numerical experiments show that this technique outperforms many state-of-the-art approaches in terms of recognition rate. Dhaka Univ. J. Sci. 69(2): 70-75, 2021 (July)
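The core idea behind supervised distance preserving projections can be sketched as follows: learn a linear projection P so that, within each point's input-space neighbourhood, pairwise distances in the projected space match pairwise distances in the label space, with a regularization term on P. The neighbourhood size, learning rate, and plain gradient descent below are illustrative assumptions, not the paper's exact formulation or solver.

```python
import numpy as np

def rsdpp_sketch(X, y, dim=2, k=3, lam=0.1, lr=1e-4, iters=200, seed=0):
    """Gradient-descent sketch of an RSDPP-style objective:
    sum over k-NN pairs of (||P(x_i - x_j)||^2 - (y_i - y_j)^2)^2
    plus the regularizer lam * ||P||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    P = rng.normal(scale=0.1, size=(dim, d))
    # precompute k-nearest-neighbour pairs in the input space
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    pairs = [(i, j) for i in range(n) for j in np.argsort(D[i])[1:k + 1]]
    losses = []
    for _ in range(iters):
        grad = 2 * lam * P
        loss = lam * np.sum(P**2)
        for i, j in pairs:
            xd = X[i] - X[j]
            e = np.sum((P @ xd)**2) - (y[i] - y[j])**2
            loss += e**2
            grad += 4 * e * np.outer(P @ xd, xd)   # d(e^2)/dP
        P -= lr * grad
        losses.append(loss)
    return P, losses
```

On a toy regression-labelled dataset the objective drops steadily, which is the behaviour the feature-extraction step relies on before a classifier is attached.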

2021 ◽  
Vol 2083 (3) ◽  
pp. 032030
Author(s):  
Cui Dong ◽  
Rongfu Wang ◽  
Yuanqin Hang

Abstract With the development of artificial intelligence, facial expression recognition based on deep learning has become a current research hotspot. This article analyzes and improves the VGG16 network. First, the three fully connected layers of the original network are replaced with two convolutional layers and one fully connected layer, which reduces the complexity of the network. Then, the max pooling in the network is changed to local-based adaptive pooling, which helps the network select feature information more conducive to facial expression recognition. On the facial expression datasets RAF-DB and SFEW, the recognition rate increased by 4.7% and 7%, respectively.
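The complexity reduction from swapping VGG16's three fully connected layers for two convolutions and one small fully connected layer can be illustrated by counting parameters. The replacement layer shapes below (3×3 convolutions, global pooling, a 7-way output) are assumptions for illustration; the article does not specify them here.

```python
# Parameter counts (weights + biases) for VGG16's original classifier head
# versus a hypothetical two-conv + one-FC replacement.

def fc_params(n_in, n_out):
    return n_in * n_out + n_out

def conv_params(c_in, c_out, k=3):
    return c_in * c_out * k * k + c_out

# Original VGG16 head: 512*7*7 -> 4096 -> 4096 -> 1000
original = (fc_params(512 * 7 * 7, 4096)
            + fc_params(4096, 4096)
            + fc_params(4096, 1000))

# Assumed replacement: 3x3 conv 512->256, 3x3 conv 256->128,
# global pooling to 128 features, then FC 128 -> 7 expression classes
replacement = (conv_params(512, 256)
               + conv_params(256, 128)
               + fc_params(128, 7))

print(original)     # 123,642,856 parameters
print(replacement)  # 1,475,847 parameters
```

Even with generous assumed channel widths, the convolutional head is roughly two orders of magnitude smaller, which is the "reduced complexity" the abstract refers to.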


2014 ◽  
Vol 543-547 ◽  
pp. 2350-2353
Author(s):  
Xiao Yan Wan

In order to extract the expression features of critically ill patients and realize computer-aided intelligent nursing, an improved facial expression recognition method based on the active appearance model is proposed. A support vector machine (SVM) is used for facial expression recognition, the structure of the active appearance model is designed, and an attribute reduction algorithm based on rough set and affine transformation theory is introduced to remove invalid and redundant feature points. The expressions of critically ill patients are classified and recognized with the SVM. The face image poses are adjusted, improving the self-adaptive performance of facial expression recognition under varying patient poses. The new method overcomes, to a certain extent, the effect of patient pose on the recognition rate, and the highest average recognition rate is increased by about 7%. Intelligent monitoring and nursing care of critically ill patients are realized with computer vision, enhancing nursing quality and ensuring timely treatment.
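The SVM classification step can be sketched with a minimal linear SVM trained by the Pegasos stochastic subgradient scheme; the paper's actual kernel and rough-set-reduced feature inputs are not specified here, so the training routine and toy usage below are purely illustrative.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=100, seed=0):
    """Minimal linear SVM via the Pegasos stochastic subgradient method.
    A constant column folds the bias into w; y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    Xa = np.hstack([X, np.ones((len(X), 1))])
    n, d = Xa.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            if y[i] * (Xa[i] @ w) < 1:     # margin violated: shrink + push
                w = (1 - eta * lam) * w + eta * y[i] * Xa[i]
            else:                          # margin satisfied: shrink only
                w = (1 - eta * lam) * w
    return w

def predict(w, X):
    Xa = np.hstack([X, np.ones((len(X), 1))])
    return np.where(Xa @ w >= 0, 1, -1)
```

A multi-class expression classifier would wrap this in a one-vs-rest loop, one binary SVM per expression category.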


2020 ◽  
pp. 57-63

Human facial emotion recognition has attracted interest in the field of artificial intelligence. The emotions on a human face depict what is going on inside the mind. Facial expression recognition is the part of facial recognition that is gaining importance, and the need for it is increasing tremendously. Though there are methods to identify expressions using machine learning and artificial intelligence techniques, this work uses convolutional neural networks to recognize expressions and classify them into six emotion categories. The datasets investigated and explored for training the expression recognition models are described in this paper; the models used are VGG19 and ResNet18. Facial emotion recognition is combined with gender identification. Using the FER2013 and CK+ datasets, accuracies of around 73% and 94%, respectively, are achieved.
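The ResNet18 architecture mentioned above is built from "basic blocks": two 3×3 convolutions whose output is added back to the block's input before the final activation. A shape-level numpy sketch of the same-shape case, with linear maps standing in for the convolutions, shows the identity-shortcut structure:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def basic_block(x, w1, w2):
    """ResNet basic-block skeleton: out = relu(f(x) + x), where f is
    two linear maps with a ReLU between them, standing in for the two
    3x3 convolutions of ResNet18 (same-shape case, identity shortcut)."""
    return relu(w2 @ relu(w1 @ x) + x)
```

Note that with zero weights the block reduces to relu(x), which is why residual networks remain easy to optimize at depth: each block only needs to learn a correction to the identity.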


Author(s):  
Gopal Krishan Prajapat ◽  
Rakesh Kumar

Facial feature extraction and recognition play a prominent role in human non-verbal interaction; the face is one of the crucial channels, along with pose, speech, behaviour, and actions, for conveying information about the intentions and emotions of a human being. In this article an extended local binary pattern (ELBP) is used for feature extraction and principal component analysis (PCA) is used for dimensionality reduction. The projections of the sample and model images are calculated and compared by the Euclidean distance. The combination of extended local binary pattern and PCA (ELBP+PCA) improves the recognition rate and also reduces the evaluation complexity. The evaluation of the proposed facial expression recognition approach focuses on the performance of the recognition rate. A series of tests is performed to validate the algorithms and to compare the accuracy of the methods on the JAFFE and Extended Cohn-Kanade image databases.
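The basic local binary pattern underlying ELBP thresholds each pixel's eight neighbours against the centre and packs the results into a byte; a minimal version, plus the Euclidean nearest-model comparison, is sketched below (the "extended" variant and the PCA training itself are omitted).

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern for a 2D grayscale array."""
    # 8 neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= centre).astype(np.uint8) << bit
    return out

def nearest_model(sample_vec, model_vecs):
    """Index of the model projection closest in Euclidean distance."""
    d = np.linalg.norm(model_vecs - sample_vec, axis=1)
    return int(np.argmin(d))
```

In the ELBP+PCA pipeline, the LBP map (or its histogram) would be projected by the learned PCA basis before the nearest-model comparison.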


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Yifeng Zhao ◽  
Deyun Chen

Aiming at the problem of facial expression recognition under unconstrained conditions, a facial expression recognition method based on an improved capsule network model is proposed. Firstly, the expression image is illumination-normalized based on the improved Weber face, and the key points of the face are detected by a Gaussian process regression tree. Then, a 3D morphable model (3DMM) is introduced: a 3D face shape consistent with the face in the image is obtained by iterative estimation so as to further improve the image quality of face pose standardization. In this paper, we consider that the convolutional features used in facial expression recognition need to be trained from scratch, with as many different samples as possible added during training. Finally, this paper combines traditional deep learning techniques with the capsule configuration, adds an attention layer after the primary capsule layer of the capsule network, and proposes an improved capsule structure model suitable for expression recognition. The experimental results on the JAFFE and BU-3DFE datasets show that the recognition rate reaches 96.66% and 80.64%, respectively.
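The two capsule-specific ingredients mentioned, the squashing nonlinearity and an attention layer over the primary capsules, can be sketched as follows. The attention scoring here (a softmax over capsule norms) is an assumption for illustration; the paper's attention layer is not specified in this abstract.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Capsule squashing: keeps each vector's direction and maps its
    norm into [0, 1), so the norm can act as an existence probability."""
    sq = np.sum(s**2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def attend(capsules):
    """Toy attention over primary capsules (rows): weight each capsule
    vector by a softmax of its norm (illustrative scoring rule)."""
    norms = np.linalg.norm(capsules, axis=1)
    w = np.exp(norms - norms.max())
    w /= w.sum()
    return capsules * w[:, None]
```

A vector of norm 5 squashes to norm 25/26 ≈ 0.96, so strongly activated capsules saturate near 1 while weak ones stay near 0.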


Author(s):  
YU-YI LIAO ◽  
JZAU-SHENG LIN ◽  
SHEN-CHUAN TAI

In this paper, a facial expression recognition system based on a cerebellar model articulation controller with clustering memory (CMAC-CM) is presented. Firstly, the facial expression features are automatically preprocessed and extracted from still images in the JAFFE database, which contains frontal views of faces. Next, a block of lower-frequency DCT coefficients is obtained by subtracting a neutral image from a given expression image and rearranged into input vectors fed into the CMAC-CM, which can rapidly produce output using nonlinear mapping with a look-up table in the training or recognition phase. Finally, the experimental results demonstrate recognition rates for various block sizes of lower-frequency coefficients and cluster sizes of weight memory. A mean recognition rate of 92.86% is achieved on the testing images, and the CMAC-CM takes 0.028 seconds per test image in the testing phase.
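The feature step, subtracting a neutral face from an expression image and keeping a block of low-frequency DCT coefficients, can be sketched with an orthonormal DCT-II built directly from its cosine basis. The block size of 4 below is an arbitrary illustration, not the paper's setting.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)   # DC row has its own normalization
    return m

def low_freq_features(expr_img, neutral_img, block=4):
    """2D DCT of (expression - neutral), keeping the top-left
    low-frequency block as the feature vector."""
    diff = expr_img - neutral_img
    C = dct_matrix(diff.shape[0])
    coeffs = C @ diff @ C.T      # separable 2D DCT-II
    return coeffs[:block, :block].ravel()
```

A uniform brightness change between the two images lands entirely in the DC coefficient, which illustrates why differencing against the neutral face concentrates expression-related energy in a few low-frequency terms.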


2011 ◽  
Vol 121-126 ◽  
pp. 617-621 ◽  
Author(s):  
Chang Yi Kao ◽  
Chin Shyurng Fahn

During the development of the facial expression classification procedure, we evaluate three machine learning methods. We combine adaptive boosting algorithms (ABAs) with classification and regression trees (CARTs), which selects weak classifiers and integrates them into a strong classifier automatically. We present a highly automatic facial expression recognition system in which a face detection procedure first detects and locates human faces in image sequences acquired in real environments; no characteristic blocks need to be labeled or chosen in advance. In the face detection procedure, geometrical properties are applied to eliminate skin-color regions that do not belong to human faces. In the facial feature extraction procedure, binarization and edge detection are performed only on the proper ranges of the eyes, mouth, and eyebrows to obtain 16 landmarks of a human face, which further produce 16 characteristic distances representing an expression. We realize the facial expression classification procedure by employing an ABA to recognize six kinds of expressions. The performance of the system is very satisfactory, with a recognition rate of more than 90%.
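The ABA + CART combination, boosting automatically selected weak tree classifiers into a strong one, can be sketched with depth-one trees (decision stumps) as the weak learners over features such as the 16 characteristic distances. The stump search below is the standard binary AdaBoost scheme, not the authors' exact configuration.

```python
import numpy as np

def adaboost_stumps(X, y, rounds=10):
    """AdaBoost with decision stumps (depth-1 CARTs); y in {-1, +1}.
    Returns a list of weak learners (feature, threshold, polarity, alpha)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    model = []
    for _ in range(rounds):
        best = None  # (weighted error, feature, threshold, polarity, preds)
        for f in range(d):
            for thr in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, f, thr, pol, pred)
        err, f, thr, pol, pred = best
        err = max(err, 1e-12)                   # avoid log(1/0)
        alpha = 0.5 * np.log((1 - err) / err)   # learner weight
        w *= np.exp(-alpha * y * pred)          # re-weight samples
        w /= w.sum()
        model.append((f, thr, pol, alpha))
    return model

def ada_predict(model, X):
    score = sum(a * np.where(p * (X[:, f] - t) >= 0, 1, -1)
                for f, t, p, a in model)
    return np.where(score >= 0, 1, -1)
```

A six-expression classifier would run several such boosted binary classifiers (e.g. one-vs-rest) over the 16 distance features.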


2015 ◽  
Vol 742 ◽  
pp. 257-260 ◽  
Author(s):  
Li Sai Li ◽  
Zi Lu Ying ◽  
Bin Bin Huang

This paper proposes a new algorithm for facial expression recognition (FER) based on the fusion of Gabor texture features and the centre binary pattern (CBP). Firstly, Gabor texture features are extracted from every expression image using Gabor wavelet filters with five scales and eight orientations. Then CBP features are extracted from the Gabor feature images, and the AdaBoost algorithm is used to select the final features from the CBP feature images. Finally, expression recognition is performed on the final features by the sparse representation-based classification (SRC) method. Experimental results on the Japanese Female Facial Expression (JAFFE) database demonstrate that the new algorithm achieves a much higher recognition rate than the traditional algorithms.
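The five-scale, eight-orientation Gabor bank can be generated from the standard 2D Gabor formula; the wavelength, sigma, aspect-ratio, and kernel-size settings below are common illustrative defaults, not the paper's.

```python
import numpy as np

def gabor_kernel(ksize, theta, lam, sigma=2.0, gamma=0.5, psi=0.0):
    """Real part of a 2D Gabor kernel: a Gaussian envelope modulating
    a cosine carrier at orientation theta and wavelength lam."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam + psi)

# Bank of 5 scales x 8 orientations, as in the paper
bank = [gabor_kernel(15, o * np.pi / 8, lam)
        for lam in (4, 6, 8, 10, 12)
        for o in range(8)]
```

Convolving a face image with each of the 40 kernels yields the Gabor feature images from which the CBP codes are then computed.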


Author(s):  
Padmapriya K.C. ◽  
Leelavathy V. ◽  
Angelin Gladston

Human facial expressions convey a lot of information visually. Facial expression recognition plays a crucial role in the area of human-machine interaction, and automatic facial expression recognition systems have many applications, including human behavior understanding, detection of mental disorders, and synthetic human expressions. Recognition of facial expressions by computer with a high recognition rate is still a challenging task. Most of the methods in the literature for automatic facial expression recognition are based on geometry and appearance. Facial expression recognition is usually performed in four stages: pre-processing, face detection, feature extraction, and expression classification. In this paper we apply various deep learning methods to classify the seven key human emotions: anger, disgust, fear, happiness, sadness, surprise, and neutrality. The facial expression recognition system developed is experimentally evaluated on the FER dataset and achieves good accuracy.
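The four-stage pipeline described (pre-processing, face detection, feature extraction, expression classification) can be laid out as a minimal skeleton. Every function body here is a deliberately trivial placeholder, shown only to make the stage boundaries and data handed between them concrete.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutrality"]

def preprocess(img):
    """Stage 1: scale pixel values to [0, 1]."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-9)

def detect_face(img):
    """Stage 2: placeholder detector, returns the whole frame as the box."""
    h, w = img.shape
    return (0, 0, h, w)

def extract_features(img, box):
    """Stage 3: toy features, per-quadrant mean intensities of the crop."""
    y, x, h, w = box
    face = img[y:y + h, x:x + w]
    qh, qw = h // 2, w // 2
    return np.array([face[:qh, :qw].mean(), face[:qh, qw:].mean(),
                     face[qh:, :qw].mean(), face[qh:, qw:].mean()])

def classify(features, weights):
    """Stage 4: linear scorer over the 7 emotions; weights shape (7, 4)."""
    return EMOTIONS[int(np.argmax(weights @ features))]
```

In the paper's systems, stages 2-4 would be replaced by a trained face detector and a deep network; the skeleton only fixes the interfaces between stages.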

