Improved Performance in Facial Expression Recognition Using 32 Geometric Features

Author(s):
Giuseppe Palestra, Adriana Pettinicchio, Marco Del Coco, Pierluigi Carcagnì, Marco Leo, ...

Author(s):
Michael Thiruthuvanathan, Balachandran Krishnan

Recognizing facial features to detect emotions has long been an active research topic in computer vision and cognitive emotional analysis. This work explores a model that detects and classifies emotions using a Deep Convolutional Neural Network (DCNN). The model classifies the primary emotions (anger, disgust, fear, happiness, sadness, surprise, and neutral) through a progressive learning approach for a Facial Expression Recognition (FER) system. The proposed model (EmoNet) is built on a linear growing-shrinking filter method that extracts robust features for learning and yields improved classification accuracy. EmoNet incorporates Progressive Resizing (PR) of images, which improves learning from emotional datasets by adding more image data for training and validation and raised the model's accuracy by 5%. Cross-validation was carried out on the model, preparing it for testing on new data. EmoNet shows improved accuracy, precision, and recall owing to the progressive learning framework, hyperparameter tuning, image augmentation, and moderation of generalization error and bias. These metrics are compared against existing emotion-analysis models on the datasets most prominently available for research. Together, the methods, image data, and fine-tuned model achieved accuracies of 83.6%, 78.4%, 98.1%, and 99.5% on FER2013, IMFDB, CK+, and JAFFE respectively. Across the four datasets, EmoNet achieved an overall accuracy of 90%.
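The Progressive Resizing idea mentioned above can be sketched as a staged training schedule in which each stage feeds the network images at a larger resolution than the last, so early stages learn coarse features cheaply and later stages refine them at full resolution. The stage count, image sizes, and function names below are illustrative assumptions, not values or code from the paper:

```python
# Hypothetical sketch of a Progressive-Resizing (PR) training schedule.
# The sizes and stage count are assumptions for illustration only.

def progressive_resizing_schedule(start=48, target=96, stages=3):
    """Return the image side length used at each training stage,
    growing linearly from `start` to `target`."""
    if stages == 1:
        return [target]
    step = (target - start) / (stages - 1)
    return [round(start + i * step) for i in range(stages)]

def train_with_progressive_resizing(train_stage, schedule):
    """Run one training pass per stage, resizing inputs to the stage's
    resolution; `train_stage(size)` is a caller-supplied training step
    (e.g. a few epochs at that resolution)."""
    for size in schedule:
        train_stage(size)

# Example: three stages growing from 48x48 toward 96x96 input images.
sizes = progressive_resizing_schedule(48, 96, 3)
print(sizes)  # → [48, 72, 96]
```

In practice each stage would reuse the weights learned at the previous, smaller resolution, which is what lets the network accumulate features progressively rather than learning from scratch at full size.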
