Survey Paper on Gender and Emotion Classification using Facial Expression Detection

2020
Author(s): Aishwarya Gupta, Devashish Sharma, Shaurya Sharma, Anushree Agarwal
2005, Vol 58 (7), pp. 1173-1197
Author(s): Naomi C. Carroll, Andrew W. Young

Four experiments investigated priming of emotion recognition using a range of emotional stimuli, including facial expressions, words, pictures, and nonverbal sounds. In each experiment, a prime–target paradigm was used with related, neutral, and unrelated pairs. In Experiment 1, facial expression primes preceded word targets in an emotion classification task. A pattern of priming of emotional word targets by related primes, with no inhibition from unrelated primes, was found. Experiment 2 reversed these primes and targets and found the same pattern of results, demonstrating bidirectional priming between facial expressions and words. Experiment 2 also found priming of facial expression targets by picture primes. Experiment 3 demonstrated that priming occurs not just between pairs of stimuli that co-occur frequently in the environment (for example, nonverbal sounds and facial expressions), but also between stimuli that co-occur less frequently and are linked mainly by their emotional category (for example, nonverbal sounds and printed words). This shows the importance of the prime and target sharing a common emotional category, rather than of their previous co-occurrence. Experiment 4 extended the findings by showing that there are category-based effects as well as valence effects in emotional priming, supporting a categorical view of emotion recognition.
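The prime–target logic described above can be summarized with a small analysis sketch. The snippet below is purely illustrative (the reaction-time values, condition names, and effect definitions are assumptions, not data from the study): it treats facilitation as the speed-up for related relative to neutral pairs and inhibition as the slow-down for unrelated relative to neutral pairs.

```python
# Illustrative sketch of how priming effects in a prime-target emotion
# classification task could be summarized. All numbers here are made up.
import statistics

# Hypothetical reaction times (ms) per prime-target condition.
trials = {
    "related":   [512, 498, 530, 505, 521],   # e.g., happy face prime -> "joy" target
    "neutral":   [548, 539, 555, 542, 551],   # e.g., neutral prime -> "joy" target
    "unrelated": [550, 561, 544, 558, 547],   # e.g., angry face prime -> "joy" target
}

means = {cond: statistics.mean(rts) for cond, rts in trials.items()}

# Facilitation: faster responses after a related prime than after a neutral one.
facilitation = means["neutral"] - means["related"]
# Inhibition: slower responses after an unrelated prime than after a neutral one.
inhibition = means["unrelated"] - means["neutral"]

print(f"facilitation: {facilitation:.1f} ms, inhibition: {inhibition:.1f} ms")
```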


Facial Expression Recognition (FER) has gained significant importance in the research field of Affective Computing. Aiming to improve the accuracy of the recognition system while reducing the computational load, a region-based FER approach is proposed in this paper. The system identifies the basic emotions through subject-independent template matching based on gradient directions. The model is evaluated on the Extended Cohn-Kanade (CK+) dataset. Another important contribution of the work is the use of only the eye region (including the eyebrows and the portion of the nose near the eyes) and the mouth region for emotion recognition. The emotion classification accuracy is 94.3% on the CK+ dataset for 6-class FER.
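As a rough illustration of matching gradient directions within eye and mouth regions, the sketch below builds an orientation histogram per region and assigns the emotion whose stored template descriptor is most similar. The region coordinates, histogram size, similarity measure, and synthetic data are assumptions for the example, not the paper's actual implementation.

```python
# Minimal sketch of region-based FER via gradient-direction templates.
# Region boxes, bin count, and the similarity metric are illustrative choices only.
import numpy as np

def orientation_histogram(region, bins=16):
    """Normalized histogram of gradient directions within an image region."""
    gy, gx = np.gradient(region.astype(float))
    angles = np.arctan2(gy, gx)                       # directions in [-pi, pi]
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / (hist.sum() + 1e-8)

def describe(face, eye_box, mouth_box):
    """Concatenate orientation histograms of the eye and mouth regions."""
    (ey0, ey1, ex0, ex1), (my0, my1, mx0, mx1) = eye_box, mouth_box
    eyes = orientation_histogram(face[ey0:ey1, ex0:ex1])
    mouth = orientation_histogram(face[my0:my1, mx0:mx1])
    return np.concatenate([eyes, mouth])

def classify(descriptor, templates):
    """Return the emotion whose template descriptor is closest (cosine similarity)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return max(templates, key=lambda emo: cos(descriptor, templates[emo]))

# Usage with a synthetic 96x96 "face"; real templates would be averaged
# descriptors per emotion computed from training images.
rng = np.random.default_rng(0)
face = rng.random((96, 96))
eye_box, mouth_box = (20, 45, 10, 86), (60, 85, 25, 71)
templates = {emo: rng.random(32) for emo in
             ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]}
print(classify(describe(face, eye_box, mouth_box), templates))
```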


2012, Vol 3 (1), pp. 18-32
Author(s): Marcello Mortillaro, Ben Meuleman, Klaus R. Scherer

Most models of automatic emotion recognition use a discrete perspective and a black-box approach, i.e., they output an emotion label chosen from a limited pool of candidate terms on the basis of purely statistical methods. Although these models are successful in emotion classification, a number of practical and theoretical drawbacks limit the range of possible applications. In this paper, the authors suggest adopting an appraisal perspective in modeling emotion recognition. The authors propose to use appraisals as an intermediate layer between expressive features (input) and emotion labeling (output). The model would then be made of two parts: first, expressive features would be used to estimate appraisals; second, the resulting appraisals would be used to predict an emotion label. While the second part of the model has already been the object of several studies, the first remains unexplored. The authors argue that this model should be built on the basis of both theoretical predictions and empirical results about the link between specific appraisals and expressive features. For this purpose, the authors suggest using the component process model of emotion, which includes detailed predictions of the efferent effects of appraisals on facial expression, voice, and body movements.
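A minimal sketch of the proposed two-layer architecture, assuming scikit-learn and entirely synthetic data: a first regressor maps expressive features to appraisal dimensions, and a second classifier maps the predicted appraisals to an emotion label. The feature and appraisal dimensionalities, model choices, and data are assumptions for illustration, not the authors' implementation.

```python
# Sketch of an appraisal layer between expressive features and emotion labels.
# Feature/appraisal dimensionality, models, and data are illustrative only.
import numpy as np
from sklearn.linear_model import Ridge, LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: 200 samples of expressive features (e.g., facial
# action units, vocal parameters), 4 appraisal dimensions (e.g., novelty,
# pleasantness, goal conduciveness, coping potential), and an emotion label.
X_features = rng.normal(size=(200, 12))
Y_appraisals = X_features @ rng.normal(size=(12, 4)) + 0.1 * rng.normal(size=(200, 4))
y_emotion = rng.integers(0, 5, size=200)          # 5 emotion categories

# Stage 1: expressive features -> appraisals (multi-output regression).
appraisal_model = Ridge().fit(X_features, Y_appraisals)

# Stage 2: predicted appraisals -> emotion label (classification).
emotion_model = LogisticRegression(max_iter=1000).fit(
    appraisal_model.predict(X_features), y_emotion)

# Inference on a new sample: features -> estimated appraisals -> emotion label.
new_sample = rng.normal(size=(1, 12))
estimated_appraisals = appraisal_model.predict(new_sample)
print(emotion_model.predict(estimated_appraisals))
```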

