Emotional Facial Expression Recognition Task

2010 ◽  
Author(s):  
M. Fischer-Shofty ◽  
S. G. Shamay-Tsoory ◽  
H. Harari ◽  
Y. Levkovitz
Fractals ◽  
2002 ◽  
Vol 10 (01) ◽  
pp. 47-52 ◽  
Author(s):  
Takuma Takehara ◽  
Fumio Ochiai ◽  
Naoto Suzuki

Following Mandelbrot's theory of fractals, many shapes and phenomena in nature have been suggested to be fractal; even animal behavior and human physiological responses can be represented as fractal. Here, we show evidence that the concept of fractals can be applied even to facial expression recognition, one of the most important components of human recognition. Rating data derived from judging morphed facial images were represented in a two-dimensional psychological space by multidimensional scaling of four different scales. The perimeter of the resulting emotion circumplex structure fluctuated and was judged to have a fractal dimension of 1.18: the smaller the unit of measurement, the longer the measured perimeter of the circumplex. In this study, we provide evidence of fractality that is important across disciplines, through its application to facial expression recognition.
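
The perimeter-versus-ruler relationship described here is the classic Richardson (divider) analysis. Below is a minimal sketch of that estimate in Python, assuming NumPy; the synthetic jagged closed curve stands in for the circumplex perimeter, since the paper's actual MDS coordinates are not reproduced here.

```python
# Sketch of a Richardson (divider) estimate of a closed curve's fractal
# dimension. The curve is synthetic: a radius-1 circle plus power-law
# random harmonics, standing in for the circumplex perimeter.
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)

k = np.arange(2, 200)
amps = rng.standard_normal(k.size) * k ** -1.2
phases = rng.uniform(0, 2 * np.pi, k.size)
r = 1 + 0.2 * (amps[:, None] * np.cos(k[:, None] * theta + phases[:, None])).sum(axis=0)
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

def perimeter(points: np.ndarray, step: int) -> tuple[float, float]:
    """Measure the closed curve using every `step`-th vertex as ruler
    endpoints; return (mean ruler length, total measured length)."""
    p = points[::step]
    seg = np.linalg.norm(np.diff(np.vstack([p, p[:1]]), axis=0), axis=1)
    return seg.mean(), seg.sum()

eps, lengths = zip(*(perimeter(pts, s) for s in [1, 2, 4, 8, 16, 32]))
# Richardson plot: log L ~ const + (1 - D) * log eps, so D = 1 - slope.
slope, _ = np.polyfit(np.log(eps), np.log(lengths), 1)
print(f"estimated fractal dimension D = {1 - slope:.2f}")
# Smaller rulers yield longer perimeters, exactly as the abstract notes.
```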


2005 ◽  
Vol 50 (9) ◽  
pp. 525-533 ◽  
Author(s):  
Benoit Bediou ◽  
Pierre Krolak-Salmon ◽  
Mohamed Saoud ◽  
Marie-Anne Henaff ◽  
Michael Burt ◽  
...  

Background: Impaired facial expression recognition in schizophrenia patients contributes to abnormal social functioning and may predict functional outcome in these patients. Facial expression processing involves individual neural networks that have been shown to malfunction in schizophrenia. Whether these patients have a selective deficit in facial expression recognition or a more global impairment in face processing remains controversial. Objective: To investigate whether patients with schizophrenia exhibit a selective impairment in facial emotional expression recognition, compared with patients with major depression and healthy control subjects. Methods: We studied performance in facial expression recognition and facial sex recognition paradigms, using original morphed faces, in a population with schizophrenia (n = 29) and compared their scores with those of depression patients (n = 20) and control subjects (n = 20). Results: Schizophrenia patients achieved lower scores than both other groups in the expression recognition task, particularly in fear and disgust recognition. Sex recognition was unimpaired. Conclusion: Facial expression recognition is impaired in schizophrenia, whereas sex recognition is preserved, which strongly suggests abnormal processing of changeable facial features in this disease. A dysfunction of top-down retrograde modulation from limbic and paralimbic structures onto visual areas is hypothesized.


2019 ◽  
Vol 16 (04) ◽  
pp. 1941002 ◽  
Author(s):  
Jing Li ◽  
Yang Mi ◽  
Gongfa Li ◽  
Zhaojie Ju

Facial expression recognition has been widely used in human-computer interaction (HCI) systems. Over the years, researchers have proposed different feature descriptors, implemented different classification methods, and carried out a number of experiments on various datasets for automatic facial expression recognition. However, most of them used 2D static images or 2D video sequences for the recognition task. The main limitations of 2D-based analysis are problems associated with variations in pose and illumination, which reduce recognition accuracy. An alternative is therefore to incorporate depth information acquired by a 3D sensor, because it is robust to variations in both pose and illumination. In this paper, we present a two-stream convolutional neural network (CNN)-based facial expression recognition system and test it on our own RGB-D facial expression dataset, collected with a Microsoft Kinect for XBOX in unspontaneous scenarios; Kinect is an inexpensive and portable device that captures both RGB and depth information. Our fully annotated dataset includes seven expressions (i.e., neutral, sadness, disgust, fear, happiness, anger, and surprise) for 15 subjects (9 males and 6 females) aged 20 to 25. The two individual CNNs are identical in architecture but do not share parameters. To combine the detection results produced by these two CNNs, we propose a late-fusion approach. The experimental results demonstrate that the proposed two-stream network using RGB-D images is superior to the same network using only RGB images or only depth images.
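
To make the fusion scheme concrete, here is a minimal sketch in PyTorch: two architecturally identical CNNs with separate parameters, fused late by averaging class scores. The layer sizes, input resolution, and score-averaging rule are illustrative assumptions, not the authors' exact network.

```python
# Sketch of a two-stream CNN with late fusion for RGB-D expression
# recognition (PyTorch). Layer sizes and the fusion rule are placeholders.
import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """One stream: a small CNN over RGB (3-channel) or depth (1-channel) input."""
    def __init__(self, in_channels: int, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))  # per-stream logits

class TwoStreamLateFusion(nn.Module):
    """Two identical architectures with separate parameters; class scores
    from the two streams are fused late, here by simple averaging."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.rgb_stream = StreamCNN(3, num_classes)
        self.depth_stream = StreamCNN(1, num_classes)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        p_rgb = self.rgb_stream(rgb).softmax(dim=1)
        p_depth = self.depth_stream(depth).softmax(dim=1)
        return (p_rgb + p_depth) / 2  # late fusion of the two predictions

# Example: a batch of four 224x224 RGB-D frames.
model = TwoStreamLateFusion(num_classes=7)  # seven expression classes
out = model(torch.randn(4, 3, 224, 224), torch.randn(4, 1, 224, 224))
print(out.shape)  # torch.Size([4, 7])
```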


2014 ◽  
Vol 20 (5) ◽  
pp. 496-505 ◽  
Author(s):  
Laura Alonso-Recio ◽  
Pilar Martín-Plasencia ◽  
Ángela Loeches-Alonso ◽  
Juan M. Serrano-Rodríguez

Facial expression recognition impairment has been reported in Parkinson's disease. While some authors have referred to specific emotional disabilities, others view them as secondary to executive deficits frequently described in the disease, such as working memory deficits. The present study aims to analyze the relationship between working memory and facial expression recognition abilities in Parkinson's disease. We studied 50 patients with Parkinson's disease and 49 healthy controls by means of an n-back procedure with four types of stimuli: emotional facial expressions, gender, spatial locations, and nonsense syllables. Other executive and visuospatial neuropsychological tests were also administered. Results showed that Parkinson's disease patients with high levels of disability performed worse than healthy individuals on the emotional facial expression and spatial location tasks. Moreover, spatial location task performance was correlated with executive neuropsychological scores, but emotional facial expression performance was not. Thus, working memory seems to be altered in Parkinson's disease, particularly in tasks that involve the appreciation of spatial relationships in stimuli. Additionally, a non-executive facial emotion recognition difficulty seems to be present and related to disease progression. (JINS, 2014, 20, 1–10)
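
As an illustration of the paradigm, the sketch below scores a 2-back task in Python: a trial is a target when the current stimulus matches the one presented two trials earlier. The stimulus labels and the 2-back level are assumptions for illustration, not the study's exact procedure.

```python
# Sketch of scoring for an n-back task (here 2-back): respond "match" when
# the current stimulus equals the one shown n trials earlier. Labels are
# illustrative placeholders, not the study's stimuli.
from typing import Sequence

def score_n_back(stimuli: Sequence[str], responses: Sequence[bool], n: int = 2) -> float:
    """Return the proportion of trials answered correctly."""
    correct = 0
    for i, responded_match in enumerate(responses):
        is_target = i >= n and stimuli[i] == stimuli[i - n]
        correct += (responded_match == is_target)
    return correct / len(stimuli)

# Example sequence of facial-expression stimuli:
seq = ["fear", "joy", "fear", "joy", "anger"]
resp = [False, False, True, True, False]  # participant flags matches at trials 3 and 4
print(score_n_back(seq, resp))  # 1.0 -> every trial scored correctly
```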


10.2196/13810 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e13810 ◽  
Author(s):  
Anish Nag ◽  
Nick Haber ◽  
Catalin Voss ◽  
Serena Tamura ◽  
Jena Daniels ◽  
...  

Background: Several studies have shown that facial attention differs in children with autism. Measuring eye gaze and emotion recognition in children with autism is challenging, as standard clinical assessments must be delivered in clinical settings by a trained clinician. Wearable technologies may be able to bring eye gaze and emotion recognition into natural social interactions and settings. Objective: This study aimed to test (1) the feasibility of tracking gaze using wearable smart glasses during a facial expression recognition task and (2) the ability of these gaze-tracking data, together with facial expression recognition responses, to distinguish children with autism from neurotypical controls (NCs). Methods: We compared the eye gaze and emotion recognition patterns of 16 children with autism spectrum disorder (ASD) and 17 children without ASD via wearable smart glasses fitted with a custom eye tracker. Children identified static facial expressions of images presented on a computer screen along with nonsocial distractors while wearing Google Glass and the eye tracker. Faces were presented in three trials, during one of which children received feedback in the form of the correct classification. We employed hybrid human-labeling and computer vision–enabled methods for pupil tracking and world–gaze translation calibration. We analyzed the impact of gaze and emotion recognition features in a prediction task aiming to distinguish children with ASD from NC participants. Results: Gaze and emotion recognition patterns enabled the training of a classifier that distinguished the ASD and NC groups. However, it was unable to significantly outperform other classifiers that used only age and gender features, suggesting that further work is necessary to disentangle these effects. Conclusions: Although wearable smart glasses show promise in identifying subtle differences in gaze tracking and emotion recognition patterns in children with and without ASD, the present form factor and data do not allow for these differences to be reliably exploited by machine learning systems. Resolving these challenges will be an important step toward continuous tracking of the ASD phenotype.
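
The Results describe a feature-set comparison; the sketch below reproduces its shape with scikit-learn. The synthetic features, the logistic-regression classifier, and the cross-validation setup are illustrative assumptions, not the authors' pipeline (and the random data here will score near chance by construction).

```python
# Sketch of the comparison: cross-validated accuracy of a classifier using
# gaze/emotion-recognition features vs. one using age and gender alone.
# All data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 33                                      # 16 ASD + 17 NC participants
y = np.array([1] * 16 + [0] * 17)           # 1 = ASD, 0 = NC

gaze_emotion = rng.normal(size=(n, 6))      # e.g., fixation ratios, per-emotion accuracy
age_gender = np.column_stack([rng.integers(6, 13, n),   # age in years
                              rng.integers(0, 2, n)])   # gender code

for name, X in [("gaze+emotion", gaze_emotion), ("age+gender", age_gender)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(f"{name}: mean CV accuracy {acc.mean():.2f}")
```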


2015 ◽  
Vol 6 ◽  
Author(s):  
Takashi Okada ◽  
Yasutaka Kubota ◽  
Wataru Sato ◽  
Toshiya Murai ◽  
Frédéric Pellion ◽  
...  
