Differences in Facial Expression Recognition Between Unipolar and Bipolar Depression

2021 ◽  
Vol 12 ◽  
Author(s):  
Ma Ruihua ◽  
Zhao Meng ◽  
Chen Nan ◽  
Liu Panqi ◽  
Guo Hua ◽  
...  

Purpose: To explore the differences in facial emotion recognition among patients with unipolar depression (UD), patients with bipolar depression (BD), and normal controls.

Methods: Thirty patients with UD and 30 patients with BD were recruited at Zhumadian Second People’s Hospital from July 2018 to August 2019. Fifteen pairs of facial expressions drawn from happiness, sadness, anger, surprise, fear, and disgust were presented for identification.

Results: A single-factor ANOVA on the facial expression recognition results of the three groups found differences in the happy-sad (P = 0.009), happy-angry (P = 0.001), happy-surprised (P = 0.034), and disgust-surprised (P = 0.038) expression pairs. Independent-samples t-tests showed that, compared with the normal control group, patients with BD differed in the happy-sad (P = 0.009) and happy-angry (P = 0.009) pairs, with lower recognition accuracy than the normal control group. Compared with patients with UD, patients with BD differed in the happy-sad (P = 0.005) and happy-angry (P = 0.002) pairs, and the identification accuracy of patients with UD was higher than that of patients with BD. Recognition times in the normal control group were shorter than in the patient groups. Using the happy-sad pair to distinguish unipolar from bipolar depression, the area under the ROC curve (AUC) was 0.933, the specificity was 0.889, and the sensitivity was 0.667. Using the happy-angry pair, the AUC was 0.733, the specificity was 0.778, and the sensitivity was 0.600.

Conclusion: Patients with UD showed lower accuracy in recognizing negative expressions and longer recognition times; patients with BD showed lower accuracy in recognizing positive expressions and longer recognition times. Rapid facial expression recognition performance may serve as a potential endophenotype for early differentiation of unipolar and bipolar depression.
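The ROC statistics reported above (AUC, specificity, sensitivity) can be reproduced mechanically from per-subject scores and diagnostic labels. A minimal sketch in Python; the scores and labels below are hypothetical, not the study's data:

```python
def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a randomly chosen positive outscores a randomly chosen negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold):
    """Sensitivity = true-positive rate among positives;
    specificity = true-negative rate among negatives."""
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < threshold)
    return tp / labels.count(1), tn / labels.count(0)

# Hypothetical data: label 1 = bipolar depression, 0 = unipolar.
# The score is the happy-sad recognition *error* rate, since patients
# with bipolar depression showed lower accuracy in the study above.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.8, 0.7, 0.6, 0.4, 0.5, 0.3, 0.2, 0.1]
auc = roc_auc(labels, scores)                  # 0.9375 on this toy data
sens, spec = sens_spec(labels, scores, 0.55)   # 0.75, 1.0
```

Sweeping the threshold over all observed scores traces the full ROC curve; the reported specificity/sensitivity pair corresponds to one operating point on it.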

2020 ◽  
Vol 7 (9) ◽  
pp. 190699
Author(s):  
Sarah A. H. Alharbi ◽  
Katherine Button ◽  
Lingshan Zhang ◽  
Kieran J. O'Shea ◽  
Vanessa Fasolt ◽  
...  

Evidence that affective factors (e.g. anxiety, depression, affect) are significantly related to individual differences in emotion recognition is mixed. Palermo et al. (2018, J. Exp. Psychol. Hum. Percept. Perform. 44, 503–517) reported that individuals who scored lower in anxiety performed significantly better on two measures of facial-expression recognition (emotion-matching and emotion-labelling tasks), but not on a third (the multimodal emotion recognition test). By contrast, facial-expression recognition was not significantly correlated with measures of depression, positive or negative affect, empathy, or autistic-like traits. Because its range of affective factors and its use of multiple expression-recognition tasks make that study a relatively comprehensive investigation of the role of affective factors in facial expression recognition, we carried out a direct replication. Consistent with Palermo et al., scores on the DASS anxiety subscale negatively predicted performance on the emotion recognition tasks across multiple analyses, although these correlations were only consistently significant for performance on the emotion-labelling task. However, and in contrast with Palermo et al., other affective factors (e.g. those related to empathy) often also significantly predicted emotion-recognition performance. Collectively, these results support the proposal that affective factors predict individual differences in emotion recognition, but that these correlations are not necessarily specific to measures of general anxiety, such as the DASS anxiety subscale.
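Analyses like these reduce to correlating a trait score with task performance. A minimal Pearson-correlation sketch, with hypothetical DASS-anxiety and emotion-labelling scores (not the replication's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: DASS anxiety subscale vs emotion-labelling accuracy.
anxiety   = [2, 5, 8, 11, 14, 17]
labelling = [0.92, 0.90, 0.85, 0.84, 0.80, 0.78]
r = pearson_r(anxiety, labelling)  # negative: higher anxiety, lower accuracy
```

In practice the replication's "multiple analyses" would also involve significance tests and covariates, which this sketch omits.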


2019 ◽  
Vol 41 (2) ◽  
pp. 159-166 ◽  
Author(s):  
Ana Julia de Lima Bomfim ◽  
Rafaela Andreas dos Santos Ribeiro ◽  
Marcos Hortes Nisihara Chagas

Abstract

Introduction: The recognition of facial expressions of emotion is essential to living in society. However, individuals with major depression tend to interpret ambiguous information in a negative light, which can directly affect their capacity to decode social stimuli.

Objective: To compare basic facial expression recognition skills during tasks with static and dynamic stimuli in older adults with and without major depression.

Methods: Older adults were selected through a screening process for psychiatric disorders at a primary care service. Psychiatric evaluations were performed using criteria from the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5). Twenty-three older adults with a diagnosis of depression and 23 older adults without a psychiatric diagnosis were asked to perform two facial emotion recognition tasks, one using static and one using dynamic stimuli.

Results: Individuals with major depression demonstrated greater accuracy in recognizing sadness (p=0.023) and anger (p=0.024) in the task with static stimuli, and lower accuracy in recognizing happiness in the task with dynamic stimuli (p=0.020). The impairment was mainly related to the recognition of emotions of lower intensity.

Conclusions: The performance of older adults with depression in facial expression recognition tasks with static and dynamic stimuli differs from that of older adults without depression, with greater accuracy for negative emotions (sadness and anger) and lower accuracy for the recognition of happiness.
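Group comparisons of recognition accuracy like those above are commonly run as independent-samples t-tests; the abstract does not state the exact variant, so this sketch shows Welch's unequal-variance form with hypothetical accuracies:

```python
import math

def welch_t(a, b):
    """Welch's unequal-variance t statistic and degrees of freedom
    for comparing the means of two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom:
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical sadness-recognition accuracies (proportion correct):
depressed = [0.85, 0.90, 0.80, 0.95, 0.88]
controls  = [0.70, 0.75, 0.65, 0.72, 0.68]
t, df = welch_t(depressed, controls)   # t > 0: depressed group more accurate
```

The p-value would come from the t distribution with `df` degrees of freedom, which needs a CDF routine not included here.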


2021 ◽  
Vol 12 ◽  
Author(s):  
Ma Ruihua ◽  
Guo Hua ◽  
Zhao Meng ◽  
Chen Nan ◽  
Liu Panqi ◽  
...  

Objective: Considerable evidence has shown that facial expression recognition ability and cognitive function are impaired in patients with depression. We aimed to investigate the relationship between facial expression recognition and cognitive function in patients with depression.

Methods: A total of 51 participants (31 patients with depression and 20 healthy control subjects) underwent facial expression recognition tests measuring anger, fear, disgust, sadness, happiness, and surprise. The Chinese version of the MATRICS Consensus Cognitive Battery (MCCB), which assesses seven cognitive domains, was used.

Results: Compared with the control group, the depression group differed in the recognition of sadness (p = 0.036), happiness (p = 0.041), and disgust (p = 0.030). In terms of cognitive function, the scores of patients with depression on the Trail Making Test (TMT; p < 0.001), symbol coding (p < 0.001), spatial span (p < 0.001), mazes (p = 0.007), the Brief Visuospatial Memory Test (BVMT; p = 0.001), category fluency (p = 0.029), and the continuous performance test (p = 0.001) were significantly lower than those of the control group. The accuracy of sadness and disgust expression recognition in patients with depression was significantly positively correlated with cognitive function scores. Deficits in sadness expression recognition were significantly correlated with the TMT (p = 0.001, r = 0.561), symbol coding (p = 0.001, r = 0.596), mazes (p = 0.015, r = 0.439), and the BVMT (p = 0.044, r = 0.370). Deficits in disgust expression recognition were significantly correlated with impairments in the TMT (p = 0.005, r = 0.501) and symbol coding (p = 0.001, r = 0.560).

Conclusion: Since cognitive function is impaired in patients with depression, the ability to recognize negative facial expressions declines, mainly reflected in processing speed, reasoning, problem-solving, and memory.


2019 ◽  
Vol 2 (1) ◽  
Author(s):  
Catarina Iria ◽  
Rui Paixão ◽  
Fernando Barbosa

It is unknown whether Portuguese participants identify expressions from the NimStim data set, which was created in America to provide facial expressions recognizable by untrained people, as accurately as Americans do. To test this, the performance of Portuguese participants in recognizing the NimStim facial expressions of happiness, surprise, sadness, fear, disgust, and anger was compared with that of Americans; no significant differences were found. In both populations, happiness was the easiest emotion to identify, while fear was the most difficult. However, with the exception of surprise, the Portuguese tended to show lower accuracy rates for all the emotions studied. The results highlight some cultural differences.


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Bin Jiang ◽  
Qiuwen Zhang ◽  
Zuhe Li ◽  
Qinggang Wu ◽  
Huanlong Zhang

Abstract

Methods using salient facial patches (SFPs) play a significant role in research on facial expression recognition. However, most SFP methods use only frontal face images or videos for recognition, and they do not consider head position variations. We contend that SFP can be an effective approach for recognizing facial expressions under different head rotations. Accordingly, we propose an algorithm, called profile salient facial patches (PSFP), to achieve this objective. First, to detect facial landmarks and estimate head poses from profile face images, a tree-structured part model is used for pose-free landmark localization. Second, to obtain the salient facial patches from profile face images, the facial patches are selected using the detected facial landmarks while avoiding their overlap or extension beyond the actual face range. To analyze the PSFP recognition performance, three classical approaches for local feature extraction, specifically the histogram of oriented gradients (HOG), local binary pattern, and Gabor, were applied to extract profile facial expression features. Experimental results on the Radboud Faces Database show that PSFP with HOG features can achieve higher accuracies under most head rotations.
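The HOG descriptor named above summarizes a patch by histogramming its gradient orientations. A minimal single-cell sketch (no block normalization, hypothetical patch values; not the authors' implementation):

```python
import math

def orientation_histogram(patch, bins=9):
    """HOG-style cell descriptor: histogram of gradient orientations over a
    grayscale patch, weighted by gradient magnitude. Unsigned orientations
    (0-180 degrees); block normalization is omitted for brevity."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // (180.0 / bins)) % bins] += mag
    return hist

# Hypothetical 4x4 grayscale patch containing a vertical edge:
patch = [[0, 0, 255, 255] for _ in range(4)]
hist = orientation_histogram(patch)
# Energy concentrates in the 0-degree bin (horizontal gradient, vertical edge).
```

A full HOG descriptor concatenates many such cell histograms and normalizes them over overlapping blocks; the PSFP pipeline would compute this per selected facial patch.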


2020 ◽  
Author(s):  
Bin Jiang ◽  
Qiuwen Zhang ◽  
Zuhe Li ◽  
Qinggang Wu ◽  
Huanlong Zhang

Abstract

Methods using salient facial patches (SFP) play a significant role in research on facial expression recognition. However, most SFP methods use only frontal face images or videos for recognition, and do not consider variations in head position. In our view, SFP can also be a good choice for recognizing facial expressions under different head rotations, and we therefore propose an algorithm for this purpose, called Profile Salient Facial Patches (PSFP). First, to detect facial landmarks from profile face images, the tree-structured part model is used for pose-free landmark localization; this approach excels at detecting facial landmarks and estimating head poses. Second, to obtain the salient facial patches from profile face images, the facial patches are selected using the detected facial landmarks, while avoiding overlap with each other or going beyond the range of the actual face. To analyze the recognition performance of PSFP, three classical approaches for local feature extraction, the histogram of oriented gradients (HOG), local binary pattern (LBP), and Gabor, were applied to extract profile facial expression features. Experimental results on the Radboud Faces Database show that PSFP with HOG features can achieve higher accuracies under most head rotations.


2020 ◽  
Vol 13 (4) ◽  
pp. 527-543
Author(s):  
Wenjuan Shen ◽  
Xiaoling Li

Purpose: In recent years, facial expression recognition has been widely used in human-machine interaction, clinical medicine, and safe driving. However, conventional recurrent neural networks can only learn the time-series characteristics of expressions from one-way propagation of information.

Design/methodology/approach: To overcome this limitation, this paper proposes a novel model based on bidirectional gated recurrent unit networks (Bi-GRUs) with two-way propagation, and identity-mapping residuals are adopted to prevent the vanishing-gradient problem caused by network depth. Because the Inception-V3 network used for spatial feature extraction has too many parameters, it is prone to overfitting during training; this paper therefore adds two reduction modules to reduce the parameter count, yielding an Inception-W network with better generalization.

Findings: The proposed model is pretrained to determine the best settings and selections, then evaluated on the CK+ and Oulu-CASIA facial expression data sets, and its recognition performance and efficiency are compared with existing methods. The highest recognition rate is 99.6%, which shows that the method has good recognition accuracy within a certain range.

Originality/value: The high recognition accuracy and robust results with lower time consumption of the proposed model will help build more sophisticated facial expression applications in the real world.
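A GRU processes a sequence in one direction only; a Bi-GRU runs it both forward and backward and combines the two hidden states, which is the two-way propagation described above. A toy scalar sketch with made-up weights (the paper's actual model is far larger and adds residual connections and Inception-W features):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h, x, w):
    """One GRU step for scalar input and hidden state.
    w maps each gate to its (input, recurrent, bias) weights."""
    z = sigmoid(w["z"][0] * x + w["z"][1] * h + w["z"][2])          # update gate
    r = sigmoid(w["r"][0] * x + w["r"][1] * h + w["r"][2])          # reset gate
    n = math.tanh(w["n"][0] * x + w["n"][1] * (r * h) + w["n"][2])  # candidate
    return (1 - z) * h + z * n

def bi_gru(seq, w_fwd, w_bwd):
    """Bidirectional GRU: run the sequence forward and backward,
    returning the two final hidden states (normally concatenated)."""
    hf = hb = 0.0
    for x in seq:
        hf = gru_step(hf, x, w_fwd)
    for x in reversed(seq):
        hb = gru_step(hb, x, w_bwd)
    return hf, hb

# Hypothetical shared weights and a toy per-frame feature sequence:
w = {"z": (1.0, 0.5, 0.0), "r": (1.0, 0.5, 0.0), "n": (1.0, 0.5, 0.0)}
features = [0.1, 0.4, 0.9]   # e.g. one scalar feature per video frame
hf, hb = bi_gru(features, w, w)
```

The two directions see the sequence in opposite orders, so their final states differ; a classifier head reads the concatenated pair, giving it context from both past and future frames.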


2018 ◽  
Vol 31 (2) ◽  
pp. e000014 ◽  
Author(s):  
Chengqing Yang ◽  
Ansi Qi ◽  
Huangfang Yu ◽  
Xiaofeng Guan ◽  
Jijun Wang ◽  
...  

Background: The impairment of facial expression recognition has become a biomarker for early identification of first-episode schizophrenia, and research of this kind is increasing.

Aims: To explore differences in brain-area activation during recognition of different degrees of disgusted facial expression in antipsychotic-naïve patients with first-episode schizophrenia and healthy controls.

Methods: Facial expression recognition tests were performed on 30 first-episode, antipsychotic-naïve patients with schizophrenia (patient group) and 30 healthy subjects (control group) matched for age, educational attainment, and gender. Functional MRI was used to compare the differences in activated brain areas between the two groups.

Results: The difference in average response time between the patient group and the control group in the ‘high degree of disgust’ facial expression recognition task was statistically significant (1.359 (0.408)/2.193 (0.625), F=26.65, p<0.001), and the correct recognition rate of the patient group was lower than that of the control group (41.05 (22.25)/59.84 (13.91), F=19.81, p<0.001). Compared with the control group, in ‘high degree of disgust’ emotion recognition the patients with first-episode schizophrenia showed negative activation in the left thalamus, right lingual gyrus, and right middle temporal gyrus, and significant activation in the left and right middle temporal gyrus and the right caudate nucleus. However, there was no significant activation difference in ‘low degree of disgust’ recognition.

Conclusions: In patients with first-episode schizophrenia, the areas of facial recognition impairment differ significantly across different degrees of disgust facial expression recognition.


Author(s):  
ZHENGYOU ZHANG

In this paper, we report our experiments on feature-based facial expression recognition within an architecture based on a two-layer perceptron. We investigate the use of two types of features extracted from face images: the geometric positions of a set of fiducial points on a face, and a set of multiscale, multiorientation Gabor wavelet coefficients at these points. They can be used either independently or jointly. A comparison of recognition performance across feature types shows that Gabor wavelet coefficients are much more powerful than geometric positions. Furthermore, since the first layer of the perceptron performs a nonlinear reduction of the dimensionality of the feature space, we have also studied the number of hidden units required, i.e. the appropriate dimension for representing a facial expression in order to achieve a good recognition rate. It turns out that five to seven hidden units are probably enough to represent the space of facial expressions. We have then investigated the importance of each individual fiducial point to facial expression recognition. Sensitivity analysis reveals that points on the cheeks and forehead carry little useful information. After discarding them, not only does computational efficiency increase, but generalization performance also improves slightly. Finally, we have studied the significance of image scales. Experiments show that facial expression recognition is mainly a low-frequency process, and a spatial resolution of 64 × 64 pixels is probably enough.
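The two-layer perceptron described above maps a feature vector through a small hidden layer, which performs the nonlinear dimensionality reduction, to one output per expression class. A minimal forward-pass sketch with hypothetical layer sizes and random weights (not the paper's trained network):

```python
import math
import random

def forward(x, w1, b1, w2, b2):
    """Two-layer perceptron: feature vector -> small hidden layer (tanh)
    -> one score per expression class, normalized with softmax."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    logits = [sum(wi * hi for wi, hi in zip(row, hidden)) + b
              for row, b in zip(w2, b2)]
    exps = [math.exp(l - max(logits)) for l in logits]  # stable softmax
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)
# Hypothetical sizes: a small feature vector, a hidden bottleneck of 6
# units (the paper finds 5-7 suffice), and 7 expression classes.
n_features, n_hidden, n_classes = 12, 6, 7
w1 = [[random.uniform(-1, 1) for _ in range(n_features)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
w2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_classes)]
b2 = [0.0] * n_classes
x = [random.uniform(-1, 1) for _ in range(n_features)]
probs = forward(x, w1, b1, w2, b2)   # class probabilities summing to 1
```

The hidden activations themselves are the low-dimensional expression representation whose size the paper's hidden-unit study probes.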

