The cerebellum as a moderator of negative bias of facial expression processing in depressive patients

2021 ◽  
Author(s):  
Anna Nakamura ◽  
Yukihito Yomogida ◽  
Miho Ota ◽  
Junko Matsuo ◽  
Ikki Ishida ◽  
...  

Background: Negative bias, a mood-congruent bias in emotion processing, is an important aspect of major depressive disorder (MDD), and such a bias in facial expression recognition has a significant effect on patients' social lives. Neuroscience research shows abnormal activity in emotion-processing systems for facial expressions in MDD. However, the neural basis of negative bias in facial expression processing has not been explored directly. Methods: Sixteen patients with MDD and twenty-three healthy controls (HC) underwent an fMRI scan during an explicit facial emotion task with faces ranging from happy to sad. We identified brain areas in which the MDD and HC groups showed different correlations between behavioral negative bias scores and functional activity. Results: Behavioral data confirmed a higher negative bias in the MDD group. Regarding the relationship with neural activity, higher activity in response to happy faces in the posterior cerebellum was related to a higher negative bias in the MDD group but a lower negative bias in the HC group. Limitations: The sample size was small, and the possible effects of medication were not controlled for. Conclusions: We confirmed a negative bias in the recognition of facial expressions in patients with MDD. The fMRI data suggest the cerebellum as a moderator of facial emotion processing that biases the recognition of facial expressions toward the patient's own mood.

Author(s):  
Mahima Agrawal ◽  
Shubangi D. Giripunje ◽  
P. R. Bajaj

This paper presents an efficient method for recognizing facial expressions in video. The work proposes a facial expression recognition system using PCA optimized by a genetic algorithm; reduced computational time and comparable recognition accuracy are the benchmarks of this work. Video sequences contain more information than still images and capture much more activity during expression actions, hence their current research interest. PCA, a statistical method, is used to reduce dimensionality and extract features: covariance analysis generates the eigen-components of the images. The eigen-component features are then optimized by a genetic algorithm to reduce the computational cost.
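The eigen-component extraction described above can be sketched in a few lines. The data here are random stand-ins for face images, and the fixed binary mask is only a toy illustration of the kind of subset a genetic algorithm would search for, not the authors' actual GA:

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((40, 32 * 32))        # 40 flattened stand-in "face images"

# PCA via SVD of the centered data matrix (equivalent to covariance analysis).
mean_face = faces.mean(axis=0)
U, S, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
eigenfaces = Vt[:10]                     # top 10 eigen-components
features = (faces - mean_face) @ eigenfaces.T   # 10-D feature vector per image

# A GA would evolve binary masks over these components, scoring each mask by
# classifier accuracy; this fixed mask stands in for one selected subset.
mask = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1], dtype=bool)
selected = features[:, mask]             # reduced feature set fed to the classifier
print(features.shape, selected.shape)    # (40, 10) (40, 6)
```

The SVD route avoids forming the large covariance matrix explicitly, which matters when the image dimension far exceeds the number of samples.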


Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 804-816
Author(s):  
Elaf J. Al Taee ◽  
Qasim Mohammed Jasim

A facial expression is a visual impression of a person's situation, emotions, cognitive activity, personality, intention, and psychopathology; it plays an active and vital role in the exchange of information and communication between people. In machines and robots dedicated to communicating with humans, facial expression recognition plays an important role in reading what a person implies, especially in the field of health, so research in this field advances communication with robots. The topic has been discussed extensively, and the progress of deep learning, together with the proven efficiency of convolutional neural networks (CNNs) in image processing, has led to the use of CNNs for facial expression recognition. An automatic facial expression recognition (FER) system must detect and locate faces in a cluttered scene, extract features, and classify them. In this research, a CNN performs the FER process. The target is to label each facial image with one of the seven emotion categories of the JAFFE database: sad, happy, fear, surprise, anger, disgust, and neutral. We trained CNNs with different depths using gray-scale images from the JAFFE database. The accuracy of the proposed system was 100%.
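At the heart of any CNN-based FER pipeline is the 2-D convolution that turns a gray-scale image into feature maps. The sketch below shows that single core operation on toy values (a ramp image and a hand-picked edge filter), not the trained network from the study:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: the basic feature-extraction step of a CNN."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)    # toy 5x5 gray-scale patch
edge_kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])  # toy horizontal-gradient filter
feature_map = np.maximum(conv2d(image, edge_kernel), 0)  # ReLU activation
print(feature_map.shape)   # (4, 4)
```

A real FER network stacks many such layers with learned kernels, pooling, and a final classifier over the seven emotion labels.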


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Gilles Vannuscorps ◽  
Michael Andres ◽  
Alfonso Caramazza

What mechanisms underlie facial expression recognition? A popular hypothesis holds that efficient facial expression recognition cannot be achieved by visual analysis alone but additionally requires a mechanism of motor simulation — an unconscious, covert imitation of the observed facial postures and movements. Here, we first discuss why this hypothesis does not necessarily follow from extant empirical evidence. Next, we report experimental evidence against the central premise of this view: we demonstrate that individuals can achieve normotypical efficient facial expression recognition despite a congenital absence of relevant facial motor representations and, therefore, unaided by motor simulation. This underscores the need to reconsider the role of motor simulation in facial expression recognition.


2021 ◽  
Vol 8 (11) ◽  
Author(s):  
Shota Uono ◽  
Wataru Sato ◽  
Reiko Sawada ◽  
Sayaka Kawakami ◽  
Sayaka Yoshimura ◽  
...  

People with schizophrenia or subclinical schizotypal traits exhibit impaired recognition of facial expressions. However, it remains unclear whether the detection of emotional facial expressions is impaired in people with schizophrenia or high levels of schizotypy. The present study examined whether the detection of emotional facial expressions would be associated with schizotypy in a non-clinical population after controlling for the effects of IQ, age, and sex. Participants were asked to respond to whether all faces were the same as quickly and as accurately as possible following the presentation of angry or happy faces or their anti-expressions among crowds of neutral faces. Anti-expressions contain a degree of visual change that is equivalent to that of normal emotional facial expressions relative to neutral facial expressions and are recognized as neutral expressions. Normal expressions of anger and happiness were detected more rapidly and accurately than their anti-expressions. Additionally, the degree of overall schizotypy was negatively correlated with the effectiveness of detecting normal expressions versus anti-expressions. An emotion–recognition task revealed that the degree of positive schizotypy was negatively correlated with the accuracy of facial expression recognition. These results suggest that people with high levels of schizotypy experienced difficulties detecting and recognizing emotional facial expressions.


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Junhuan Wang

Recognizing facial expressions accurately and efficiently is of great significance to medicine and other fields. To address the low accuracy of traditional face recognition methods, an improved facial expression recognition method is proposed. The method conducts continuous adversarial training between the discriminator and generator of a generative adversarial network (GAN) to enhance the extraction of image features from the detected data set, thereby realizing high-accuracy recognition of facial expressions. To reduce the amount of computation, the GAN generator is improved using the idea of residual networks: the image is first reduced in dimension and then processed, which preserves the high accuracy of the recognition method while improving real-time performance. The experimental part of the paper uses the JAFFE, CK+, and FER2013 data sets for simulation verification. The proposed method shows clear advantages on data sets of different sizes, with average recognition accuracies of 96.6%, 95.6%, and 72.8%, respectively, demonstrating its generalization ability.
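The residual-network idea borrowed for the generator reduces to a skip connection, y = x + F(x): the block only has to learn a correction on top of the identity, which keeps deep generators cheap and trainable. A minimal numpy sketch with random stand-in weights (not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

def residual_block(x, w1, w2):
    """y = x + F(x): the skip connection used to lighten a GAN generator."""
    h = np.maximum(x @ w1, 0)      # inner transform F with ReLU
    return x + h @ w2              # identity shortcut added back

x = rng.standard_normal((4, 8))             # a batch of four 8-D feature vectors
w1 = rng.standard_normal((8, 16)) * 0.1     # toy weights for F
w2 = rng.standard_normal((16, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)   # (4, 8)
```

Note that with zero weights the block is exactly the identity, which is what makes very deep stacks of such blocks easy to optimize.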


Author(s):  
Yanqiu Liang

To solve the problem of emotional loss in teaching and improve the teaching effect, an intelligent teaching method based on facial expression recognition was studied. The traditional active shape model (ASM) was improved to extract facial feature points, and facial expressions were identified from the geometric features of those points using a support vector machine (SVM). In the recognition process, the geometric features and the SVM method were used to generate expression classifiers. Results showed that the SVM method based on the geometric characteristics of facial feature points effectively realized automatic recognition of facial expressions, so the problem of emotional deficiency in intelligent teaching can be addressed.
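The classify-geometry-with-an-SVM step can be sketched as follows. The three "geometric features" (mouth width, mouth-corner lift, eyebrow-to-eye distance) and the two well-separated synthetic clusters are hypothetical stand-ins for ASM-derived measurements, not the study's data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Hypothetical 3-D geometric features per face:
# [mouth width, mouth-corner lift, eyebrow-to-eye distance]
happy = rng.normal([0.8, 0.6, 0.5], 0.05, size=(30, 3))
sad = rng.normal([0.5, 0.2, 0.4], 0.05, size=(30, 3))
X = np.vstack([happy, sad])
y = np.array([1] * 30 + [0] * 30)          # 1 = happy, 0 = sad

clf = SVC(kernel="rbf").fit(X, y)          # expression classifier
pred = clf.predict([[0.82, 0.58, 0.52]])   # a happy-looking feature vector
print(pred)
```

With real ASM landmarks the features would be distances and angles between fitted points, but the classifier interface is the same.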


2018 ◽  
Vol 11 (4) ◽  
pp. 50-69 ◽  
Author(s):  
V.A. Barabanschikov ◽  
O.A. Korolkova ◽  
E.A. Lobodinskaya

We studied the perception of human facial emotional expressions during step-function stroboscopic presentation of changing mimics. Consecutive stages of each of the six basic facial expressions were presented to the participants: neutral face (300 ms), expression of medium intensity (10-40 ms), intense expression (30-120 ms), expression of medium intensity (10-40 ms), neutral face (100 ms). An alternative forced-choice task was used to categorize the facial expressions. The results were compared to previous studies (Barabanschikov, Korolkova, Lobodinskaya, 2015; 2016) conducted in the same paradigm but with boxcar-function change of the expression: neutral face, intense expression, neutral face. We found that the dynamics of facial expression recognition, as well as errors and recognition time, are almost identical under boxcar- and step-function presentation. One factor influencing the recognition rate is the proportion of presentation time of the static (neutral) and changing (facial expression) aspects of the stimulus. Under suboptimal conditions of facial expression perception (minimal presentation time of 10+30+10 ms and reduced intensity of expressions) we revealed stroboscopic sensibilization, a previously described phenomenon of enhanced recognition of low-attractive expressions (disgust, sadness, fear, and anger) found earlier under boxcar-function presentation. We confirmed the similarity of the influence of real and apparent motion on the recognition of basic facial emotional expressions.


2020 ◽  
pp. 103-140
Author(s):  
Yakov A. Bondarenko ◽  
Galina Ya. Menshikova

Background. The study explores two main processes in the perception of facial expression: analytical (perception based on individual facial features) and holistic (holistic, non-additive perception of all features). The relative contribution of each process to facial expression recognition is still an open question. Objective. To identify the role of holistic and analytical mechanisms in facial expression recognition. Methods. A method was developed and tested for studying analytical and holistic processes in a task of evaluating subjective differences between expressions, using composite and inverted facial images. A distinctive feature of the work is the use of multidimensional scaling, by which the contribution of holistic and analytical processes to the perception of facial expressions is judged from the subjective similarity space of expressions obtained when presenting upright and inverted faces. Results. It was shown, first, that when perceiving upright faces, a characteristic clustering of expressions is observed in the subjective similarity space, which we interpret as a predominance of holistic processes; second, with inversion of the face, the spatial configuration of expressions changes in a way that may reflect a strengthening of analytical processes. In general, multidimensional scaling proved effective for addressing the relation between holistic and analytical processes in the recognition of facial expressions. Conclusion. Analysis of subjective similarity spaces of emotional faces is productive for studying the relative contribution of analytical and holistic processes to facial expression recognition.
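The multidimensional-scaling step takes a matrix of pairwise dissimilarity judgments and embeds the expressions in a low-dimensional "subjective space" whose distances mirror the judgments. A minimal sketch with a hypothetical 4x4 dissimilarity matrix (not the study's ratings), using scikit-learn's metric MDS:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical dissimilarity ratings for four expressions:
# happy/surprise judged similar; sad/anger judged similar.
labels = ["happy", "surprise", "sad", "anger"]
D = np.array([[0.0, 1.0, 4.0, 4.5],
              [1.0, 0.0, 4.2, 4.0],
              [4.0, 4.2, 0.0, 1.2],
              [4.5, 4.0, 1.2, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)          # 2-D subjective space of expressions
print(coords.shape)   # (4, 2)
```

Clustering of points in `coords` (happy near surprise, sad near anger) is the kind of structure the authors read as evidence of holistic processing, and its rearrangement under face inversion as a shift toward analytical processing.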


2010 ◽  
Vol 197 (2) ◽  
pp. 156-157 ◽  
Author(s):  
Katie M. Douglas ◽  
Richard J. Porter

Summary: Facial emotion processing was examined in patients with severe depression (n = 68) and a healthy control group (n = 50), using the Facial Expression Recognition Task. A negative interpretation bias was observed in the depression group: neutral faces were more likely to be interpreted as sad and less likely to be interpreted as happy, compared with controls. The depression group also displayed a specific deficit in the recognition of facial expressions of disgust, compared with controls. This may relate to impaired functioning of frontostriatal structures, particularly the basal ganglia.


2012 ◽  
Vol 2012 ◽  
pp. 1-7 ◽  
Author(s):  
Lingdan Wu ◽  
Jie Pu ◽  
John J. B. Allen ◽  
Paul Pauli

Previous studies have consistently reported abnormal recognition of facial expressions in depression. However, it is still not clear whether this abnormality reflects an enhanced or an impaired ability to recognize facial expressions, or which underlying cognitive systems are involved. The present study aimed to examine how individuals with elevated levels of depressive symptoms differ from controls in facial expression recognition and to assess attention and information processing using eye tracking. Forty participants (18 with elevated depressive symptoms) were instructed to label facial expressions depicting one of seven emotions. Results showed that the high-depression group, in comparison with the low-depression group, recognized facial expressions faster and with comparable accuracy. Furthermore, the high-depression group demonstrated a greater leftward attention bias, which has been argued to indicate hyperactivation of the right hemisphere during facial expression recognition.

