Social Anxiety and Interpretation Bias

2016 ◽  
Vol 119 (2) ◽  
pp. 539-556 ◽  
Author(s):  
Xiaoling Wang ◽  
Mingyi Qian ◽  
Hongyu Yu ◽  
Yang Sun ◽  
Songwei Li ◽  
...  

This study examined how positive-scale assessment of ambiguous social stimuli affects interpretation bias in social anxiety. Participants with high and low social anxiety (N = 60) performed a facial expression discrimination task to assess interpretation bias. Participants were then randomly assigned to assess the emotion of briefly presented faces on either a negative or a positive scale, and subsequently repeated the facial expression discrimination task. Participants with high social anxiety made more negative interpretations of ambiguous facial expressions than those with low social anxiety. However, those in the positive-scale assessment condition subsequently showed fewer negative interpretations of ambiguous facial expressions. These results suggest that interpretation bias in social anxiety may be mediated by positive priming rather than reflecting an outright negative bias.
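
As a hedged illustration of how such an interpretation-bias score might be computed (the column names and scoring rule below are our assumptions, not the study's materials), one can take the proportion of ambiguous-face trials judged negative, before and after the scale-assessment phase:

```python
# Hypothetical scoring sketch for the facial expression discrimination
# task; 'is_ambiguous' and 'judged_negative' are illustrative column
# names, not the study's materials.
import pandas as pd

def interpretation_bias(responses: pd.DataFrame) -> float:
    """Proportion of ambiguous-face trials judged negative (0..1)."""
    ambiguous = responses[responses["is_ambiguous"]]
    return ambiguous["judged_negative"].mean()

# Pre- vs. post-assessment change for one (simulated) participant:
pre = pd.DataFrame({"is_ambiguous": [True, True, True, False],
                    "judged_negative": [True, True, False, False]})
post = pd.DataFrame({"is_ambiguous": [True, True, True, False],
                     "judged_negative": [True, False, False, False]})
print(interpretation_bias(pre) - interpretation_bias(post))  # > 0: bias reduced
```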

2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Simon Faghel-Soubeyrand ◽  
Tania Lecomte ◽  
M. Archibaldo Bravo ◽  
Martin Lepage ◽  
Stéphane Potvin ◽  
...  

Deficits in social functioning are especially severe amongst individuals with schizophrenia who have the prevalent comorbidity of social anxiety disorder (SZ&SAD). Yet the mechanisms underlying the recognition of facial expressions of emotion, a hallmark of social cognition, are practically unexplored in SZ&SAD. Here, we aim to reveal the visual representations that SZ&SAD participants (n = 16) and controls (n = 14) rely on for facial expression recognition. We ran a total of 30,000 trials of a facial expression categorization task with Bubbles, a data-driven technique. Results showed that SZ&SAD participants' ability to categorize facial expressions was impaired compared to controls. More severe negative symptoms (flat affect, apathy, reduced social drive) were associated with more impaired emotion recognition ability and with stronger biases toward attributing neutral affect to faces. Higher social anxiety symptoms, on the other hand, were found to enhance reaction speed to neutral and angry faces. Most importantly, Bubbles showed that these abnormalities could be explained by inefficient visual representations of emotions: compared to controls, SZ&SAD subjects relied less on fine facial cues (high spatial frequencies) and more on coarse facial cues (low spatial frequencies). SZ&SAD participants also never relied on the eye regions (only on the mouth) to categorize facial expressions. We discuss how possible interactions between early stages of the visual system (low sensitivity to coarse information) and late stages (overreliance on these coarse features) might disrupt SZ&SAD patients' recognition of facial expressions. Our findings offer perceptual mechanisms through which comorbid SZ&SAD impairs crucial aspects of social cognition, as well as functional psychopathology.
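
For readers unfamiliar with Bubbles (Gosselin & Schyns, 2001), here is a minimal Python sketch of its sampling step, not the authors' pipeline: a grayscale face is split into spatial-frequency bands, and each band is revealed only through randomly placed Gaussian apertures whose size scales with the band's coarseness. All parameter values are illustrative.

```python
# Minimal Bubbles-style sampling sketch (assumptions: band decomposition
# via repeated Gaussian blurring; 10 apertures per band; aperture width
# doubling per band). Not the authors' exact implementation.
import numpy as np
from scipy import ndimage

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Sum of n_bubbles Gaussian apertures of width sigma, scaled to [0, 1]."""
    mask = np.zeros(shape)
    ys = rng.integers(0, shape[0], n_bubbles)
    xs = rng.integers(0, shape[1], n_bubbles)
    mask[ys, xs] = 1.0
    mask = ndimage.gaussian_filter(mask, sigma)
    return np.clip(mask / mask.max(), 0.0, 1.0)

def bubbles_stimulus(face, rng, n_bands=5):
    """Recompose a face from SF bands, each sampled through its own mask."""
    bands, previous = [], face.astype(float)
    for _ in range(n_bands - 1):
        blurred = ndimage.gaussian_filter(previous, 2.0)
        bands.append(previous - blurred)   # band-pass residual (finer detail)
        previous = blurred
    bands.append(previous)                 # coarsest band
    out = np.zeros_like(face, dtype=float)
    for i, band in enumerate(bands):
        sigma = 3.0 * (2 ** i)             # coarser band -> larger apertures
        out += band * bubbles_mask(face.shape, 10, sigma, rng)
    return out

rng = np.random.default_rng(0)
face = rng.random((128, 128))              # stand-in for an aligned face image
stim = bubbles_stimulus(face, rng)
```

Correlating which aperture locations (per band) lead to correct categorizations is what yields the diagnostic facial cues reported above.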


Perception ◽  
2021 ◽  
pp. 030100662110002
Author(s):  
Jade Kinchella ◽  
Kun Guo

We often show invariant or comparable recognition performance for prototypical facial expressions, such as happiness and anger, under different viewing settings. However, it is unclear to what extent the categorisation of ambiguous expressions, and the associated interpretation bias, is invariant in degraded viewing conditions. In this exploratory eye-tracking study, we systematically manipulated both facial expression ambiguity (by morphing happy and angry expressions in different proportions) and face image clarity/quality (by manipulating image resolution) to measure participants' expression categorisation performance, perceived expression intensity, and associated face-viewing gaze distribution. Our analysis revealed that increasing expression ambiguity and decreasing image quality induced interpretation biases in opposite directions (a negativity vs. a positivity bias, i.e., increased anger vs. increased happiness categorisation), degraded expression intensity ratings in the same direction, and influenced face-viewing gaze allocation in qualitatively different ways (decreased gaze at the eyes but increased gaze at the mouth vs. a stronger central fixation bias). These novel findings suggest that, in comparison with prototypical facial expressions, our visual system has less perceptual tolerance when processing ambiguous expressions, which are subject to viewing-condition-dependent interpretation bias.
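
A minimal sketch of the two stimulus manipulations, under stated assumptions: ambiguity via a simple linear pixel blend of pre-aligned grayscale faces (the study would have used proper face-morphing software) and image quality via block-averaged resolution reduction. All names and values are illustrative.

```python
# Sketch of the two manipulations: morph proportion controls ambiguity,
# block-averaging controls effective resolution. Assumes aligned
# grayscale images of equal size; a pixel blend approximates morphing.
import numpy as np

def morph(happy: np.ndarray, angry: np.ndarray, p_angry: float) -> np.ndarray:
    """Blend two aligned faces; p_angry = 0.5 is maximally ambiguous."""
    return (1.0 - p_angry) * happy + p_angry * angry

def degrade(img: np.ndarray, factor: int) -> np.ndarray:
    """Reduce effective resolution by block-averaging, then re-expand."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    small = img[:h, :w].reshape(h // factor, factor,
                                w // factor, factor).mean(axis=(1, 3))
    return np.kron(small, np.ones((factor, factor)))

happy = np.random.rand(128, 128)           # stand-ins for real face photos
angry = np.random.rand(128, 128)
ambiguous_low_quality = degrade(morph(happy, angry, 0.5), factor=8)
```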


2021 ◽  
Author(s):  
Anna Nakamura ◽  
Yukihito Yomogida ◽  
Miho Ota ◽  
Junko Matsuo ◽  
Ikki Ishida ◽  
...  

Background: Negative bias, a mood-congruent bias in emotion processing, is an important aspect of major depressive disorder (MDD), and such a bias in facial expression recognition has a significant effect on patients' social lives. Neuroscience research shows abnormal activity in emotion-processing systems in MDD during facial expression tasks. However, the neural basis of negative bias in facial expression processing has not been explored directly. Methods: Sixteen patients with MDD and twenty-three healthy controls (HC) underwent fMRI scanning during an explicit facial emotion task with faces ranging from happy to sad. We identified brain areas in which the MDD and HC groups showed different correlations between behavioral negative bias scores and functional activity. Results: Behavioral data confirmed a higher negative bias in the MDD group. Regarding neural activity, higher activity for happy faces in the posterior cerebellum was related to a higher negative bias in the MDD group but to a lower negative bias in the HC group. Limitations: The sample size was small, and possible effects of medication were not controlled for. Conclusions: We confirmed a negative bias in the recognition of facial expressions in patients with MDD. The fMRI data suggest that the cerebellum acts as a moderator of facial emotion processing, biasing the recognition of facial expressions toward the viewer's own mood.
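
As a schematic illustration (not the authors' analysis pipeline), the group-dependent brain-behavior relationship amounts to correlating a per-subject negative-bias score with mean ROI activity separately in each group; all variable names and data below are placeholders.

```python
# Illustrative group-wise brain-behavior correlation: Pearson r between a
# behavioral negative-bias score and mean ROI activity (e.g., a cerebellar
# beta estimate for happy faces), computed per group. Data are simulated.
import numpy as np
from scipy import stats

def group_correlations(bias, activity, is_mdd):
    """Pearson r between bias and ROI activity, separately per group."""
    bias, activity, is_mdd = map(np.asarray, (bias, activity, is_mdd))
    r_mdd = stats.pearsonr(bias[is_mdd], activity[is_mdd])
    r_hc = stats.pearsonr(bias[~is_mdd], activity[~is_mdd])
    return r_mdd, r_hc

rng = np.random.default_rng(1)
bias = rng.normal(size=39)        # negative-bias score per subject
activity = rng.normal(size=39)    # mean ROI activity per subject
is_mdd = np.array([True] * 16 + [False] * 23)
print(group_correlations(bias, activity, is_mdd))
```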


2020 ◽  
Vol 63 (10) ◽  
pp. 3349-3363
Author(s):  
Naomi H. Rodgers ◽  
Jennifer Y. F. Lau ◽  
Patricia M. Zebrowski

Purpose The purpose of this study was to examine group and individual differences in attentional bias toward and away from socially threatening facial stimuli among adolescents who stutter and age- and sex-matched typically fluent controls. Method Participants included 86 adolescents (43 stuttering, 43 controls) ranging in age from 13 to 19 years. They completed a computerized dot-probe task, which was modified to allow for separate measurement of attentional engagement with and attentional disengagement from facial stimuli (angry, fearful, and neutral expressions). Response time on this task was the dependent variable. Participants also completed the Social Anxiety Scale for Adolescents (SAS-A) and provided a speech sample for analysis of stuttering-like behaviors. Results The adolescents who stutter were more likely to engage quickly with threatening faces than to maintain attention on neutral faces, and they were also more likely to disengage quickly from threatening faces than to maintain attention on those faces. The typically fluent controls showed no attentional preference for threatening over neutral faces in either the engagement or the disengagement condition. The two groups demonstrated equivalent levels of social anxiety, both on average very close to the clinical cutoff score for high social anxiety, although degree of social anxiety did not influence performance in either condition. Stuttering severity did not influence performance among the adolescents who stutter. Conclusion This study provides preliminary evidence for a vigilance–avoidance pattern of attentional allocation to threatening social stimuli among adolescents who stutter.
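
A hedged sketch of how engagement and disengagement indices can be scored from such a modified dot-probe task. Exact trial definitions vary across labs; the sign convention below (positive = faster attention shift) is our assumption, not necessarily the study's.

```python
# Engagement: baseline RT minus RT when the probe replaces the threat face
# (positive = quick shifts toward threat). Disengagement: baseline RT minus
# RT when the probe appears opposite the threat face (positive = quick
# shifts away from threat). Trial definitions are assumptions.
import numpy as np

def bias_indices(rt_baseline, rt_probe_at_threat, rt_probe_at_neutral):
    """Each argument: array of correct-trial response times (ms)."""
    engagement = np.mean(rt_baseline) - np.mean(rt_probe_at_threat)
    disengagement = np.mean(rt_baseline) - np.mean(rt_probe_at_neutral)
    return engagement, disengagement

eng, dis = bias_indices(np.array([520.0, 540.0]),
                        np.array([495.0, 505.0]),
                        np.array([500.0, 510.0]))
print(eng, dis)  # both positive: quick engagement and quick disengagement
```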


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others' facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants' face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid aversive stimulation by either reciprocating (congruent condition) or responding opposite (incongruent condition) to the expression of the target face. Our results validated the method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our work introduces a new method to study learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
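
The authors fit drift diffusion and reinforcement learning models; as a simplified stand-in, the following Q-learning sketch (our illustration, with assumed states, actions, and a congruent reward rule) shows how facial responses could be optimized by trial and error to avoid aversive stimulation.

```python
# Simplified Q-learner: states are the target's expression, actions are the
# participant's facial response, reward = 1 when stimulation is avoided.
# The congruent rule, learning rate, and temperature are assumptions.
import numpy as np

rng = np.random.default_rng(0)
states, actions = ["happy", "angry"], ["smile", "frown"]
Q = {(s, a): 0.0 for s in states for a in actions}
correct = {"happy": "smile", "angry": "frown"}   # assumed congruent rule
alpha, beta = 0.2, 3.0                           # learning rate, inverse temp.

for trial in range(200):
    s = rng.choice(states)
    qs = np.array([Q[(s, a)] for a in actions])
    p = np.exp(beta * qs) / np.exp(beta * qs).sum()   # softmax choice
    a = rng.choice(actions, p=p)
    r = 1.0 if a == correct[s] else 0.0               # 1 = shock avoided
    Q[(s, a)] += alpha * (r - Q[(s, a)])              # delta-rule update

print(Q)  # congruent responses end up with the highest values
```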


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic: the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free: identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.
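
A small illustrative sketch of how a BCE is typically scored: Task 1 response times are split by whether the still-pending Task 2 face calls for a corresponding or non-corresponding response; a reliably positive difference implies the face was processed before the central bottleneck. The data below are made up.

```python
# BCE = mean Task 1 RT on non-corresponding trials minus mean Task 1 RT on
# corresponding trials; positive values suggest automatic (pre-bottleneck)
# processing of the Task 2 face. Values are simulated for illustration.
import numpy as np

def bce(rt1_corresponding: np.ndarray, rt1_noncorresponding: np.ndarray) -> float:
    """Backward correspondence effect in ms."""
    return float(np.mean(rt1_noncorresponding) - np.mean(rt1_corresponding))

print(bce(np.array([610.0, 595.0, 630.0]),
          np.array([655.0, 640.0, 660.0])))  # ~40 ms, a positive BCE
```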


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios cued by the words "happy," "sad," and "city." Future thinking was video recorded and analysed with facial analysis software to classify participants' facial expressions as neutral or emotional (happy, sad, angry, surprised, scared, or disgusted). Analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word "happy" than by "sad" or "city." In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word "sad" than by "happy" or "city." Higher levels of neutral facial expressions were observed during future thinking cued by the word "city" than by "happy" or "sad." In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Taken together, emotional future thinking, at least for future scenarios cued by "happy" and "sad," seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.
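
A hypothetical post-processing sketch (the study used dedicated facial-analysis software; the frame labels below are stand-ins for its per-frame output) computing the share of each expression over a recorded future-thinking episode:

```python
# Given one expression label per video frame, compute the fraction of the
# recording spent in each expression. Labels follow the categories named
# in the abstract; the input list is a placeholder for real software output.
from collections import Counter

LABELS = ["happy", "sad", "angry", "surprised", "scared", "disgusted", "neutral"]

def expression_profile(frame_labels):
    """Fraction of frames assigned to each expression label."""
    counts = Counter(frame_labels)
    total = max(len(frame_labels), 1)
    return {label: counts.get(label, 0) / total for label in LABELS}

print(expression_profile(["happy", "happy", "neutral", "sad"]))
```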


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on subtle facial movements. Traditional methods often overfit or discard information because of insufficient data and manual feature selection. Our proposed network, the Multi-features Cooperative Deep Convolutional Network (MC-DCN), instead attends both to the overall features of the face and to the movement trends of its key parts. Video data are processed first: the ensemble of regression trees (ERT) method extracts the overall contour of the face, and an attention model then picks out the facial regions most susceptible to expression changes. The combined output of these two steps is an image we call a local feature map. The video data are then fed to MC-DCN, which contains parallel sub-networks: while the overall spatiotemporal characteristics of facial expressions are captured from the image sequence, the key-part selection better tracks the expression changes produced by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. Experimental results show that MC-DCN achieves recognition rates of 95%, 78.6%, and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
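
A schematic PyTorch sketch of the two-stream idea behind MC-DCN, not the authors' architecture: one sub-network sees the whole face, another the local feature map, and their features are fused for classification. Layer sizes and input shapes are illustrative assumptions.

```python
# Two-stream fusion sketch: global (whole-face) and local (key-part map)
# streams share a small CNN design; their pooled features are concatenated
# and classified. This illustrates the fusion idea only, not MC-DCN itself.
import torch
import torch.nn as nn

class TwoStreamFER(nn.Module):
    def __init__(self, n_classes: int = 7):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.global_stream = stream()   # whole-face input
        self.local_stream = stream()    # attention-selected local feature map
        self.head = nn.Linear(64, n_classes)

    def forward(self, whole_face, local_map):
        fused = torch.cat([self.global_stream(whole_face),
                           self.local_stream(local_map)], dim=1)
        return self.head(fused)

model = TwoStreamFER()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))  # (2, 7)
```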

