Effects of Mask Use and Race on Face Perception, Emotion Recognition, and Social Distancing During the COVID-19 Pandemic

Author(s):  
Evrim Gulbetekin

Abstract
This investigation used three experiments to test the effects of mask use and the other-race effect (ORE) on face perception in three contexts: (a) face recognition, (b) recognition of facial expressions, and (c) social distance. The first experiment, which involved a matching-to-sample paradigm, tested Caucasian subjects with either masked or unmasked faces using Caucasian and Asian samples. Participants performed best in the unmasked face condition and worst when asked to recognize a masked face that they had seen earlier without a mask. Accuracy was also poorer for Asian faces than for Caucasian faces. The second experiment presented Asian or Caucasian faces with different emotional expressions, with and without masks. The results for this task, which involved identifying which emotional expression had been shown on the presented face, indicated that emotion recognition performance decreased for masked faces. The emotional expressions ranged from most to least accurately recognized as follows: happy, neutral, disgusted, and fearful. Emotion recognition performance was also poorer for Asian than for Caucasian stimuli. Experiment 3 used the same participants and stimuli and asked participants to indicate the social distance they would prefer to keep from each pictured person. Participants preferred a wider social distance for unmasked faces than for masked faces. Social distance also varied with the portrayed emotion, from farthest to closest as follows: disgusted, fearful, neutral, and happy. Race was also a factor: participants preferred a wider social distance for Asian than for Caucasian faces. Altogether, our findings indicate that during the COVID-19 pandemic, face perception and preferred social distance were affected by both mask use and the ORE.
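As a rough illustration of the accuracy tabulation such a matching-to-sample design calls for, the sketch below groups trials by mask and race condition and reports proportion correct per cell. The data layout and field names are invented for illustration, not taken from the study:

```python
# Hypothetical tabulation of recognition accuracy by mask and race condition,
# in the spirit of Experiment 1; the trial tuples below are invented.
from collections import defaultdict

trials = [
    # (mask_condition, face_race, correct)
    ("unmasked", "Caucasian", True),
    ("unmasked", "Asian", True),
    ("masked_at_test", "Caucasian", True),
    ("masked_at_test", "Asian", False),
]

hits = defaultdict(int)
counts = defaultdict(int)
for mask, race, correct in trials:
    counts[(mask, race)] += 1
    hits[(mask, race)] += int(correct)

for cond in sorted(counts):
    print(cond, hits[cond] / counts[cond])  # proportion correct per cell
```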


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0260814
Author(s):  
Nazire Duran ◽  
Anthony P. Atkinson

Certain facial features provide useful information for recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations that would ensure foveation of specific features. Foveating the mouth of fearful, surprised and disgusted expressions improved emotion recognition compared to foveating an eye or cheek or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combination of emotions used. There was no consistent evidence suggesting that reflexive first saccades targeted emotion-relevant features; instead, they targeted the closest feature to initial fixation. In a third experiment, angry, fearful, surprised and disgusted expressions were presented for 5 seconds. Duration of task-related fixations in the eyes, brow, nose and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth positively correlated with anger and disgust accuracy both when these expressions were freely viewed (Experiment 2b) and when briefly presented at the mouth (Experiment 2a). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features contributes functionally to emotion recognition, but that these features are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.
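A rough sketch of the fixation-accuracy analysis described above: dwell time in a mouth region of interest, correlated with recognition accuracy across participants. It assumes fixations are available as (x, y, duration) tuples; the AOI bounds and all data are toy values, not stimuli coordinates from the study:

```python
# Illustrative: total dwell time in a mouth AOI per participant, correlated
# with that participant's recognition accuracy. AOI bounds and data are toy values.
import numpy as np
from scipy.stats import pearsonr

def dwell_in_aoi(fixations, x0, y0, x1, y1):
    """Total fixation duration (ms) falling inside a rectangular AOI."""
    return sum(d for x, y, d in fixations if x0 <= x <= x1 and y0 <= y <= y1)

participants = [  # one fixation list of (x, y, duration_ms) per participant
    [(310, 420, 180), (305, 415, 240), (150, 200, 90)],
    [(140, 210, 300), (320, 430, 120)],
    [(300, 400, 260), (315, 440, 220), (160, 205, 100)],
    [(145, 215, 280), (330, 425, 90)],
]
accuracy = np.array([0.82, 0.64, 0.88, 0.59])  # proportion correct, toy values

mouth_dwell = np.array([dwell_in_aoi(f, 280, 390, 360, 460) for f in participants])
r, p = pearsonr(mouth_dwell, accuracy)
print(f"mouth dwell vs accuracy: r = {r:.2f}, p = {p:.3f}")
```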


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Hanna Drimalla ◽  
Irina Baskow ◽  
Behnoush Behnia ◽  
Stefan Roepke ◽  
Isabel Dziobek

Abstract
Background: Imitation of facial expressions plays an important role in social functioning. However, little is known about the quality of facial imitation in individuals with autism and its relationship with defining difficulties in emotion recognition.
Methods: We investigated imitation and recognition of facial expressions in 37 individuals with autism spectrum conditions and 43 neurotypical controls. Using a novel computer-based face analysis, we measured instructed imitation of facial emotional expressions and related it to emotion recognition abilities.
Results: Individuals with autism imitated facial expressions when instructed to do so, but their imitation was both slower and less precise than that of neurotypical individuals. In both groups, more precise imitation scaled positively with participants’ accuracy of emotion recognition.
Limitations: Given the study’s focus on adults with autism without intellectual impairment, it is unclear whether the results generalize to children with autism or individuals with intellectual disability. Further, the new automated facial analysis, despite being less intrusive than electromyography, might be less sensitive.
Conclusions: Group differences in emotion recognition, imitation and their interrelationships highlight potential for treatment of social interaction problems in individuals with autism.
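One plausible way to operationalize "slower and less precise" imitation from per-frame expression intensity traces is lagged cross-correlation. The sketch below is an illustration under that assumption, not the authors' published pipeline; the traces and frame rate are invented:

```python
# A sketch of one way imitation latency and precision could be quantified from
# time series of expression intensity (e.g., smile probability per video frame).
import numpy as np

def imitation_metrics(model_ts, imitator_ts, fps=30):
    """Lag (s) at maximal cross-correlation, plus normalized correlation peak."""
    model = (model_ts - model_ts.mean()) / model_ts.std()
    imit = (imitator_ts - imitator_ts.mean()) / imitator_ts.std()
    xcorr = np.correlate(imit, model, mode="full")
    lag_frames = xcorr.argmax() - (len(model) - 1)  # >0: imitator delayed
    precision = xcorr.max() / len(model)            # ~Pearson r at best lag
    return lag_frames / fps, precision

t = np.linspace(0, 4, 120, endpoint=False)  # 4 s at 30 fps (toy)
shown = np.exp(-(t - 1.5) ** 2)             # expression shown to participant
copied = np.exp(-(t - 2.0) ** 2) * 0.8      # slower, weaker imitation
lag, prec = imitation_metrics(shown, copied)
print(f"latency ~ {lag:.2f} s, precision ~ {prec:.2f}")
```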


2007 ◽  
Vol 18 (1) ◽  
pp. 31-36 ◽  
Author(s):  
Roy P. C. Kessels ◽  
Lotte Gerritsen ◽  
Barbara Montagne ◽  
Nibal Ackl ◽  
Janine Diehl ◽  
...  

Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD). FTLD patients also show impairments in emotion processing; specifically, the perception of negative emotional facial expressions is affected. Generally, however, negative emotional expressions are regarded as more difficult to recognize than positive ones, which may have been a confounding factor in previous studies. In addition, ceiling effects are often present on emotion recognition tasks using full-blown emotional facial expressions. In the present study of FTLD patients, we examined the perception of sadness, anger, fear, happiness, surprise and disgust at different emotional intensities using morphed facial expressions, to take task difficulty into account. Results showed that the FTLD patients were specifically impaired at recognizing anger. The patients also performed worse than the controls on recognition of surprise, but performed at control levels on disgust, happiness, sadness and fear. These findings corroborate and extend previous results showing deficits in emotion perception in FTLD.


Perception ◽  
10.1068/p5067 ◽  
2003 ◽  
Vol 32 (7) ◽  
pp. 827-838 ◽  
Author(s):  
Bradley C Duchaine ◽  
Holly Parker ◽  
Ken Nakayama

In the leading model of face perception, facial identity and facial expressions of emotion are recognized by separate mechanisms. In this report, we provide evidence supporting the independence of these processes by documenting an individual with severely impaired recognition of facial identity yet normal recognition of facial expressions of emotion. NM, a 40-year-old prosopagnosic, showed severely impaired performance on five of six tests of facial identity recognition. In contrast, she performed in the normal range on four different tests of emotion recognition. Because the tests of identity recognition and emotion recognition assessed her abilities in a variety of ways, these results provide solid support for models in which identity recognition and emotion recognition are performed by separate processes.
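Single-case dissociations of this kind are typically established by comparing the case's score against a small control sample, for example with Crawford and Howell's (1998) modified t-test. A minimal sketch with invented numbers, not NM's actual scores or necessarily the authors' analysis:

```python
# Modified t-test for comparing a single case with a small control sample
# (Crawford & Howell, 1998). All numbers below are hypothetical.
from math import sqrt
from scipy.stats import t as t_dist

def case_vs_controls(case_score, ctrl_mean, ctrl_sd, n_controls):
    """One-tailed p for a single case falling below the control mean."""
    t_val = (case_score - ctrl_mean) / (ctrl_sd * sqrt((n_controls + 1) / n_controls))
    return t_val, t_dist.cdf(t_val, df=n_controls - 1)

# Hypothetical identity-recognition test: case far below controls.
t_val, p = case_vs_controls(case_score=30, ctrl_mean=48, ctrl_sd=4, n_controls=20)
print(f"t(19) = {t_val:.2f}, one-tailed p = {p:.4f}")
```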


2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the proposed stimulus. Indeed, affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and those that used facial expressions in profile view employed between-subjects designs or children's faces as stimuli. The present research aims to investigate differences in emotion recognition between faces presented in frontal and profile views using a within-subjects experimental design.
Method: The sample comprised 132 Italian university students (88 female; Mage = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, frontal and profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RTs) were recorded.
Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than the same emotions presented in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.
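The scoring described in the Method can be made concrete with a short sketch: per-emotion accuracy for frontal versus profile presentations, compared within subjects. The paired t-test and simulated data below are illustrative assumptions, not the analysis reported in the paper:

```python
# Minimal sketch of per-emotion accuracy scores (frontal vs. profile) compared
# within subjects. Data are simulated; sample size matches the abstract (N=132).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
emotions = ["fear", "anger", "sadness", "happiness"]
# accuracy[view][emotion] -> array of proportions correct, one per participant
accuracy = {
    "frontal": {e: rng.uniform(0.6, 0.95, size=132) for e in emotions},
    "profile": {e: rng.uniform(0.5, 0.90, size=132) for e in emotions},
}

for e in emotions:
    t, p = ttest_rel(accuracy["frontal"][e], accuracy["profile"][e])
    print(f"{e}: t(131) = {t:.2f}, p = {p:.4f}")
```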


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Dina Tell ◽  
Denise Davidson ◽  
Linda A. Camras

Eye gaze direction and expression intensity effects on emotion recognition were investigated in children with autism disorder and typically developing children. Children with autism disorder and typically developing children identified happy and angry expressions equally well. Children with autism disorder, however, were less accurate in identifying fear expressions across intensities and eye gaze directions. Children with autism disorder also rated expressions with direct eye gaze, and expressions at 50% intensity, as more intense than typically developing children did. A trend was also found for sad expressions, as children with autism disorder were less accurate than typically developing children in recognizing sadness at 100% intensity with direct eye gaze. Although the present research showed that children with autism disorder are sensitive to eye gaze direction, impairments in the recognition of fear, and possibly sadness, exist. Furthermore, children with autism disorder and typically developing children perceive the intensity of emotional expressions differently.


2021 ◽  
Vol 12 ◽  
Author(s):  
Xiaoxiao Li

In the natural environment, facial and bodily expressions influence each other. Previous research has shown that bodily expressions significantly influence the perception of facial expressions. However, little is known about the cognitive processing of facial and bodily emotional expressions and its temporal characteristics. Therefore, this study presented facial and bodily expressions, both separately and together, to examine the electrophysiological mechanisms of emotion recognition using event-related potentials (ERPs). Participants assessed the emotions of facial and bodily expressions that varied by valence (positive/negative) and consistency (matching/non-matching emotions). The results showed that bodily expressions induced a more positive P1 component with a shorter latency, whereas facial expressions triggered a more negative N170 with a longer latency. Of the N2 and P3 components, N2 was more sensitive to inconsistent emotional information and P3 was more sensitive to consistent emotional information. The cognitive processing of facial and bodily expressions had distinctive integrating features, with the interaction occurring at an early stage (N170). These results highlight the importance of facial and bodily expressions in the cognitive processing of emotion recognition.
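For readers unfamiliar with how component amplitude and latency values like these are obtained, a minimal peak-extraction sketch follows. The time windows, sampling rate, and waveform are assumptions chosen for illustration, not values reported in the study:

```python
# Illustrative peak extraction for ERP components (P1, N170) from a single
# averaged waveform. Windows, sampling rate, and waveform are toy assumptions.
import numpy as np

def peak_in_window(erp, times, t_min, t_max, polarity):
    """Peak amplitude and latency (ms) within [t_min, t_max]."""
    mask = (times >= t_min) & (times <= t_max)
    segment = erp[mask]
    idx = segment.argmax() if polarity == "positive" else segment.argmin()
    return segment[idx], times[mask][idx]

times = np.arange(-100, 500, 2.0)                     # ms, 500 Hz sampling
erp = np.sin(times / 40.0) * np.exp(-times / 300.0)   # toy waveform
erp[times < 0] = 0.0                                  # flat pre-stimulus baseline

p1_amp, p1_lat = peak_in_window(erp, times, 80, 130, "positive")
n170_amp, n170_lat = peak_in_window(erp, times, 140, 200, "negative")
print(f"P1: {p1_amp:.2f} at {p1_lat:.0f} ms; N170: {n170_amp:.2f} at {n170_lat:.0f} ms")
```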


2011 ◽  
Vol 12 (1) ◽  
pp. 77-77
Author(s):  
Sharpley Hsieh ◽  
Olivier Piguet ◽  
John R. Hodges

Abstract
Introduction: Frontotemporal dementia (FTD) is a progressive neurodegenerative brain disease characterised clinically by abnormalities in behaviour, cognition and language. Two subgroups, behavioural-variant FTD (bvFTD) and semantic dementia (SD), also show impaired emotion recognition, particularly for negative emotions. This deficit has been demonstrated using visual stimuli such as facial expressions. Whether recognition of emotions conveyed through other modalities — for example, music — is also impaired has not been investigated.
Methods: Patients with bvFTD, SD and Alzheimer's disease (AD), as well as healthy age-matched controls, labeled tunes according to the emotion conveyed (happy, sad, peaceful or scary). In addition, each tune was rated along two orthogonal emotional dimensions: valence (pleasant/unpleasant) and arousal (stimulating/relaxing). Participants also undertook a facial emotion recognition test and other cognitive tests. Integrity of basic music detection (tone, tempo) was also examined.
Results: Patient groups were matched for disease severity. Overall, patients did not differ from controls with regard to basic music processing or the recognition of facial expressions. Ratings of valence and arousal were similar across groups. In contrast, SD patients were selectively impaired at recognising music conveying negative emotions (sad and scary). Patients with bvFTD did not differ from controls.
Conclusion: Recognition of emotions in music appears to be selectively affected in some FTD subgroups more than others, a disturbance of emotion detection that appears to be modality specific. This finding suggests a dissociation in the neural networks necessary for processing emotions depending on modality.


2002 ◽  
Vol 14 (2) ◽  
pp. 210-227 ◽  
Author(s):  
S. Campanella ◽  
P. Quinet ◽  
R. Bruyer ◽  
M. Crommelinck ◽  
J.-M. Guerit

Behavioral studies have shown that two different morphed faces perceived as reflecting the same emotional expression are harder to discriminate than two faces perceived as expressing two different emotions. This advantage of between-category differences over within-category ones is classically referred to as the categorical perception effect. The temporal course of this effect for fearful and happy facial expressions was explored through event-related potentials (ERPs). Three kinds of pairs were presented in a delayed same–different matching task: (1) two different morphed faces perceived as the same emotional expression (within-category differences), (2) two morphed faces reflecting two different emotions (between-category differences), and (3) two identical morphed faces (same faces, included for methodological purposes). Following the onset of the second face in the pair, the amplitude of the bilateral occipito-temporal negativities (N170) and of the vertex positive potential (P150 or VPP) was reduced for within and same pairs relative to between pairs, suggesting a repetition priming effect. We also observed a modulation of the P3b wave, as the amplitude of the responses for between pairs was higher than for within and same pairs. These results indicate that the categorical perception of human facial emotional expressions has a perceptual origin in the bilateral occipito-temporal regions, whereas typical prior studies found emotion-modulated ERP components considerably later.
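The logic of the design can be made concrete with a short sketch that classifies morph pairs as within-category or between-category relative to an assumed category boundary; the morph steps and boundary below are invented, not the study's stimulus values:

```python
# Illustrative pair construction for a categorical perception design: pairs
# have a constant physical difference but either cross an assumed category
# boundary (between) or not (within); identical pairs serve as controls.
morph_levels = [0, 20, 40, 60, 80, 100]  # % happiness along a fear-happy morph
boundary = 50                            # assumed category boundary

def pair_type(a, b):
    """Classify a pair by whether it straddles the category boundary."""
    if a == b:
        return "same"
    crosses = (a < boundary) != (b < boundary)
    return "between" if crosses else "within"

pairs = [(a, a) for a in (20, 60)] + [(a, a + 20) for a in morph_levels[:-1]]
for a, b in pairs:
    print(f"{a}% vs {b}%: {pair_type(a, b)}")
```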

