How does the presence of a surgical face mask impair the perceived intensity of facial emotions?

PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262344
Author(s):  
Maria Tsantani ◽  
Vita Podgajecka ◽  
Katie L. H. Gray ◽  
Richard Cook

The use of surgical-type face masks has become increasingly common during the COVID-19 pandemic. Recent findings suggest that it is harder to categorise the facial expressions of masked faces than of unmasked faces. To date, studies of the effects of mask-wearing on emotion recognition have used categorisation paradigms: authors have presented facial expression stimuli and examined participants’ ability to attach the correct label (e.g., happiness, disgust). While the ability to categorise particular expressions is important, this approach overlooks the fact that expression intensity is also informative during social interaction. For example, when predicting an interactant’s future behaviour, it is useful to know whether they are slightly fearful or terrified, contented or very happy, slightly annoyed or angry. Moreover, because categorisation paradigms force observers to pick a single label to describe their percept, any additional dimensionality within observers’ interpretation is lost. In the present study, we adopted a complementary emotion-intensity rating paradigm to study the effects of mask-wearing on expression interpretation. In an online experiment with 120 participants (82 female), we investigated how the presence of face masks affects the perceived emotional profile of prototypical expressions of happiness, sadness, anger, fear, disgust, and surprise. For each of these facial expressions, we measured the perceived intensity of all six emotions. We found that the perceived intensity of intended emotions (i.e., the emotion that the actor intended to convey) was reduced by the presence of a mask for all expressions except for anger. Additionally, when viewing all expressions except surprise, masks increased the perceived intensity of non-intended emotions (i.e., emotions that the actor did not intend to convey). Intensity ratings were unaffected by presentation duration (500 ms vs. 3000 ms), or attitudes towards mask-wearing. These findings shed light on the ambiguity that arises when interpreting the facial expressions of masked faces.
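The intended/non-intended distinction in this paradigm can be made concrete with a small sketch: with one intensity rating per (mask condition, displayed expression, rated emotion) cell, the diagonal of each condition's matrix holds the intended-emotion ratings and the off-diagonal cells the non-intended ones. The array shapes, rating scale, and variable names below are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: separating intended- from non-intended-emotion
# intensity ratings in a mask vs. no-mask design. Data are simulated.
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]

# ratings[mask_condition, displayed_expression, rated_emotion]
# = mean perceived intensity (assumed 1-9 scale); 0 = unmasked, 1 = masked
rng = np.random.default_rng(0)
ratings = rng.uniform(1, 9, size=(2, 6, 6))

for cond, label in enumerate(["unmasked", "masked"]):
    intended = np.diagonal(ratings[cond]).mean()       # expression == rated emotion
    non_intended = ratings[cond][~np.eye(6, dtype=bool)].mean()
    print(f"{label}: intended={intended:.2f}, non-intended={non_intended:.2f}")
```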

2019 ◽  
Vol 9 (16) ◽  
pp. 3379
Author(s):  
Hyun-Jun Hyung ◽  
Han Ul Yoon ◽  
Dongwoon Choi ◽  
Duk-Yeon Lee ◽  
Dong-Wook Lee

Because android faces differ in their internal structure, degrees of freedom, and the positions and ranges of their skin control points, it is very difficult to generate facial expressions by applying existing facial expression generation methods. In addition, facial expressions differ among robots because they are designed subjectively. To address these problems, we developed a system that can automatically generate robot facial expressions by combining an android, a recognizer capable of classifying facial expressions, and a genetic algorithm. We developed two types of android face robots (an older man and a young woman) that can simulate human skin movements. We selected 16 control positions to generate the facial expressions of these robots. The expressions were generated by combining the displacements of 16 motors. A chromosome comprising 16 genes (motor displacements) was generated by applying a real-coded genetic algorithm; subsequently, it was used to generate robot facial expressions. To determine the fitness of the generated facial expressions, expression intensity was evaluated through a facial expression recognizer. The proposed system was used to generate six facial expressions (angry, disgust, fear, happy, sad, surprised); the results confirmed that they were more appropriate than manually generated facial expressions.
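The optimization loop described above can be sketched as follows: a population of 16-gene chromosomes (normalized motor displacements) evolves under truncation selection, blend crossover, and Gaussian mutation, with fitness supplied by the expression recognizer. This is a minimal illustrative sketch, not the authors' implementation; `recognizer_intensity` is a placeholder for the real android-plus-recognizer loop, and all parameters are assumptions.

```python
# Minimal real-coded GA sketch: evolve 16 motor displacements so that a
# recognizer rates the target expression as intense. Fitness is a placeholder.
import numpy as np

rng = np.random.default_rng(42)
N_MOTORS, POP, GENS = 16, 30, 50

def recognizer_intensity(chromosome):
    # Placeholder: in the real system the android executes the displacements
    # and a facial-expression recognizer scores the target emotion's intensity.
    return -np.sum((chromosome - 0.7) ** 2)

pop = rng.uniform(0.0, 1.0, size=(POP, N_MOTORS))  # normalized displacements
for _ in range(GENS):
    fitness = np.array([recognizer_intensity(c) for c in pop])
    parents = pop[np.argsort(fitness)[-POP // 2:]]       # truncation selection
    i, j = rng.integers(len(parents), size=(2, POP))     # random parent pairs
    alpha = rng.uniform(size=(POP, 1))
    children = alpha * parents[i] + (1 - alpha) * parents[j]  # blend crossover
    children += rng.normal(0, 0.05, children.shape)           # Gaussian mutation
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmax([recognizer_intensity(c) for c in pop])]
print("best motor displacements:", np.round(best, 2))
```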


2014 ◽  
Vol 4 (1) ◽  
pp. 95-105 ◽  
Author(s):  
J. Zraqou ◽  
W. Alkhadour ◽  
A. Al-Nu'aimi

Enabling computer systems to track and recognize facial expressions, and then infer emotions, from real-time video is a challenging research topic. In this work, a real-time approach to emotion recognition through facial expressions in live video is introduced. Several automatic methods for face localization, facial feature tracking, and facial expression recognition are employed. Robust tracking is achieved by using a face mask to resolve mismatches that could be generated during the tracking process. Action units (AUs) are then assembled to recognize the facial expression in each frame. The main objective of this work is to provide the ability to predict human behaviors, such as criminal intent, anger, or nervousness.
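As a rough illustration of the AU-based recognition step, each candidate expression can be scored by how many of its characteristic action units were detected in a frame. The rule table below uses standard FACS-style AU combinations, but it is a simplified, hypothetical stand-in for the authors' actual AU models.

```python
# Illustrative sketch: map detected action units (AUs) in one frame to a
# basic-expression label via a simplified rule table (not the paper's rules).
DETECTED_AUS = {1, 4, 15}  # e.g., inner brow raiser, brow lowerer, lip corner depressor

EXPRESSION_RULES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
    "anger":     {4, 5, 7, 23},
}

def classify(aus):
    # Score each expression by the fraction of its rule AUs that were detected.
    scores = {e: len(aus & rule) / len(rule) for e, rule in EXPRESSION_RULES.items()}
    return max(scores, key=scores.get), scores

label, scores = classify(DETECTED_AUS)
print(label, scores)
```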


2021 ◽  
Author(s):  
Zoé Cayol ◽  
Tatjana Nazir

The Facial Expression Intensity Test (FExIT) measures the level of perceived intensity of emotional cues in a given facial expression. The test consists of a series of faces taken from the NimStim set (Tottenham et al., 2009) whose expressions vary from a neutral expression to one of the six basic emotions, with ten levels of morphing. The FExIT is validated by means of an emotion-related ERP component (i.e., the early posterior negativity, EPN), which shows a systematic modulation of its amplitude with the level of expression intensity. The participant’s task in this test is to identify the expressed emotion among eight options (i.e., the six basic emotions, a “neutral” option, and an “I don’t know” option). The task is not timed. The score of the FExIT is either the proportion of correctly identified emotions, or the proportion of trials on which an emotion was attributed to the facial stimulus (i.e., the attribution of any emotion other than “neutral” or “I don’t know”). Given that the facial expression intensity varies continuously from low to high, the FExIT allows the determination and comparison of threshold levels for correct responses. The freely accessible set of 700 facial stimuli for the test is divided into two equivalent face lists, which further allows for pretest/posttest experimental designs. The test takes approximately 25 min to complete and is simple to administer. The FExIT is thus a useful instrument for testing different experimental settings and populations.
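The two scores can be computed from trial-level responses as sketched below, per morph level. The trial format, field names, and the 50% threshold rule are illustrative assumptions, not the published scoring procedure.

```python
# Sketch of the two FExIT scores: p(correct identification) and
# p(emotion attributed, i.e. anything but "neutral"/"I don't know").
import numpy as np

# Each trial: (morph_level 1-10, intended_emotion, response); simulated data.
trials = [
    (1, "fear", "neutral"), (3, "fear", "I don't know"), (5, "fear", "fear"),
    (7, "fear", "fear"), (9, "fear", "fear"), (10, "fear", "fear"),
]

for lvl in sorted({t[0] for t in trials}):
    at_lvl = [t for t in trials if t[0] == lvl]
    correct = np.mean([r == e for _, e, r in at_lvl])
    attributed = np.mean([r not in ("neutral", "I don't know") for _, e, r in at_lvl])
    print(f"level {lvl}: p(correct)={correct:.2f}, p(attributed)={attributed:.2f}")

# A threshold could then be defined as, e.g., the lowest morph level at which
# p(correct) exceeds 0.5 (an assumed criterion).
```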


2020 ◽  
Vol 8 (2) ◽  
pp. 68-84
Author(s):  
Naoki Imamura ◽  
Hiroki Nomiya ◽  
Teruhisa Hochin

Facial expression intensity has been proposed to quantify the degree of a facial expression in order to retrieve impressive scenes from lifelog videos. The intensity is calculated from the correlation of facial features with each facial expression. However, this correlation is not determined objectively; it should be determined statistically, based on the contribution scores of the facial features necessary for expression recognition. The proposed method therefore recognizes facial expressions using a neural network and calculates the contribution score of each input toward the output. The authors first improve some facial features. They then verify the validity of the scores by comparing how recognition accuracy changes as useful and useless features are removed, and process the scores statistically. As a result, they extract useful facial features from the neural network.
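One common way to obtain such input contribution scores is via input gradients of a trained network, as in the minimal PyTorch sketch below. The architecture, feature count, and gradient-based scoring are assumptions for illustration; the authors' exact scoring method may differ.

```python
# Sketch: score each input feature's contribution to a target class output
# by the magnitude of the gradient of that output w.r.t. the input.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_FEATURES, N_CLASSES = 24, 6  # e.g., 24 facial features, 6 expressions

model = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(), nn.Linear(32, N_CLASSES))

x = torch.randn(1, N_FEATURES, requires_grad=True)  # one feature vector
target = 0                                          # expression class of interest
model(x)[0, target].backward()                      # d(output_target)/d(input)

contribution = x.grad.abs().squeeze()               # per-feature contribution score
ranked = contribution.argsort(descending=True)
print("most influential features:", ranked[:5].tolist())
```

Features whose removal barely changes accuracy should receive low scores under this scheme, which is the consistency check the abstract describes.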


Perception ◽  
2021 ◽  
pp. 030100662110002
Author(s):  
Jade Kinchella ◽  
Kun Guo

We often show an invariant or comparable recognition performance for perceiving prototypical facial expressions, such as happiness and anger, under different viewing settings. However, it is unclear to what extent the categorisation of ambiguous expressions and the associated interpretation bias are invariant in degraded viewing conditions. In this exploratory eye-tracking study, we systematically manipulated both facial expression ambiguity (by morphing happy and angry expressions in different proportions) and face image clarity/quality (by manipulating image resolution) to measure participants’ expression categorisation performance, perceived expression intensity, and associated face-viewing gaze distribution. Our analysis revealed that increasing facial expression ambiguity and decreasing face image quality induced opposite expression interpretation biases (a negativity vs. a positivity bias, i.e., increased anger vs. increased happiness categorisation), the same deteriorating impact on expression intensity ratings, and qualitatively different influences on face-viewing gaze allocation (decreased gaze at the eyes but increased gaze at the mouth vs. a stronger central fixation bias). These novel findings suggest that, in comparison with prototypical facial expressions, our visual system has less perceptual tolerance when processing ambiguous expressions, which are subject to viewing-condition-dependent interpretation biases.
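The two stimulus manipulations can be approximated as sketched below: a pixel-wise cross-fade between a happy and an angry face (a simple blend is a crude stand-in for proper landmark-based morphing), and a downsample-then-upsample step to degrade effective resolution. File names and the downsample factor are placeholders.

```python
# Illustrative stimulus-manipulation sketch: expression-ambiguity morphing
# plus resolution degradation. Not the authors' stimulus-generation code.
from PIL import Image

happy = Image.open("happy.png").convert("L")
angry = Image.open("angry.png").convert("L").resize(happy.size)

def make_stimulus(anger_proportion: float, downsample: int = 1) -> Image.Image:
    # 0.0 = fully happy, 1.0 = fully angry; intermediate values are ambiguous
    morph = Image.blend(happy, angry, alpha=anger_proportion)
    if downsample > 1:  # reduce then restore size to lower effective resolution
        small = morph.resize((morph.width // downsample, morph.height // downsample))
        morph = small.resize(morph.size, resample=Image.NEAREST)
    return morph

stimulus = make_stimulus(anger_proportion=0.3, downsample=8)
```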


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Shuntaro Okazaki ◽  
Haruna Yamanami ◽  
Fumika Nakagawa ◽  
Nozomi Takuwa ◽  
Keith James Kawabata Duncan

Abstract The use of face masks has become ubiquitous. Although mask wearing is a convenient way to reduce the spread of disease, it is important to know how the mask affects our communication via facial expression. For example, when we are wearing the mask and meet a friend, are our facial expressions different compared to when we are not? We investigated the effect of face mask wearing on facial expression, including the area around the eyes. We measured surface electromyography from zygomaticus major, orbicularis oculi, and depressor anguli oris muscles, when people smiled and talked with or without a mask. Only the actions of the orbicularis oculi were facilitated by wearing the mask. We thus concluded that mask wearing may increase the recruitment of the eyes during smiling. In other words, we can express joy and happiness even when wearing a face mask.
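A minimal sketch of the kind of per-muscle comparison reported: root-mean-square (RMS) amplitude of a surface-EMG signal, with versus without a mask. The signals below are simulated noise, and the sampling rate and preprocessing are assumptions rather than the study's recording pipeline.

```python
# Sketch: compare RMS EMG amplitude per muscle across mask conditions.
import numpy as np

rng = np.random.default_rng(1)
fs = 1000  # Hz, assumed sampling rate

def rms(signal):
    return np.sqrt(np.mean(signal ** 2))

for muscle in ["zygomaticus major", "orbicularis oculi", "depressor anguli oris"]:
    no_mask = rng.normal(0, 1.0, fs * 3)   # 3 s of simulated EMG, no mask
    mask = rng.normal(0, 1.2, fs * 3)      # hypothetically larger activation
    print(f"{muscle}: RMS no-mask={rms(no_mask):.2f}, mask={rms(mask):.2f}")
```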


2021 ◽  
Author(s):  
Shuntaro Okazaki ◽  
Haruna Yamanami ◽  
Fumika Nakagawa ◽  
Nozomi Takuwa ◽  
Keith James Kawabata Duncan

Abstract The use of face masks has become ubiquitous. Although mask wearing is a convenient way to reduce the spread of disease, it is important to know how the mask affects our communication via facial expression. For example, when we are wearing the mask and meet a friend, are our facial expressions different compared to when we are not? We investigated the effect of face mask wearing on facial expression, including the area around the eyes. We measured surface electromyography from zygomaticus major, orbicularis oculi, and depressor anguli oris, when people smiled and talked with or without the mask. We found that only the orbicularis oculi was facilitated by wearing the mask. We thus concluded that mask wearing increases the use of eye smiling as a form of communication. In other words, we can express joy and happiness even when wearing the mask, by smiling with the eyes.


2016 ◽  
Vol 37 (1) ◽  
pp. 16-23 ◽  
Author(s):  
Chit Yuen Yi ◽  
Matthew W. E. Murry ◽  
Amy L. Gentzler

Abstract. Past research suggests that transient mood influences the perception of facial expressions of emotion, but relatively little is known about how trait-level emotionality (i.e., temperament) may influence emotion perception or interact with mood in this process. Consequently, we extended earlier work by examining how temperamental dimensions of negative emotionality and extraversion were associated with the perception accuracy and perceived intensity of three basic emotions and how the trait-level temperamental effect interacted with state-level self-reported mood in a sample of 88 adults (27 men, 18–51 years of age). The results indicated that higher levels of negative mood were associated with higher perception accuracy of angry and sad facial expressions, and higher levels of perceived intensity of anger. For perceived intensity of sadness, negative mood was associated with lower levels of perceived intensity, whereas negative emotionality was associated with higher levels of perceived intensity of sadness. Overall, our findings added to the limited literature on adult temperament and emotion perception.
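The trait-by-state interaction described above is the kind of question typically tested with a moderation (interaction) regression; the sketch below illustrates this with simulated data and placeholder column names, not the authors' analysis script.

```python
# Sketch: test whether trait negative emotionality moderates the effect of
# state negative mood on perceived expression intensity (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 88  # matching the reported sample size
df = pd.DataFrame({
    "neg_mood": rng.normal(size=n),
    "neg_emotionality": rng.normal(size=n),
})
df["perceived_intensity"] = (0.3 * df.neg_mood - 0.2 * df.neg_emotionality
                             + 0.1 * df.neg_mood * df.neg_emotionality
                             + rng.normal(scale=0.5, size=n))

# "*" expands to both main effects plus their interaction term
model = smf.ols("perceived_intensity ~ neg_mood * neg_emotionality", data=df).fit()
print(model.summary().tables[1])
```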


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others’ facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants’ face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial-and-error to avoid receiving aversive stimulation by either reciprocating (congruent) or responding in the opposite manner (incongruent) to the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials when confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the underlying decision-making processes of our experimental manipulation. Our results introduce a new method to study learning and decision-making in facial expression exchange, in which there is a need to gradually adapt facial expression selection to both social and non-social reinforcements.
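As a rough illustration of the reinforcement-learning account, the sketch below implements a simple Rescorla-Wagner value update with a softmax choice rule for selecting a congruent vs. incongruent facial response. Parameters and the reward scheme are illustrative assumptions, not the fitted model from the study.

```python
# Sketch: Rescorla-Wagner learning of which facial response avoids
# aversive stimulation, with softmax action selection.
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 0.2, 3.0          # learning rate, softmax inverse temperature
Q = np.zeros(2)                 # value of [congruent, incongruent] response

for trial in range(100):
    p = np.exp(beta * Q) / np.exp(beta * Q).sum()   # softmax choice probabilities
    choice = rng.choice(2, p=p)
    # Assume congruent responses avoid the aversive outcome (reward 1), else 0
    reward = 1.0 if choice == 0 else 0.0
    Q[choice] += alpha * (reward - Q[choice])       # prediction-error update

print("learned values [congruent, incongruent]:", np.round(Q, 2))
```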


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic: the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free: identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.

