Mask wearing increases eye involvement during smiling: a facial EMG study

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Shuntaro Okazaki ◽  
Haruna Yamanami ◽  
Fumika Nakagawa ◽  
Nozomi Takuwa ◽  
Keith James Kawabata Duncan

Abstract The use of face masks has become ubiquitous. Although mask wearing is a convenient way to reduce the spread of disease, it is important to know how masks affect our communication via facial expression. For example, when we are wearing a mask and meet a friend, are our facial expressions different from when we are not? We investigated the effect of face mask wearing on facial expression, including the area around the eyes. We measured surface electromyography from the zygomaticus major, orbicularis oculi, and depressor anguli oris muscles while people smiled and talked with or without a mask. Only the activity of the orbicularis oculi was facilitated by wearing the mask. We thus conclude that mask wearing may increase the recruitment of the eyes during smiling. In other words, we can express joy and happiness even when wearing a face mask.
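The muscle-specific comparison described in the abstract (orbicularis oculi activity with vs. without a mask) can be sketched as a paired comparison of per-participant EMG amplitudes. This is a minimal illustration under assumed data, not the authors' analysis pipeline; the `rms` and `paired_t` helpers and the amplitude values are all hypothetical.

```python
import math

def rms(signal):
    """Root-mean-square amplitude of a rectified EMG trace."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def paired_t(a, b):
    """Paired t-statistic for two matched samples (a_i vs b_i)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical per-participant orbicularis oculi RMS amplitudes (arbitrary units)
masked = [1.8, 2.1, 1.9, 2.4, 2.0]
unmasked = [1.5, 1.7, 1.6, 2.0, 1.8]
t = paired_t(masked, unmasked)  # positive t: greater activity when masked
```

A positive paired t-statistic here would correspond to the reported facilitation of eye-region activity under the mask.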


2014 ◽  
Vol 4 (1) ◽  
pp. 95-105 ◽  
Author(s):  
J. Zraqou ◽  
W. Alkhadour ◽  
A. Al-Nu'aimi

Enabling computer systems to track and recognize facial expressions and then infer emotions from real-time video is a challenging research topic. In this work, a real-time approach to emotion recognition through facial expression in live video is introduced. Several automatic methods for face localization, facial feature tracking, and facial expression recognition are employed. Robust tracking is achieved by using a face mask to resolve mismatches that could be generated during the tracking process. Action units (AUs) are then built to recognize the facial expression in each frame. The main objective of this work is to enable prediction of human behaviour, such as criminal intent, anger, or nervousness.
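The AU-to-expression step described above can be illustrated with a toy mapping from detected action units to basic-emotion labels, in the spirit of EMFACS-style combination rules. The specific AU combinations below are a simplified sketch for illustration, not the rule set used in this paper.

```python
# Simplified EMFACS-style AU combinations for a few basic emotions.
# Real systems use richer rules plus intensity scores; this is illustrative only.
EMOTION_RULES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer + lid/lip tighteners
}

def classify(detected_aus):
    """Return the first emotion whose required AUs are all in the detected set."""
    detected = set(detected_aus)
    for emotion, required in EMOTION_RULES.items():
        if required <= detected:
            return emotion
    return "neutral/unknown"

print(classify([6, 12, 25]))  # happiness
```

In a per-frame pipeline like the one described, `classify` would run on the AUs extracted from each tracked frame.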


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0260871
Author(s):  
Matthias Franz ◽  
Tobias Müller ◽  
Sina Hahn ◽  
Daniel Lundqvist ◽  
Dirk Rampoldt ◽  
...  

The immediate detection and correct processing of affective facial expressions are among the most important competences in social interaction and thus a main subject in emotion and affect research. Generally, studies in these research domains use pictures of adults who display affective facial expressions as experimental stimuli. However, for studies investigating developmental psychology and attachment behaviour it is necessary to use age-matched stimuli, where it is children that display affective expressions. PSYCAFE represents a newly developed picture-set of children’s faces. It includes reference portraits of girls and boys aged 4 to 6 years, averaged digitally from different individual pictures, which were categorized by cluster analysis into six basic affects (fear, disgust, happiness, sadness, anger and surprise) plus a neutral facial expression. This procedure led to deindividualized and affect-prototypical portraits. Individual affect-expressive portraits of adults from an already validated picture-set (KDEF) were used in a similar way to create affect-prototypical images of adults as well. The stimulus set has been validated on human observers and entails emotion recognition accuracy rates and scores for intensity, authenticity and likeability ratings of the specific affect displayed. Moreover, the stimuli have also been characterized by the iMotions Facial Expression Analysis Module, providing additional data on probability values representing the likelihood that the stimuli depict the expected affect. Finally, the validation data from human observers and iMotions are compared to data on facial mimicry of healthy adults in response to these portraits, measured by facial EMG (m. zygomaticus major and m. corrugator supercilii).


2015 ◽  
Vol 3 (1) ◽  
Author(s):  
Friska G. Batoteng ◽  
Taufiq F. Pasiak ◽  
Shane H. R. Ticoalu

Abstract: Facial expression recognition is one way to recognize emotions that has not received much attention. The muscles that form facial expressions, known as the musculi facialis, move the face and produce the human facial expressions happy, sad, angry, fearful, disgusted and surprised, which are the six basic expressions of human emotion. Human facial expressions can be measured using FACS (Facial Action Coding System). This study aims to determine which facial muscles are most frequently and most rarely used, and to determine the emotion expression of Jokowi, a presidential candidate, through assessment of the facial muscles using FACS. This is a retrospective descriptive study. The research sample comprised all 30 photos of Jokowi’s facial expressions at the first presidential debate in 2014. Samples were taken from a video of the debate, converted to photos, and then analyzed using FACS. The research showed that the most used action unit and facial muscle was AU 1, which acts on the musculus frontalis pars medialis (14.75%). The muscles appearing least often in Jokowi’s facial expressions were the musculus orbicularis oculi pars palpebralis and, for AU 24, the musculus orbicularis oris (0.82%). The dominant facial expression seen in Jokowi was the sad facial expression (36.67%).
Keywords: musculi facialis, facial expression, expression of emotion, FACS
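The percentages reported in this study (each AU's share of all coded AU occurrences across the photo sample) can be computed with a simple frequency count. This is a sketch with made-up codings, not the study's data; `au_frequencies` and the `photos` list are hypothetical.

```python
from collections import Counter

def au_frequencies(coded_photos):
    """Given per-photo lists of FACS action units, return each AU's share
    of all AU occurrences across the sample, as a percentage."""
    counts = Counter(au for photo in coded_photos for au in photo)
    total = sum(counts.values())
    return {au: 100 * n / total for au, n in counts.items()}

# Hypothetical FACS codings of four photos (AU numbers only, no intensities)
photos = [[1, 12], [1, 4], [1, 12, 24], [12]]
freqs = au_frequencies(photos)  # e.g. AU 1 accounts for 3 of 8 occurrences
```

The most and least frequent AUs then fall out of `max`/`min` over `freqs`.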


2018 ◽  
Author(s):  
Michael Carl Philipp ◽  
Michael Bernstein ◽  
Eric John Vanman ◽  
Lucy Johnston

Reciprocating others’ smiles is important for maintaining social connections, as it both signals affiliation to others and elicits affiliative reactions from them. Feelings of social exclusion may increase affiliative mimicry to strengthen affiliative bonds with others. In this study we examined whether social exclusion leads people to selectively mimic the facial expressions of more affiliative-looking smiles. Participants (N=48) first wrote about either a time they were excluded or a neutral event. They then classified a series of 20 smiling faces, half spontaneous enjoyment smiles and half posed smiles. Facial electromyography recorded muscle activity involved in smiling. Excluded participants distinguished the two smile types better than controls. Excluded participants also showed greater zygomaticus major (mouth smiling) activity toward enjoyment smiles compared to posed smiles; control participants did not. Orbicularis oculi (eye crinkle) activity matched that of the smile type viewed, but did not vary by exclusion condition. Affiliative social regulation is discussed as a possible explanation for these effects.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Jari K. Hietanen ◽  
Anneli Kylliäinen ◽  
Mikko J. Peltola

Abstract We tested whether facial reactions to another person’s facial expressions depend on the self-relevance of the observed expressions. In the present study (n = 44), we measured facial electromyographic (zygomatic and corrugator) activity and autonomic arousal (skin conductance) responses to a live model’s smiling and neutral faces. In one condition, the participant and the model were able to see each other normally, whereas in the other condition, the participant was led to believe that the model could not see the participant. The results showed that the increment of zygomatic activity in response to smiling versus neutral faces was greater when the participants believed they were being watched than when they believed they were not. However, zygomatic responses to smiles themselves did not differ between the conditions; rather, the results suggested that the participants’ zygomatic responses to neutral faces were attenuated when they believed they were being watched. Autonomic responses to smiling faces were greater in the believed-to-be-watched condition than in the believed-not-watched condition. The results suggest that the self-relevance of another individual’s facial expression modulates autonomic arousal responses and, to a lesser extent, facial EMG responses.


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262344
Author(s):  
Maria Tsantani ◽  
Vita Podgajecka ◽  
Katie L. H. Gray ◽  
Richard Cook

The use of surgical-type face masks has become increasingly common during the COVID-19 pandemic. Recent findings suggest that it is harder to categorise the facial expressions of masked faces than of unmasked faces. To date, studies of the effects of mask-wearing on emotion recognition have used categorisation paradigms: authors have presented facial expression stimuli and examined participants’ ability to attach the correct label (e.g., happiness, disgust). While the ability to categorise particular expressions is important, this approach overlooks the fact that expression intensity is also informative during social interaction. For example, when predicting an interactant’s future behaviour, it is useful to know whether they are slightly fearful or terrified, contented or very happy, slightly annoyed or angry. Moreover, because categorisation paradigms force observers to pick a single label to describe their percept, any additional dimensionality within observers’ interpretation is lost. In the present study, we adopted a complementary emotion-intensity rating paradigm to study the effects of mask-wearing on expression interpretation. In an online experiment with 120 participants (82 female), we investigated how the presence of face masks affects the perceived emotional profile of prototypical expressions of happiness, sadness, anger, fear, disgust, and surprise. For each of these facial expressions, we measured the perceived intensity of all six emotions. We found that the perceived intensity of intended emotions (i.e., the emotion that the actor intended to convey) was reduced by the presence of a mask for all expressions except anger. Additionally, when viewing all expressions except surprise, masks increased the perceived intensity of non-intended emotions (i.e., emotions that the actor did not intend to convey). Intensity ratings were unaffected by presentation duration (500 ms vs 3000 ms) or by attitudes towards mask wearing.
These findings shed light on the ambiguity that arises when interpreting the facial expressions of masked faces.


2004 ◽  
Vol 36 (05) ◽  
Author(s):  
K Wolf ◽  
R Mass ◽  
F Kiefer ◽  
K Eckert ◽  
N Weinhold ◽  
...  

2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others’ facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants’ face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruent) or responding opposite to (incongruent) the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials when confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method to study learning and decision-making in facial expression exchange, in which there is a need to gradually adapt facial expression selection to both social and non-social reinforcements.
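The trial-and-error learning the authors model with reinforcement learning can be illustrated by a minimal Rescorla-Wagner-style value update, in which a response that avoids aversive stimulation is reinforced over trials. This is a generic textbook update under assumed parameters, not the authors' fitted model.

```python
def rw_update(value, reward, alpha=0.2):
    """One Rescorla-Wagner update: move the value estimate toward the
    obtained reward by learning rate alpha."""
    return value + alpha * (reward - value)

# Value of "reciprocate the smile" starts neutral; avoiding shock counts as reward 1.
v = 0.0
for _ in range(20):
    v = rw_update(v, reward=1.0)
# v approaches 1.0, mirroring how participants learn the shock-avoiding response
```

Combined with a choice rule (e.g., softmax over response values) and a drift-diffusion account of response times, updates like this are the standard building blocks of the model class the abstract describes.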


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (which were used by Tomasik) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.

