Depressive and Elative Mood Inductions as a Function of Exaggerated versus Contradictory Facial Expressions

1989 ◽  
Vol 68 (2) ◽  
pp. 443-452 ◽  
Author(s):  
Patricia T. Riccelli ◽  
Carol E. Antila ◽  
J. Alexander Dale ◽  
Herbert L. Klions

Two studies concerned the relation between facial expression, cognitive induction of mood, and perception of mood in women undergraduates. In Exp. 1, 20 subjects were randomly assigned to a group instructed in exaggerated facial expressions (Demand Group) and 20 to a group given no instructions (Nondemand Group). All subjects completed a modified Velten (1968) elation- and depression-induction sequence. Ratings of depression on the Multiple Affect Adjective Check List increased during the depression condition and decreased during the elation condition. Electromyographic measures of the zygomatic and corrugator muscles, and corresponding action-unit counts from visual scoring with the Facial Action Coding System, showed that subjects in the Demand Group made more facial expressions than those in the Nondemand Group. Demand Group subjects also rated their depression as more severe during the depression slides than did the Nondemand Group; no such effect was noted during the elation condition. In Exp. 2, 16 women were randomly assigned to a group instructed in facial expressions contradictory to those expected on the depression and elation tasks (Contradictory Expression Group), and another 16 to a group given no instructions about facial expressions (Nondemand Group). All subjects completed the induction sequence of Exp. 1. The groups did not differ on ratings of depression (MAACL) for either the depression-induction or the elation-induction, but both groups rated depression higher after the depression condition and lower after the elation condition. Electromyographic and facial-action scores verified that subjects in the Contradictory Expression Group made the requested contradictory facial expressions during the mood-induction sequences. It was concluded that the primary influence on emotion came from the cognitive mood-induction sequences; facial expressions appeared to modify emotion only in that depression was exacerbated by frowning, and a contradictory facial expression did not affect the rating of an emotion.

PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0253378
Author(s):  
Svenja Zempelin ◽  
Karolina Sejunaite ◽  
Claudia Lanza ◽  
Matthias W. Riepe

Film clips are an established means of inducing or intensifying mood states in young persons; fewer studies address the induction of mood states in older persons. Analysis of facial expression provides an opportunity to substantiate subjective mood states with a psychophysiological variable. We investigated healthy young (YA; n = 29; age 24.4 ± 2.3) and old (OA; n = 28; age 69.2 ± 7.4) participants. Subjects were exposed to film segments validated in young adults to induce four basic emotions (anger, disgust, happiness, sadness). We assessed subjective mood states with a 7-step Likert scale and facial expressions with an automated facial-expression analysis system (FaceReader™ 7.0, Noldus Information Technology b.v.), for both the four target emotions and concomitant emotions. Mood expressivity was assessed with the Berkeley Expressivity Questionnaire (BEQ) and suggestibility with the Short Suggestibility Scale (SSS). Subjective mood intensified for all target emotions in the whole group and in both the YA and OA subgroups. Facial expressions of mood intensified in the whole group for all target emotions except sadness. Induction of happiness was associated with a decrease of sadness in both subjective and objective assessment. Induction of sadness was observed in subjective assessment and was accompanied by a decrease of happiness in both subjective and objective assessment. Regression analysis showed that pre-exposure facial expressions and personality factors (BEQ, SSS) were associated with the intensity of facial expression on mood induction. We conclude that mood induction is successful regardless of age. Analysis of facial expressions complements self-assessment of mood and may serve as a means of objectifying mood change. The concordance between self-assessed mood change and facial expression is modulated by personality factors.
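
A rough sketch of the kind of analysis pipeline described here — per-frame emotion intensities from automated facial coding, aggregated per participant and clip, then regressed on baseline expression and personality scores — is given below. The file names, column layout, and choice of ordinary least squares are illustrative assumptions; the abstract does not specify FaceReader's export format or the exact regression model used.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-frame export from an automated facial-coding tool:
# columns participant, clip, frame, plus one intensity column (0..1)
# per basic emotion (anger, disgust, happiness, sadness).
frames = pd.read_csv("facereader_export.csv")

# Assumed mapping from film clip to the emotion it targets.
target = {"anger_clip": "anger", "disgust_clip": "disgust",
          "happy_clip": "happiness", "sad_clip": "sadness"}

# One score per participant x clip: mean intensity of that clip's target emotion.
long = frames.melt(id_vars=["participant", "clip", "frame"],
                   var_name="emotion", value_name="intensity")
long = long[long["emotion"] == long["clip"].map(target)]
scores = long.groupby(["participant", "clip"], as_index=False)["intensity"].mean()

# Per-participant predictors (assumed file): pre-exposure expression, BEQ, SSS.
traits = pd.read_csv("participants.csv")
data = scores.merge(traits, on="participant")

# OLS regression: is expression intensity on induction associated with
# pre-exposure expression and personality factors?
X = sm.add_constant(data[["baseline_expression", "beq", "sss"]])
print(sm.OLS(data["intensity"], X).fit().summary())
```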


Author(s):  
Yongmian Zhang ◽  
Jixu Chen ◽  
Yan Tong ◽  
Qiang Ji

This chapter describes a probabilistic framework for faithful reproduction of spontaneous facial expressions on a synthetic face model in a real-time interactive application. The framework uses a coupled Bayesian network (BN) to unify facial expression analysis and synthesis in one coherent structure. At the analysis end, we cast the Facial Action Coding System (FACS) into a dynamic Bayesian network (DBN) to capture the relationships between facial expressions and facial motions, as well as their uncertainties and dynamics. The observations fed into the DBN facial expression model are measurements of facial action units (AUs) generated by an AU model. Also implemented as a DBN, the AU model captures the rigid head movements and nonrigid facial muscular movements of a spontaneous facial expression. At the synthesis end, a static BN reconstructs the Facial Animation Parameters (FAPs) and their intensities through top-down inference, according to the current facial expression state and pose information output by the analysis end. The two BNs are connected statically through a data-stream link. The coupled BN brings several benefits. First, a facial expression is inferred through both spatial and temporal inference, so the perceptual quality of the animation is less affected by misdetection of facial features. Second, more realistic-looking facial expressions can be reproduced by modeling the dynamics of human expressions during facial expression analysis. Third, a very low transmission bitrate (9 bytes per frame) can be achieved.
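
The 9-bytes-per-frame figure implies that only a compact expression-and-pose state crosses the data-stream link between the two networks, not video or a full FAP vector. A minimal sketch of what such a per-frame packet could look like follows; the field layout is an assumption for illustration, as the chapter does not specify it.

```python
import struct

# Hypothetical 9-byte packet for the analysis -> synthesis link:
#   1 byte       expression class (e.g., 0 = neutral, 1 = happy, ...)
#   2 bytes      expression intensity, fixed-point in [0, 1]
#   3 x 2 bytes  head pose (yaw, pitch, roll) in centidegrees
PACKET = struct.Struct("<BHhhh")  # 1 + 2 + 2 + 2 + 2 = 9 bytes

def pack_frame(expr: int, intensity: float,
               yaw: float, pitch: float, roll: float) -> bytes:
    """Quantize one frame of analyzer output into a 9-byte packet."""
    return PACKET.pack(expr, round(intensity * 65535),
                       round(yaw * 100), round(pitch * 100), round(roll * 100))

def unpack_frame(packet: bytes) -> tuple:
    """Recover the (approximate) analyzer state at the synthesis end."""
    expr, intensity, yaw, pitch, roll = PACKET.unpack(packet)
    return expr, intensity / 65535, yaw / 100, pitch / 100, roll / 100

frame = pack_frame(1, 0.8, 12.5, -3.0, 0.5)
assert len(frame) == PACKET.size == 9
print(unpack_frame(frame))
```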


Author(s):  
Michel Valstar ◽  
Stefanos Zafeiriou ◽  
Maja Pantic

Automatic facial expression analysis systems have come a long way since the earliest approaches in the early 1970s. We are now at a point where the first systems are commercially applied, most notably the smile detectors included in digital cameras. As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has received significant and sustained attention within the field. Over the past 30 years, psychologists and neuroscientists have conducted extensive research on various aspects of human behaviour using facial expressions coded in terms of FACS. Automating FACS coding would make this research faster and more widely applicable, opening up new avenues to understanding how we communicate through facial expressions. Mainly because of the cost-effectiveness of existing recording equipment, until recently almost all work in this area involved 2D imagery, despite its inherent problems with pose and illumination variation. To deal with these problems, 3D recordings are increasingly used in expression analysis research. In this chapter, the authors give an overview of 2D and 3D FACS recognition and summarise current challenges and opportunities.


2012 ◽  
Vol 25 (1) ◽  
pp. 105-110 ◽  
Author(s):  
Yohko Maki ◽  
Hiroshi Yoshida ◽  
Tomoharu Yamaguchi ◽  
Haruyasu Yamaguchi

Background: A positivity recognition bias has been reported in aged individuals for facial expressions as well as for memory and visual stimuli, whereas emotional facial recognition in Alzheimer disease (AD) patients is controversial, with possible involvement of confounding factors such as deficits in spatial processing of non-emotional facial features and in verbal processing to express emotions. We therefore examined whether recognition of positive facial expressions is preserved in AD patients, using a new method that eliminates the influence of these confounding factors.
Methods: Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate factors related to verbal processing, participants were required to match stimulus and answer images, avoiding the use of verbal labels.
Results: In recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in recognition of sadness, surprise, and anger. ANC were less sensitive than YNC in recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than sensitivity to the other five expressions.
Conclusions: In AD patients, recognition of happiness was relatively preserved; happiness was recognized most sensitively, and its recognition was preserved against the influences of age and disease.


2020 ◽  
Author(s):  
Maurryce Starks ◽  
Anna Shafer-Skelton ◽  
Michela Paradiso ◽  
Aleix M. Martinez ◽  
Julie Golomb

The “spatial congruency bias” is a behavioral phenomenon in which two objects presented sequentially are more likely to be judged as the same object if they appear in the same location (Golomb et al., 2014), suggesting that irrelevant spatial location information may be bound to object representations. Here, we examine whether the spatial congruency bias extends to higher-level object judgments of facial identity and expression. On each trial, two real-world faces were sequentially presented in variable screen locations, and subjects made same-different judgments on the facial expression (Experiments 1-2) or facial identity (Experiment 3) of the stimuli. We observed a robust spatial congruency bias for judgments of facial identity, but a more fragile one for judgments of facial expression. Subjects were more likely to judge two faces as displaying the same expression if they were presented in the same location (compared to different locations), but only when the faces shared the same identity. In contrast, a spatial congruency bias was found when subjects made judgments of facial identity, even across faces displaying different facial expressions. These findings suggest a possible difference in how facial identity and facial expression are bound to spatial location.


1987 ◽  
Vol 65 (2) ◽  
pp. 495-502 ◽  
Author(s):  
Douglas E. Klions ◽  
Kim S. Sanders ◽  
Mary A. Hudak ◽  
J. Alexander Dale ◽  
Herbert L. Klions

College students of either androgynous or sex-typed orientation were randomly assigned to either an insoluble concept-formation task or a solvable one. Posttreatment scores were compared on measures of dysphoric mood (Multiple Affect Adjective Check List), electromyographic responses (corrugator and zygomatic), and discrete facial responses (Facial Action Coding System). In Study 1, 18 androgynous women were compared with 16 feminine women; in Study 2, 16 androgynous men were compared with 16 masculine men. The insoluble task was associated with more corrugator activity (frowning) than the solvable task in both studies. Feminine women displayed more corrugator responses across both tasks than androgynous women; however, masculine men did not differ from androgynous men in over-all corrugator activity. Androgynous women smiled more than feminine women on the facial action coding measure. Men subjected to the insoluble task reported significantly more anxiety, depression, and hostility. Masculine men scored higher on anxiety during the insoluble task than androgynous men, while the latter scored somewhat higher on anxiety in the solvable condition.


PC-based face recognition is a mature and well-founded technique that is widely used for authentication. In practice, however, facial expressions vary between images of the same face. By studying face geometry, the five universal expressions — smile, sadness, anger, surprise, and neutral — can be detected automatically, determining which expression is being made in a given image. Using facial data with varying expressions, we ran experiments to measure the accuracy of several machine learning methods under changes to the face images, such as a change in expression, which affects both training and recognition. Our objective is to combine the features of neutral facial expressions with those of expressive face images (smiling, angry, sad) to improve recognition accuracy.
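
The augmentation idea in the last sentence — giving the classifier each subject's neutral-face features alongside the expressive ones, so it can learn the expression-induced deviation rather than identity-specific geometry — might look roughly like the sketch below. The synthetic features, feature layout, and choice of an SVM are assumptions for illustration; the abstract does not name the classifiers or features used.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Toy stand-ins for geometric face features (e.g., landmark distances):
# one neutral feature vector per subject, plus expressive variants.
n_subjects, n_feats = 50, 32
neutral = {s: rng.normal(size=n_feats) for s in range(n_subjects)}

X, y = [], []
for s in range(n_subjects):
    for label in range(5):  # assumed coding: 0=neutral .. 4=surprise
        expressive = neutral[s] + label * rng.normal(scale=0.3, size=n_feats)
        # Key step: concatenate the expressive features with their deviation
        # from the same subject's neutral face, so the model sees the change
        # in geometry rather than only the absolute geometry.
        X.append(np.concatenate([expressive, expressive - neutral[s]]))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.25, random_state=0, stratify=y)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("expression accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```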


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261666
Author(s):  
Ryota Kobai ◽  
Hiroki Murakami

Self-focus is a type of cognitive processing that maintains negative emotions; bodily feedback is also essential for maintaining emotions. This study investigated the effect of interactions between self-focused attention and facial expressions on emotions. The results indicated that the control facial-expression manipulation after self-focus reduced happiness scores, whereas the smiling facial-expression manipulation after self-focus marginally increased happiness scores. However, facial expressions did not affect positive emotions after the other-focus manipulation. These findings suggest that self-focus plays a pivotal role in the effect of facial expressions on positive emotions. Self-focus alone, however, is insufficient to decrease positive emotions; it is the interaction between self-focus and facial expressions that is crucial in shaping positive emotions.


2015 ◽  
Vol 3 (1) ◽  
Author(s):  
Friska G. Batoteng ◽  
Taufiq F. Pasiak ◽  
Shane H. R. Ticoalu

Abstract: Facial expression recognition is one way of recognizing emotions that has not received much attention. The muscles that form facial expressions, the musculi facialis, move the face and produce the six basic expressions of human emotion: happy, sad, angry, fearful, disgusted, and surprised. Human facial expressions can be measured using the Facial Action Coding System (FACS). This study aimed to determine which facial muscles were used most and least frequently, and to determine the emotional expression of Jokowi, a presidential candidate, through assessment of the facial muscles using FACS. This is a retrospective descriptive study. The sample comprised all 30 photographs of Jokowi's facial expressions from the first presidential debate in 2014; the stills were captured from the debate video and then analyzed using FACS. The most frequently used action unit and facial muscle was AU 1, produced by the musculus frontalis pars medialis (14.75%). The muscles appearing least often in Jokowi's facial expressions were the musculus orbicularis oculi pars palpebralis and, for AU 24, the musculus orbicularis oris (0.82%). The dominant facial expression shown by Jokowi was sadness (36.67%).
Keywords: musculi facialis, facial expression, expression of emotion, FACS
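
Once the photographs have been FACS-coded by hand, the frequency analysis reported here — which action units and muscles appear most and least often, and which emotion label dominates — reduces to simple tallying. A minimal sketch, with hypothetical codings standing in for the 30 debate photos:

```python
from collections import Counter

# Hypothetical manual FACS codings: for each photo, the action units present
# and the emotion label assigned by the coder (30 entries in the study).
codings = [
    {"aus": [1, 4, 15], "emotion": "sad"},
    {"aus": [1, 2, 26], "emotion": "surprise"},
    {"aus": [6, 12],    "emotion": "happy"},
    # ... one entry per photo
]

au_counts = Counter(au for photo in codings for au in photo["aus"])
total = sum(au_counts.values())
for au, n in au_counts.most_common():
    print(f"AU {au}: {n} occurrences ({100 * n / total:.2f}% of all AUs)")

emotions = Counter(photo["emotion"] for photo in codings)
label, n = emotions.most_common(1)[0]
print(f"dominant expression: {label} ({100 * n / len(codings):.2f}% of photos)")
```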


2018 ◽  
Author(s):  
Nathaniel Haines ◽  
Matthew W. Southward ◽  
Jennifer S. Cheavens ◽  
Theodore Beauchaine ◽  
Woo-Young Ahn

Facial expressions are fundamental to interpersonal communication, including social interaction, and allow people of different ages, cultures, and languages to quickly and reliably convey emotional information. Historically, facial expression research has followed from discrete emotion theories, which posit a limited number of distinct affective states that are represented with specific patterns of facial action. Much less work has focused on dimensional features of emotion, particularly positive and negative affect intensity. This is likely, in part, because achieving inter-rater reliability for facial action and affect intensity ratings is painstaking and labor-intensive. We use computer-vision and machine learning (CVML) to identify patterns of facial actions in 4,648 video recordings of 125 human participants, which show strong correspondences to positive and negative affect intensity ratings obtained from highly trained coders. Our results show that CVML can both (1) determine the importance of different facial actions that human coders use to derive positive and negative affective ratings, and (2) efficiently automate positive and negative affect intensity coding on large facial expression databases. Further, we show that CVML can be applied to individual human judges to infer which facial actions they use to generate perceptual emotion ratings from facial expressions.
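
Both uses of CVML claimed here — ranking which facial actions drive coders' affect ratings, and automating those ratings on large databases — fall out of a standard supervised pipeline once AU activations have been extracted from the videos. The sketch below uses a random forest with synthetic data to illustrate the shape of such a pipeline; the actual feature extractor, model family, and data are assumptions, not the paper's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Assumed inputs: per-video mean activations for 20 facial action units
# (from a computer-vision toolkit) and coders' positive-affect intensity
# ratings for the same videos.
n_videos, n_aus = 4648, 20
au_features = rng.random((n_videos, n_aus))
# Toy ground truth: two smile-related AU columns drive the ratings.
ratings = (3 * au_features[:, 5] + 2 * au_features[:, 11]
           + rng.normal(0, 0.1, n_videos))

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("cross-validated R^2:",
      cross_val_score(model, au_features, ratings, cv=5).mean())

# (1) Which facial actions matter most for the affect ratings?
model.fit(au_features, ratings)
top = np.argsort(model.feature_importances_)[::-1][:5]
print("most informative AU columns:", top)

# (2) Automate coding: predict affect intensity for new, unrated videos.
new_videos = rng.random((10, n_aus))
print("predicted intensities:", model.predict(new_videos).round(2))
```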

