Chronic negative mood affects internal representations of negative facial expressions – An internet study

2014 ◽  
Author(s):  
Jacob Jolij ◽  
Sophia C. Wriedt ◽  
Annika Luckmann

Facial expressions are an important source of information in social interactions, as they effectively communicate someone’s emotional state. Not surprisingly, the human visual system is highly specialized in processing facial expressions. Interestingly, the processing of facial expressions is influenced by the emotional state of the observer: in a negative mood, observers are more sensitive to negative emotional expressions than when they are in a positive mood, and vice versa. Here, we investigated the effects of chronic negative mood on the perception of facial expressions by means of an online reverse correlation paradigm. We administered a depression questionnaire assessing chronic negative mood over the preceding two weeks and constructed a classification image for negative emotion for each participant via an online reverse correlation task; these images were then rated for intensity of expression by an independent group of observers. We found a strong correlation between chronic mood and the intensity of expression of the internal representation: the more negative the chronic mood, the less intense the negative expression of the internal representation. This experiment corroborates earlier findings that the perception of facial expression is affected by an observer’s mood, and that this effect may be the result of altered top-down internal representations of facial expression. Equally importantly, our results demonstrate the feasibility of administering a reverse correlation paradigm via the Internet, opening up the possibility of large-sample studies using this technique.
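
A classification image of this kind is typically computed by contrasting the noise patterns that did and did not drive the observer's judgments. The following is a minimal illustrative sketch in NumPy, not the authors' actual pipeline; the array names and trial counts are hypothetical:

```python
import numpy as np

def classification_image(noise_fields, responses):
    """Compute a reverse-correlation classification image.

    noise_fields : (n_trials, height, width) array of the random noise
                   superimposed on the base face on each trial.
    responses    : (n_trials,) boolean array, True when the observer
                   judged the stimulus as showing the target emotion.
    """
    noise_fields = np.asarray(noise_fields, dtype=float)
    responses = np.asarray(responses, dtype=bool)
    # Mean noise on "target seen" trials minus mean noise on the rest:
    # pixels that push judgments toward the target emotion stand out.
    return (noise_fields[responses].mean(axis=0)
            - noise_fields[~responses].mean(axis=0))

# Example with synthetic data: 500 trials of 128x128 white noise.
rng = np.random.default_rng(0)
noise = rng.normal(size=(500, 128, 128))
resp = rng.random(500) < 0.5
ci = classification_image(noise, resp)
```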


2020 ◽  
Vol 11 ◽  
Author(s):  
Chika Nanayama Tanaka ◽  
Hayato Higa ◽  
Noriko Ogawa ◽  
Minenori Ishido ◽  
Tomohiro Nakamura ◽  
...  

An assessment of mood or emotion is important in developing mental health measures, and facial expressions are strongly related to mood or emotion. This study therefore examined the relationship between levels of negative mood and the characteristics of the mouth when moods are drawn as facial expressions on a common platform. A cross-sectional study of Japanese college freshmen was conducted, and 1,068 valid responses were analyzed. The questionnaire survey consisted of participants’ characteristics, the Profile of Mood States (POMS), and a facial expression drawing (FACED) sheet, which was digitized and analyzed with image-analysis software. Based on the total POMS score as an index of negative mood, the participants were divided into four groups: low (L), normal (N), high (H), and very high (VH). In the L group, the drawn lines and the distance between the mouth corners were significantly longer, and circularity and roundness were significantly higher. With increasing levels of negative mood, these lengths showed significant decreasing trends. Downward-convex and enclosed figures were significantly predominant in the L group, whereas in the H and VH groups upward-convex figures were significantly predominant and mouths that were not drawn at all, or drawn as simple lines, tended to predominate. Our results suggest that mood states can be significantly related to the size and shape characteristics of mouths drawn in the FACED task on a non-verbal common platform. That is, subjects with a low negative mood may draw a larger, rounder, enclosed, downward-convex mouth, whereas subjects with a high negative mood may omit the mouth or draw it as a shorter, upward-convex line.
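
Circularity and roundness are standard shape descriptors (circularity = 4πA/P²; roundness compares the area to the circle spanned by the major axis of the fitted ellipse). Below is a minimal sketch of how such metrics could be computed from a binarized drawing with OpenCV; this is an assumption about the kind of analysis involved, not the study's actual software:

```python
import cv2
import numpy as np

def mouth_shape_metrics(binary_img):
    """Circularity and roundness of the largest contour in a binarized drawing.

    binary_img : uint8 array, drawn strokes as nonzero pixels on black.
    Returns (circularity, roundness); both equal ~1.0 for a perfect circle.
    """
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    circularity = 4.0 * np.pi * area / perimeter ** 2
    # Roundness: area relative to the circle spanned by the major axis
    # of the best-fitting ellipse (needs at least 5 contour points).
    (_, _), (width, height), _ = cv2.fitEllipse(contour)
    major_axis = max(width, height)
    roundness = 4.0 * area / (np.pi * major_axis ** 2)
    return circularity, roundness

# Example: a filled circle should give circularity and roundness near 1.
img = np.zeros((100, 100), dtype=np.uint8)
cv2.circle(img, (50, 50), 30, 255, thickness=-1)
print(mouth_shape_metrics(img))
```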


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2166
Author(s):  
Geesung Oh ◽  
Junghwan Ryu ◽  
Euiseok Jeong ◽  
Ji Hyun Yang ◽  
Sungwook Hwang ◽  
...  

In intelligent vehicles, monitoring the driver’s condition is essential, and recognizing the driver’s emotional state is one of the most challenging and important tasks. Most previous studies have focused on facial expression recognition to monitor the driver’s emotional state. However, while driving, many factors prevent drivers from revealing their emotions on their faces. To address this problem, we propose the deep learning-based driver’s real emotion recognizer (DRER), an algorithm that recognizes drivers’ real emotions, which cannot be completely identified from their facial expressions alone. The proposed algorithm comprises two models: (i) a facial expression recognition model, which adopts a state-of-the-art convolutional neural network structure; and (ii) a sensor fusion emotion recognition model, which fuses the recognized facial expression state with electrodermal activity, a bio-physiological signal representing the electrical characteristics of the skin, to recognize the driver’s real emotional state. We categorized the driver’s emotions and conducted human-in-the-loop experiments to acquire the data. Experimental results show that the proposed fusion approach achieves a 114% increase in accuracy compared to using only facial expressions and a 146% increase compared to using only electrodermal activity. In conclusion, the proposed method achieves 86.8% accuracy in recognizing the driver’s induced emotion while driving.
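
The abstract does not detail the fusion architecture, so the sketch below shows one common form of late sensor fusion consistent with the description: facial-expression class probabilities concatenated with EDA features and passed through a small classifier. All layer sizes and feature counts are hypothetical (PyTorch):

```python
import torch
import torch.nn as nn

class FusionEmotionClassifier(nn.Module):
    """Late fusion of facial-expression probabilities and EDA features.

    Sizes are illustrative guesses, not the published DRER architecture.
    """

    def __init__(self, n_expression_classes=7, n_eda_features=16, n_emotions=4):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(n_expression_classes + n_eda_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_emotions),
        )

    def forward(self, expression_probs, eda_features):
        # Concatenate the two modalities and classify the fused vector.
        return self.fusion(torch.cat([expression_probs, eda_features], dim=1))

# Example: a batch of 8 frames.
model = FusionEmotionClassifier()
probs = torch.softmax(torch.randn(8, 7), dim=1)   # from a CNN expression model
eda = torch.randn(8, 16)                          # windowed EDA statistics
logits = model(probs, eda)                        # (8, 4) emotion scores
```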


2019 ◽  
Vol 12 (1) ◽  
pp. 27-39
Author(s):  
D.V. Lucin ◽  
Y.A. Kozhukhova ◽  
E.A. Suchkova

Emotion congruence in emotion perception is manifested as increased sensitivity to emotions corresponding to the perceiver’s emotional state. In this study, an experimental procedure that robustly generates emotion congruence during the perception of ambiguous facial expressions was developed. It was hypothesized that emotion congruence would be stronger in the early stages of perception. In two experiments, happiness and sadness were elicited in 69 (mean age 20.2, 57 female) and 58 (mean age 18.2, 50 female) participants, who then judged which emotions were present in ambiguous faces. The duration of stimulus presentation was varied to allow analysis of earlier and later stages of perception. The emotion congruence effect was obtained in both experiments: happy participants perceived more happiness and less sadness in ambiguous facial expressions than sad participants did. Stimulus duration did not influence emotion congruence. Further studies should focus on juxtaposing models that connect emotion congruence mechanisms with either perception or response generation.


2020 ◽  
Vol 10 (8) ◽  
pp. 2956 ◽  
Author(s):  
Chang-Min Kim ◽  
Ellen J. Hong ◽  
Kyungyong Chung ◽  
Roy C. Park

As people communicate with each other, they use gestures and facial expressions to convey and understand emotional states. Non-verbal means of communication are essential for understanding a person’s emotional state from external cues. Recently, active studies have been conducted on lifecare services that analyze users’ facial expressions. Yet such services are currently provided only in health care centers or certain medical institutions, rather than being available in everyday life. Studies are needed to prevent accidents that occur suddenly in everyday life and to cope with emergencies. Thus, we propose facial expression analysis using line-segment feature analysis-convolutional recurrent neural network (LFA-CRNN) feature extraction for health-risk assessments of drivers. The purpose of such an analysis is to manage and monitor patients with chronic diseases, whose numbers are rapidly increasing. To prevent automobile accidents and to respond to emergency situations due to acute diseases, we propose a service that monitors a driver’s facial expressions to assess health risks and alert the driver to risk-related matters while driving. To identify health risks, deep learning is used to recognize expressions of pain and to determine whether a person is in pain while driving. Because the volume of input-image data is large, accurately analyzing facial expressions in real time is difficult for a process with limited resources. Accordingly, a line-segment feature analysis algorithm is proposed to reduce the amount of data, and the LFA-CRNN model was designed for this purpose. Through this model, the severity of a driver’s pain is classified into one of nine levels. The LFA-CRNN model consists of one convolution layer whose output is reshaped and delivered to two bidirectional gated recurrent unit layers; the biometric data are finally classified through a softmax layer. To evaluate LFA-CRNN, its performance was compared with that of the CRNN and AlexNet models on the University of Northern British Columbia and McMaster University (UNBC-McMaster) database.
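
Based solely on the architecture described above (one convolution layer, reshaping, two bidirectional GRU layers, softmax over nine classes), a rough PyTorch sketch follows. Every layer size here is a guess, and the line-segment feature analysis preprocessing is not reproduced:

```python
import torch
import torch.nn as nn

class LFACRNNSketch(nn.Module):
    """Rough sketch of the described CRNN: one convolution layer whose
    output is reshaped into a sequence, two bidirectional GRU layers,
    and a softmax over nine pain-severity classes. All sizes are
    hypothetical; the LFA preprocessing step is omitted."""

    def __init__(self, n_classes=9):
        super().__init__()
        self.conv = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.gru = nn.GRU(input_size=32 * 64, hidden_size=128,
                          num_layers=2, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * 128, n_classes)

    def forward(self, x):                        # x: (batch, 1, 64, 64)
        f = torch.relu(self.conv(x))             # (batch, 32, 64, 64)
        f = f.permute(0, 3, 1, 2)                # width becomes the time axis
        f = f.reshape(f.size(0), f.size(1), -1)  # (batch, 64, 32*64)
        out, _ = self.gru(f)
        logits = self.head(out[:, -1])           # last time step
        return torch.softmax(logits, dim=1)

probs = LFACRNNSketch()(torch.randn(2, 1, 64, 64))  # (2, 9) class probabilities
```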


2021 ◽  
Vol 12 ◽  
Author(s):  
Fan Mo ◽  
Jingjin Gu ◽  
Ke Zhao ◽  
Xiaolan Fu

Facial expression recognition plays a crucial role in understanding people’s emotions, as well as in social interaction. Patients with major depressive disorder (MDD) have repeatedly been reported to be impaired in recognizing facial expressions. This study aimed to investigate the confusion effects between two facial expressions presenting different emotions and to compare the confusion effect for each emotion pair between patients with MDD and healthy controls. Participants were asked to judge the emotion category of each facial expression in a two-alternative forced-choice paradigm. Six basic emotions (i.e., happiness, fear, sadness, anger, surprise, and disgust) were examined in pairs, resulting in 15 emotion combinations. Results showed that patients with MDD were impaired in the recognition of all basic facial expressions except happiness. Moreover, patients with MDD were more inclined than healthy controls to confuse a negative emotion (i.e., anger or disgust) with another emotion. These findings highlight that patients with MDD show a deficit in sensitivity when distinguishing between specific pairs of facial expressions.
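
Confusion effects in such a two-alternative forced-choice design reduce to a confusion matrix over displayed versus chosen emotions. A minimal sketch in Python (label names taken from the six basic emotions listed above; trial data hypothetical):

```python
import numpy as np

EMOTIONS = ["happiness", "fear", "sadness", "anger", "surprise", "disgust"]

def confusion_matrix(true_labels, chosen_labels):
    """Row = displayed emotion, column = emotion the participant chose.

    Off-diagonal cell (i, j) counts trials where expression i was shown
    but emotion j was picked, i.e. the confusions of interest.
    """
    idx = {e: k for k, e in enumerate(EMOTIONS)}
    mat = np.zeros((len(EMOTIONS), len(EMOTIONS)), dtype=int)
    for true, chosen in zip(true_labels, chosen_labels):
        mat[idx[true], idx[chosen]] += 1
    return mat

# Example: three trials from the anger/disgust pair.
m = confusion_matrix(["anger", "anger", "disgust"],
                     ["anger", "disgust", "disgust"])
```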


2000 ◽  
Vol 23 (2) ◽  
pp. 211-212 ◽  
Author(s):  
Simon C. Moore ◽  
Mike Oaksford

Rolls defines emotion in terms of innate reward and punishment. This cannot explain our results showing that people learn faster in a negative mood. We argue that what people know about their world affects their emotional state: negative emotion signals a failure to predict negative reward and hence prompts learning to resolve the ignorance. Thus, what you don't know affects how you feel.


2016 ◽  
Vol 37 (1) ◽  
pp. 16-23 ◽  
Author(s):  
Chit Yuen Yi ◽  
Matthew W. E. Murry ◽  
Amy L. Gentzler

Past research suggests that transient mood influences the perception of facial expressions of emotion, but relatively little is known about how trait-level emotionality (i.e., temperament) may influence emotion perception or interact with mood in this process. Consequently, we extended earlier work by examining how temperamental dimensions of negative emotionality and extraversion were associated with the perception accuracy and perceived intensity of three basic emotions and how the trait-level temperamental effect interacted with state-level self-reported mood in a sample of 88 adults (27 men, 18–51 years of age). The results indicated that higher levels of negative mood were associated with higher perception accuracy of angry and sad facial expressions, and higher levels of perceived intensity of anger. For perceived intensity of sadness, negative mood was associated with lower levels of perceived intensity, whereas negative emotionality was associated with higher levels of perceived intensity of sadness. Overall, our findings added to the limited literature on adult temperament and emotion perception.


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others’ facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants’ faces (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry expressions. Fifty-eight participants learned by trial and error to avoid aversive stimulation by either reciprocating (congruent) or responding opposite (incongruent) to the expression of the target face. Our results validated the method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent versus incongruent conditions. Moreover, participants performed better on trials with smiling, as compared to frowning, faces, suggesting it may be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method for studying learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
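
As a flavor of the reinforcement learning modeling mentioned, a Rescorla-Wagner learner with a softmax choice rule is a common minimal formulation for this kind of two-action avoidance task. The sketch below is a generic illustration with hypothetical parameter values, not the authors' fitted model:

```python
import numpy as np

def rescorla_wagner_softmax(rewards, alpha=0.2, beta=3.0, seed=0):
    """Simulate trial-and-error learning of which facial response
    (reciprocate vs. oppose) avoids aversive stimulation.

    rewards : (n_trials, 2) array of outcomes for each action
              (e.g. 1 = stimulation avoided, 0 = stimulation received).
    alpha   : learning rate; beta : softmax inverse temperature.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                    # value of [reciprocate, oppose]
    choices = []
    for r in rewards:
        p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax policy
        a = rng.choice(2, p=p)
        q[a] += alpha * (r[a] - q[a])  # prediction-error update
        choices.append(a)
    return np.array(choices), q

# Example: reciprocating always avoids stimulation; opposing never does.
choices, values = rescorla_wagner_softmax(np.tile([1, 0], (100, 1)))
```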


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al., 2009) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when they are sufficiently unambiguous.

