Reduced facial expression and social context in major depression: discrepancies between facial muscle activity and self-reported emotion

2000 · Vol 95 (2) · pp. 157-167
Author(s): Jean-Guido Gehricke, David Shapiro


2020 · Vol 1 (4) · pp. 208-224
Author(s): Kornelia Gentsch, Ursula Beermann, Lingdan Wu, Stéphanie Trznadel, Klaus R. Scherer

Abstract: Appraisal theories suggest that valence appraisal should be differentiated into micro-valences, such as intrinsic pleasantness and goal-/need-related appraisals. In contrast to a macro-valence approach, this dissociation explains, among other things, the emergence of mixed or blended emotions. Here, we extend earlier research showing that these valence types can be empirically dissociated. We examine the timing and response patterns of these two micro-valences by measuring changes in facial muscle activity (electromyography, EMG) over the brow and cheek regions. In addition, we explore the effects of the sensory stimulus modality (vision, audition, and olfaction) on these patterns. The two micro-valences were manipulated in a social judgment task: first, intrinsic un/pleasantness (IP) was manipulated by exposing participants to appropriate stimuli in different sensory domains; this was followed by a goal conduciveness/obstruction (GC) manipulation consisting of feedback on participants' judgments that was congruent or incongruent with their task-related goal. The results show significantly different EMG responses and timing patterns for both types of micro-valence, confirming the prediction that they are independent, consecutive parts of the appraisal process. Moreover, the lack of interaction effects with the sensory stimulus modality suggests high generalizability of the underlying appraisal mechanisms across different perception channels.
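For readers who want a concrete picture of the kind of EMG analysis the abstract outlines, the Python sketch below baseline-corrects rectified corrugator (brow) epochs and averages them per appraisal condition; zygomaticus (cheek) data would be handled the same way. The sampling rate, epoch layout, and variable names are assumptions made for illustration, not the authors' actual processing pipeline.

```python
import numpy as np

FS = 1000  # assumed sampling rate in Hz (illustrative)

def baseline_correct(epoch, baseline_ms=200):
    """Subtract the mean of the pre-stimulus baseline from one EMG epoch."""
    n_base = int(baseline_ms / 1000 * FS)
    return epoch - epoch[:n_base].mean()

def condition_average(epochs):
    """Average baseline-corrected epochs (trials x samples) into one waveform."""
    corrected = np.array([baseline_correct(e) for e in epochs])
    return corrected.mean(axis=0)

# Hypothetical usage: each entry holds rectified brow-EMG trials for one
# appraisal condition (e.g. pleasant vs. unpleasant stimuli).
rng = np.random.default_rng(0)
epochs_brow = {
    "unpleasant": rng.random((30, 3 * FS)),
    "pleasant": rng.random((30, 3 * FS)),
}
averages = {cond: condition_average(trials) for cond, trials in epochs_brow.items()}
print({cond: round(wave.mean(), 4) for cond, wave in averages.items()})
```

Comparing such averaged waveforms across successive time windows is the kind of timing analysis the abstract refers to when it distinguishes the IP and GC phases of the appraisal process.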


Author(s): Konstantin Frank, Nicholas Moellhoff, Antonia Kaiser, Michael Alfertshofer, Robert H. Gotkin, ...

Abstract: The evaluation of neuromodulator treatment outcomes can be performed by noninvasive surface-derived facial electromyography (fEMG), which detects cumulative muscle fiber activity deep to the skin. The objective of the present study was to identify the most reliable facial locations at which the motor unit action potentials (MUAPs) of various facial muscles can be quantified during fEMG measurements. The study population consisted of five males and seven females (31.0 [12.9] years, body mass index 22.15 [1.6] kg/m2). Facial muscle activity was assessed in several facial regions of each patient using noninvasive surface-derived fEMG. Variables of interest were the average root mean square of three performed muscle contractions (= signal) (µV), the mean root mean square between those contractions with the face in a relaxed expression (= baseline noise) (µV), and the signal-to-noise ratio (SNR). A total of 1,709 processed fEMG signals revealed one reliable location in each investigated region, based on each muscle's anatomy, the highest SNR value, the lowest baseline noise value, and the practicability of positioning the sensor while performing a facial expression. The results of this exploratory study may help guide future researchers and practitioners in designing study protocols and measuring individual facial MUAPs with fEMG. The locations presented herein were selected based on the measured parameters (SNR, signal, baseline noise) and on the practicability and reproducibility of sensor placement.
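The signal, baseline-noise, and SNR variables described above follow directly from root-mean-square computations, which can be sketched as below. Only the definitions (signal = mean RMS across three contractions, baseline noise = mean RMS at rest, SNR = signal / noise) are taken from the abstract; the segment lengths and example values are hypothetical.

```python
import numpy as np

def rms(segment):
    """Root mean square of an EMG segment (µV)."""
    segment = np.asarray(segment, dtype=float)
    return np.sqrt(np.mean(segment ** 2))

def signal_noise_snr(contraction_segments, rest_segments):
    """Signal = mean RMS over the contraction segments, baseline noise = mean
    RMS over the rest segments, SNR = signal / noise (dimensionless)."""
    signal = np.mean([rms(seg) for seg in contraction_segments])
    noise = np.mean([rms(seg) for seg in rest_segments])
    return signal, noise, signal / noise

# Hypothetical example: three contractions and two rest periods per location.
rng = np.random.default_rng(1)
contractions = [rng.normal(0, 50, 2000) for _ in range(3)]  # ~50 µV of activity
rests = [rng.normal(0, 5, 2000) for _ in range(2)]          # ~5 µV baseline noise
signal, noise, snr = signal_noise_snr(contractions, rests)
print(f"signal = {signal:.1f} µV, noise = {noise:.1f} µV, SNR = {snr:.1f}")
```

A sensor location with a high SNR and low baseline noise, as computed here, is exactly the kind of "reliable location" the study set out to identify.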


2021 · Vol 70 · pp. 1-10
Author(s): Sara Casaccia, Erik J. Sirevaag, Mark G. Frank, Joseph A. O'Sullivan, Lorenzo Scalise, ...

Author(s): Ahmad Hoirul Basori, Hani Moaiteq Abdullah AlJahdali

Virtual humans play a vital role in virtual reality and games, and enriching a virtual human through its facial expression is one of the aspects most studied and improved by researchers. This study aims to demonstrate how combining facial action units (FACS) and facial muscles produces realistic facial expressions. The experiments succeeded in producing particular expressions, such as anger, happiness, and sadness, that convey the emotional state of the virtual human. This achievement is believed to bring fuller mental immersion for the virtual human and its audience. Future work will generate more complex virtual human expressions that combine physical factors such as wrinkles and fluid dynamics for tears or sweating.
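As a rough sketch of how FACS action units can drive a virtual character's expression, the snippet below maps a few textbook AU combinations (e.g., happiness as AU6 + AU12) to blend-shape-style weights. The AU selections, weight values, and channel names are illustrative assumptions, not the rig or muscle model used in the study.

```python
from typing import Dict

# Textbook AU pairings; the weights and channel names are hypothetical.
EXPRESSION_AUS: Dict[str, Dict[str, float]] = {
    "happy": {"AU6_cheek_raiser": 0.8, "AU12_lip_corner_puller": 1.0},
    "sad":   {"AU1_inner_brow_raiser": 0.7, "AU15_lip_corner_depressor": 0.9},
    "anger": {"AU4_brow_lowerer": 1.0, "AU23_lip_tightener": 0.6},
}

def blend_shape_weights(expression: str, intensity: float = 1.0) -> Dict[str, float]:
    """Scale the AU activations of one expression by an overall intensity (0..1);
    the result could be fed to a character rig's blend-shape channels."""
    aus = EXPRESSION_AUS.get(expression, {})
    return {au: min(1.0, weight * intensity) for au, weight in aus.items()}

print(blend_shape_weights("happy", 0.5))
```

Driving the underlying facial muscle deformation from these AU weights, rather than from hand-animated keyframes, is the combination of FACS and muscle modeling the abstract describes.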


2019 · Vol 16 (6) · pp. 066029
Author(s): Gizem Yilmaz, Abdullah Salih Budan, Pekcan Ungan, Betilay Topkara, Kemal S Türker

2019 · Vol 41 (2) · pp. 159-166
Author(s): Ana Julia de Lima Bomfim, Rafaela Andreas dos Santos Ribeiro, Marcos Hortes Nisihara Chagas

Abstract: Introduction: The recognition of facial expressions of emotion is essential to living in society. However, individuals with major depression tend to interpret imprecise information in a negative light, which can directly affect their capacity to decode social stimuli. Objective: To compare basic facial expression recognition skills during tasks with static and dynamic stimuli in older adults with and without major depression. Methods: Older adults were selected through a screening process for psychiatric disorders at a primary care service. Psychiatric evaluations were performed using criteria from the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5). Twenty-three older adults with a diagnosis of depression and 23 older adults without a psychiatric diagnosis were asked to perform two facial emotion recognition tasks using static and dynamic stimuli. Results: Individuals with major depression demonstrated greater accuracy in recognizing sadness (p=0.023) and anger (p=0.024) during the task with static stimuli and lower accuracy in recognizing happiness during the task with dynamic stimuli (p=0.020). The impairment was mainly related to the recognition of emotions of lower intensity. Conclusions: The performance of older adults with depression in facial expression recognition tasks with static and dynamic stimuli differs from that of older adults without depression, with greater accuracy for negative emotions (sadness and anger) and lower accuracy for the recognition of happiness.
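A minimal sketch of the kind of per-emotion group comparison reported in the Results is shown below, using randomly generated placeholder accuracy scores for the two groups of 23 participants. The choice of a Mann-Whitney U test is an assumption for illustration; the abstract does not state which statistical test produced the reported p-values.

```python
import numpy as np
from scipy import stats

# Randomly generated placeholder accuracies for 23 participants per group;
# these are NOT the study's data and only demonstrate the test call.
rng = np.random.default_rng(2)
acc_depressed = rng.uniform(0.6, 0.9, size=23)  # e.g., accuracy for sadness, depression group
acc_control = rng.uniform(0.4, 0.8, size=23)    # e.g., accuracy for sadness, control group

# Mann-Whitney U is a common choice for bounded accuracy scores; the original
# study's exact statistical procedure is not specified in the abstract.
u_stat, p_value = stats.mannwhitneyu(acc_depressed, acc_control, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```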


2000 · Vol 7 (3) · pp. 156-168
Author(s): Sheryl L. Reminger, Alfred W. Kaszniak, Patricia R. Dalby
