Temporal Unfolding of Micro-valences in Facial Expression Evoked by Visual, Auditory, and Olfactory Stimuli

2020 ◽  
Vol 1 (4) ◽  
pp. 208-224
Author(s):  
Kornelia Gentsch ◽  
Ursula Beermann ◽  
Lingdan Wu ◽  
Stéphanie Trznadel ◽  
Klaus R. Scherer

Abstract
Appraisal theories suggest that valence appraisal should be differentiated into micro-valences, such as intrinsic pleasantness and goal-/need-related appraisals. In contrast to a macro-valence approach, this dissociation explains, among other things, the emergence of mixed or blended emotions. Here, we extend earlier research that showed that these valence types can be empirically dissociated. We examine the timing and the response patterns of these two micro-valences via measuring facial muscle activity changes (electromyography, EMG) over the brow and the cheek regions. In addition, we explore the effects of the sensory stimulus modality (vision, audition, and olfaction) on these patterns. The two micro-valences were manipulated in a social judgment task: first, intrinsic un/pleasantness (IP) was manipulated by exposing participants to appropriate stimuli presented in different sensory domains followed by a goal conduciveness/obstruction (GC) manipulation consisting of feedback on participants’ judgments that were congruent or incongruent with their task-related goal. The results show significantly different EMG responses and timing patterns for both types of micro-valence, confirming the prediction that they are independent, consecutive parts of the appraisal process. Moreover, the lack of interaction effects with the sensory stimulus modality suggests high generalizability of the underlying appraisal mechanisms across different perception channels.

Author(s):  
Konstantin Frank ◽  
Nicholas Moellhoff ◽  
Antonia Kaiser ◽  
Michael Alfertshofer ◽  
Robert H. Gotkin ◽  
...  

Abstract
The evaluation of neuromodulator treatment outcomes can be performed by noninvasive surface-derived facial electromyography (fEMG), which can detect cumulative muscle fiber activity deep to the skin. The objective of the present study is to identify the most reliable facial locations where the motor unit action potentials (MUAPs) of various facial muscles can be quantified during fEMG measurements. The study population consisted of five males and seven females (31.0 [12.9] years, body mass index of 22.15 [1.6] kg/m2). Facial muscle activity was assessed in several facial regions in each patient utilizing noninvasive surface-derived fEMG. Variables of interest were the average root mean square of three performed muscle contractions (= signal) (µV), the mean root mean square between those contractions with the face in a relaxed expression (= baseline noise) (µV), and the signal-to-noise ratio (SNR). A total of 1,709 processed fEMG signals revealed one specific reliable location in each investigated region, based on each muscle's anatomy, on the highest value of the SNR, on the lowest value for the baseline noise, and on the practicability of positioning the sensor while performing a facial expression. The results of this exploratory study may help guide future researchers and practitioners in designing study protocols and measuring individual facial MUAPs when utilizing fEMG. The locations presented herein were selected based on the measured parameters (SNR, signal, baseline noise) and on the practicability and reproducibility of sensor placement.
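The signal, baseline-noise, and SNR quantities described in this abstract can be sketched as follows. This is a minimal illustration, assuming the signal is segmented into contraction and rest windows beforehand; the function names and segmentation are hypothetical, not taken from the study's protocol.

```python
import numpy as np

def rms(segment):
    """Root mean square of one EMG segment (µV)."""
    return float(np.sqrt(np.mean(np.square(segment))))

def emg_snr(contractions, rest_segments):
    """Signal = mean RMS over the contraction segments;
    baseline noise = mean RMS over the rest segments between them;
    SNR = signal / baseline noise."""
    signal = float(np.mean([rms(c) for c in contractions]))
    noise = float(np.mean([rms(r) for r in rest_segments]))
    return signal, noise, signal / noise
```

A higher SNR at a given sensor location indicates that the contraction-related activity stands out more clearly above the relaxed-face baseline, which is one of the selection criteria the study applies.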


2021 ◽  
Vol 70 ◽  
pp. 1-10
Author(s):  
Sara Casaccia ◽  
Erik J. Sirevaag ◽  
Mark G. Frank ◽  
Joseph A. O'Sullivan ◽  
Lorenzo Scalise ◽  
...  

Author(s):  
Ahmad Hoirul Basori ◽  
Hani Moaiteq Abdullah AlJahdali

Virtual humans play vital roles in virtual reality and games. Enriching a virtual human through its facial expression is one of the aspects most studied and improved by researchers. This study aims to demonstrate the combination of facial action units (from the Facial Action Coding System, FACS) and facial muscle models to produce realistic facial expressions. The experiment succeeded in producing particular expressions such as anger, happiness, and sadness, which are able to convey the emotional state of the virtual human. This achievement is believed to bring fuller mental immersion for the virtual human and its audience. Future work will generate more complex virtual human expressions that combine physical factors such as wrinkles and fluid dynamics for tears or sweating.
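The AU-to-muscle combination this abstract describes can be sketched roughly as below. The mapping table, function name, and intensity scale are illustrative assumptions, not the paper's actual model; the point is only that per-AU intensities accumulate into per-muscle activations, which then drive the face rig.

```python
# Hypothetical mapping from FACS action units to facial-muscle weights.
AU_TO_MUSCLES = {
    "AU4_brow_lowerer": {"corrugator_supercilii": 1.0},
    "AU12_lip_corner_puller": {"zygomaticus_major": 1.0},
    "AU15_lip_corner_depressor": {"depressor_anguli_oris": 1.0},
}

def blend_expression(au_intensities):
    """Combine AU intensities (0..1) into per-muscle activations,
    clamping each accumulated activation to 1.0."""
    muscles = {}
    for au, intensity in au_intensities.items():
        for muscle, weight in AU_TO_MUSCLES.get(au, {}).items():
            muscles[muscle] = min(1.0, muscles.get(muscle, 0.0) + weight * intensity)
    return muscles

# e.g. an anger-like pose: strong brow lowering, mild lip-corner depression
anger = blend_expression({"AU4_brow_lowerer": 0.9,
                          "AU15_lip_corner_depressor": 0.4})
```

In a real pipeline the resulting activations would feed blendshape or muscle-simulation parameters rather than a plain dictionary.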


2019 ◽  
Vol 16 (6) ◽  
pp. 066029
Author(s):  
Gizem Yilmaz ◽  
Abdullah Salih Budan ◽  
Pekcan Ungan ◽  
Betilay Topkara ◽  
Kemal S Türker

2000 ◽  
Vol 7 (3) ◽  
pp. 156-168 ◽  
Author(s):  
Sheryl L. Reminger ◽  
Alfred W. Kaszniak ◽  
Patricia R. Dalby
