Facial Expression Is Driven by Appraisal and Generates Appraisal Inference

Author(s):  
Klaus Scherer ◽  
Marcello Mortillaro ◽  
Marc Mehu

Emotion researchers generally concur that most emotions in humans and animals are elicited by the appraisal of events that are highly relevant for the organism, generating action tendencies that are often accompanied by changes in expression, autonomic physiology, and feeling. Scherer’s component process model of emotion (CPM) postulates that individual appraisal checks drive the dynamics and configuration of the facial expression of emotion and that emotion recognition is based on appraisal inference with consequent emotion attribution. This chapter outlines the model and reviews the accrued empirical evidence that supports these claims, covering studies that experimentally induced specific appraisals or that induced emotions with typical appraisal configurations, measuring facial expression via electromyographic recording or behavioral coding of facial action units. In addition, recent studies analyzing the mechanisms of emotion recognition are shown to support the theoretical assumptions.

2020 ◽  
pp. 230-260
Author(s):  
Devon Schiller

Our knowledge about the facial expression of emotion may well be entering an age of scientific revolution. Conceptual models of facial behavior and emotion phenomena appear to be undergoing a paradigm shift, brought on at least in part by advances in facial recognition technology and automated facial expression analysis. And the use of technological labor by corporate, government, and institutional agents to extract data capital from both the static morphology of the face and the dynamic movement of emotional expression is accelerating. Through a brief survey, the author introduces what he terms biometric art, a form of new media art at the cutting edge of this advanced science and technology of the human face. In the last ten years, a growing number of media artists in countries across the globe have been creating such biometric artworks, and today awards, exhibitions, and festivals are starting to be dedicated to this new art form. The author explores the making of biometric art as a critical practice in which artists investigate the roles played by science and technology in society, experimenting, for example, with Basic Emotions Theory, emotion artificial intelligence, and the Facial Action Coding System. Taking a comprehensive view of art, science, and technology, the author surveys the history of design for biometric art that uses facial recognition and emotion recognition, the individuals who create such art and the institutions that support it, as well as how this biometric art is made and what it is about. In so doing, the author contributes to the history, practice, and theory of the facial expression of emotion, sketching an interdisciplinary area of inquiry for further and future research, with relevance to academicians and creatives alike who question how we think about what we feel.


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


2010 ◽  
Vol 33 (6) ◽  
pp. 464-480 ◽  
Author(s):  
Paula M. Niedenthal ◽  
Martial Mermillod ◽  
Marcus Maringer ◽  
Ursula Hess

The set of 30 stimulating commentaries on our target article helps to define the areas of our initial position that should be reiterated or else made clearer and, more importantly, the ways in which moderators of and extensions to the SIMS can be imagined. In our response, we divide the areas of discussion into (1) a clarification of our meaning of “functional,” (2) a consideration of our proposed categories of smiles, (3) a reminder about the role of top-down processes in the interpretation of smile meaning in SIMS, (4) an evaluation of the role of eye contact in the interpretation of facial expression of emotion, and (5) an assessment of the possible moderators of the core SIMS model. We end with an appreciation of the proposed extensions to the model, and note that the future of research on the problem of the smile appears to us to be assured.


2009 ◽  
Vol 29 (48) ◽  
pp. 15089-15099 ◽  
Author(s):  
C. L. Philippi ◽  
S. Mehta ◽  
T. Grabowski ◽  
R. Adolphs ◽  
D. Rudrauf

1989 ◽  
pp. 204-221 ◽  
Author(s):  
Carlo Caltagirone ◽  
Pierluigi Zoccolotti ◽  
Giancarlo Originale ◽  
Antonio Daniele ◽  
Alessandra Mammucari

Author(s):  
Paula M. Niedenthal ◽  
Jamin B. Halberstadt ◽  
Jonathan Margolin ◽  
Åse H. Innes-Ker
