For Now We See through an AI Darkly; but Then Face-to-Face: A Brief Survey of Emotion Recognition in Biometric Art

2020 · pp. 230–260 · Author(s): Devon Schiller

Our knowledge about the facial expression of emotion may well be entering an age of scientific revolution. Conceptual models of facial behavior and emotion phenomena appear to be undergoing a paradigm shift, brought on at least in part by advances in facial recognition technology and automated facial expression analysis. And the use of technological labor by corporate, government, and institutional agents to extract data capital from both the static morphology of the face and the dynamic movement of the emotions is accelerating. Through a brief survey, the author seeks to introduce what he terms biometric art, a form of new media art at the cutting edge of this advanced science and technology of the human face. In the last ten years, an increasing number of media artists in countries across the globe have been creating such biometric artworks, and today awards, exhibitions, and festivals are starting to be dedicated to this new art form. The author explores the making of this biometric art as a critical practice in which artists investigate the roles played by science and technology in society, experimenting, for example, with Basic Emotions Theory, emotion artificial intelligence, and the Facial Action Coding System. Taking a comprehensive view of art, science, and technology, the author surveys the history of design for biometric art that uses facial recognition and emotion recognition, the individuals who create such art and the institutions that support it, as well as how this biometric art is made and what it is about. In doing so, the author contributes to the history, practice, and theory of the facial expression of emotion, sketching an interdisciplinary area of inquiry for further research, with relevance to academicians and creatives alike who question how we think about what we feel.

2010 · Vol. 33 (6) · pp. 417–433 · Author(s): Paula M. Niedenthal, Martial Mermillod, Marcus Maringer, Ursula Hess

Abstract: The recent application of theories of embodied or grounded cognition to the recognition and interpretation of facial expression of emotion has led to an explosion of research in psychology and the neurosciences. However, despite the accelerating number of reported findings, it remains unclear how the many component processes of emotion and their neural mechanisms actually support embodied simulation. Equally unclear is what triggers the use of embodied simulation versus perceptual or conceptual strategies in determining meaning. The present article integrates behavioral research from social psychology with recent research in the neurosciences in order to lend coherence to extant and future research on this topic. The roles of several of the brain's reward systems, as well as the amygdala, somatosensory cortices, and motor centers, are examined. These are then linked to behavioral and brain research on facial mimicry and eye gaze. Articulation of the mediators and moderators of facial mimicry and gaze is particularly useful in guiding the interpretation of relevant findings from the neurosciences. Finally, a model of the processing of the smile, the most complex of the facial expressions, is presented as a means of illustrating how to advance the application of theories of embodied cognition in the study of facial expression of emotion.


Author(s): Klaus Scherer, Marcello Mortillaro, Marc Mehu

Emotion researchers generally concur that most emotions in humans and animals are elicited by appraisals of events that are highly relevant for the organism, generating action tendencies that are often accompanied by changes in expression, autonomic physiology, and feeling. Scherer's component process model of emotion (CPM) postulates that individual appraisal checks drive the dynamics and configuration of the facial expression of emotion, and that emotion recognition is based on appraisal inference with consequent emotion attribution. This chapter outlines the model and reviews the accrued empirical evidence supporting these claims, covering studies that experimentally induced specific appraisals or induced emotions with typical appraisal configurations, measuring facial expression via electromyographic recording or behavioral coding of facial action units. In addition, recent studies analyzing the mechanisms of emotion recognition are shown to support the theoretical assumptions.
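To make the appraisal-inference step concrete, here is a minimal Python sketch of the logic the chapter describes: observed facial action units suggest appraisal checks, and the inferred appraisals are then matched against emotion profiles to attribute an emotion. The specific action-unit-to-appraisal mappings and emotion profiles below are simplified placeholders for illustration only, not the CPM's actual predictions.

```python
# Illustrative sketch of appraisal inference leading to emotion attribution.
# The AU-to-appraisal mapping and emotion profiles are simplified
# placeholders, NOT the CPM's empirically derived predictions.

# Hypothetical mapping: observed action units -> appraisal checks they suggest
AU_TO_APPRAISAL = {
    1: "novelty", 2: "novelty", 5: "novelty",      # brow raise, upper lid raise
    4: "unpleasantness", 9: "unpleasantness",      # brow lower, nose wrinkle
    12: "goal_conduciveness",                      # lip corner pull
    23: "low_coping_potential",                    # lip tightener
}

# Hypothetical appraisal profiles associated with emotion labels
EMOTION_PROFILES = {
    "joy":   {"novelty", "goal_conduciveness"},
    "anger": {"unpleasantness", "low_coping_potential"},
    "fear":  {"novelty", "unpleasantness", "low_coping_potential"},
}

def attribute_emotion(observed_aus):
    """Infer appraisals from observed AUs, then attribute the best-matching emotion."""
    inferred = {AU_TO_APPRAISAL[au] for au in observed_aus if au in AU_TO_APPRAISAL}
    # Score each emotion by how much of its appraisal profile was inferred
    scores = {emo: len(profile & inferred) / len(profile)
              for emo, profile in EMOTION_PROFILES.items()}
    return max(scores, key=scores.get), inferred, scores

if __name__ == "__main__":
    emotion, appraisals, scores = attribute_emotion([4, 9, 23])
    print(f"Inferred appraisals: {appraisals}")
    print(f"Attributed emotion:  {emotion} (scores: {scores})")
```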


2020 · Author(s): Joshua W. Maxwell, Eric Ruthruff, Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic, the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and were thus processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al., 2009) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free: identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.
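As a rough illustration of the backward correspondence effect logic, the sketch below computes a BCE from hypothetical dual-task reaction times: if Task 1 responses are faster when the Task 2 face corresponds with the Task 1 response, the face must have been processed before the central bottleneck. The trial structure and numbers are invented for illustration and do not come from the study.

```python
# Illustrative computation of a backward correspondence effect (BCE) from
# dual-task data. Values are hypothetical; the point is the logic: if Task 2
# information (the facial expression) influences Task 1 reaction times, it
# was processed in parallel with Task 1, i.e., before the central bottleneck.

from statistics import mean

# Hypothetical trials: Task 1 reaction time (ms) and whether the Task 2 face
# corresponded with the Task 1 response on that trial.
trials = [
    {"rt1": 612, "corresponding": True},
    {"rt1": 598, "corresponding": True},
    {"rt1": 655, "corresponding": False},
    {"rt1": 641, "corresponding": False},
    {"rt1": 605, "corresponding": True},
    {"rt1": 663, "corresponding": False},
]

rt1_corr = mean(t["rt1"] for t in trials if t["corresponding"])
rt1_noncorr = mean(t["rt1"] for t in trials if not t["corresponding"])

# A positive BCE (non-corresponding trials slower than corresponding trials)
# suggests the Task 2 facial expression was processed automatically.
bce = rt1_noncorr - rt1_corr
print(f"BCE = {bce:.1f} ms ({'positive' if bce > 0 else 'non-positive'})")
```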


2010 · Vol. 33 (6) · pp. 464–480 · Author(s): Paula M. Niedenthal, Martial Mermillod, Marcus Maringer, Ursula Hess

Abstract: The set of 30 stimulating commentaries on our target article helps to define the areas of our initial position that should be reiterated or else made clearer and, more importantly, the ways in which moderators of and extensions to the SIMS can be imagined. In our response, we divide the areas of discussion into (1) a clarification of our meaning of "functional," (2) a consideration of our proposed categories of smiles, (3) a reminder about the role of top-down processes in the interpretation of smile meaning in SIMS, (4) an evaluation of the role of eye contact in the interpretation of facial expression of emotion, and (5) an assessment of the possible moderators of the core SIMS model. We end with an appreciation of the proposed extensions to the model, and note that the future of research on the problem of the smile appears to us to be assured.


2009 · Vol. 29 (48) · pp. 15089–15099 · Author(s): C. L. Philippi, S. Mehta, T. Grabowski, R. Adolphs, D. Rudrauf

1989 · pp. 204–221 · Author(s): Carlo Caltagirone, Pierluigi Zoccolotti, Giancarlo Originale, Antonio Daniele, Alessandra Mammucari

2019 · Vol. 9 (21) · Article 4678 · Author(s): Daniel Canedo, António J. R. Neves

Emotion recognition has attracted major attention in numerous fields because of its relevant applications in the contemporary world: marketing, psychology, surveillance, and entertainment are some examples. An emotion can be recognized in several ways; this paper, however, focuses on facial expressions, presenting a systematic review on the matter. To that end, 112 papers on this topic published in ACM, IEEE, BASE, and Springer between January 2006 and April 2019 were extensively reviewed. The most frequently used methods and algorithms, such as face detection, smoothing, Principal Component Analysis (PCA), Local Binary Patterns (LBP), Optical Flow (OF), and Gabor filters, are first introduced and summarized. This review identified a clear difficulty in translating the high facial expression recognition (FER) accuracy achieved in controlled environments to uncontrolled and pose-variant environments. Future efforts in the FER field should focus on multimodal systems robust enough to handle the adversities of real-world scenarios. A thorough analysis of the research on FER in computer vision, based on the selected papers, is presented. This review aims not only to serve as a reference for future research on emotion recognition but also to provide readers with an overview of the work done on this topic.
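To make the typical pipeline concrete, the sketch below chains two of the methods named above, Haar-cascade face detection (OpenCV) and uniform Local Binary Pattern histograms (scikit-image), into features for an off-the-shelf SVM classifier. It is a minimal illustrative example assuming a set of labeled grayscale images, not a reconstruction of any system reviewed in the paper.

```python
# Minimal sketch of a classic FER pipeline: face detection + LBP features +
# an off-the-shelf classifier. Assumes OpenCV, scikit-image, scikit-learn,
# and an iterable of (grayscale image, emotion label) pairs.

import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

# Haar-cascade frontal face detector shipped with OpenCV
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def lbp_features(gray_face, radius=1, n_points=8):
    """Uniform LBP histogram of a detected, resized face crop."""
    face = cv2.resize(gray_face, (96, 96))
    lbp = local_binary_pattern(face, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2,
                           range=(0, n_points + 2), density=True)
    return hist

def extract(gray_image):
    """Return LBP features for the largest detected face, or None."""
    faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1,
                                          minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return lbp_features(gray_image[y:y + h, x:x + w])

def train_fer(samples):
    """samples: iterable of (grayscale image, emotion label) pairs."""
    X, y = [], []
    for img, label in samples:
        feats = extract(img)
        if feats is not None:
            X.append(feats)
            y.append(label)
    return SVC(kernel="rbf").fit(X, y)
```

In the reviewed literature, this same skeleton is typically varied by swapping in a different detector, feature representation (PCA, Gabor filters, Optical Flow), or classifier.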

