Data-Driven Facial Expression Analysis from Live Video

2021
Author(s): Wee Kiat Tay

Emotion analytics is the study of human behavior by analyzing the responses people produce when they experience different emotions. In this thesis, we investigate emotion analytics solutions that use computer vision to detect emotions automatically from facial expressions in live video. Because anxiety is an emotion that can lead to more serious conditions such as anxiety disorders and depression, we propose two hypotheses for detecting anxiety from facial expressions. The first is that the complex emotion “anxiety” is a subset of the basic emotion “fear”. The second is that anxiety can be distinguished from fear by differences in head and eye motion. We test the first hypothesis by implementing a basic-emotions detector based on the Facial Action Coding System (FACS) to detect fear in videos of anxious faces. When this proves less accurate than desired, we implement an alternative solution based on Gabor filters; a comparison shows the Gabor-based solution to be inferior. We test the second hypothesis using scatter plots and statistical analysis of head and eye motion in videos of fear and anxiety expressions, and find that head pitch differs significantly between fear and anxiety. To conclude the thesis, we implement a software system around the FACS-based basic-emotions detector and evaluate it by comparing commercials using the emotions detected from viewers’ facial expressions.
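The head-motion analysis behind the second hypothesis can be illustrated with a minimal sketch. Assuming each video has already been reduced to a single mean head-pitch value (the abstract does not specify the exact pipeline), a two-sample test such as the following would check whether pitch differs between fear and anxiety clips; the numbers below are placeholders, not data from the thesis.

```python
# Sketch: Welch's t-test on mean head pitch per video (degrees).
# All values are hypothetical placeholders.
import numpy as np
from scipy import stats

fear_pitch = np.array([2.1, -1.3, 0.8, 3.5, -0.2, 1.9])        # hypothetical fear videos
anxiety_pitch = np.array([-4.2, -6.1, -3.8, -5.0, -2.9, -4.7])  # hypothetical anxiety videos

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(fear_pitch, anxiety_pitch, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```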


Author(s): Michel Valstar, Stefanos Zafeiriou, Maja Pantic

Automatic Facial Expression Analysis systems have come a long way since the earliest approaches of the early 1970s. We are now at a point where the first systems are commercially applied, most notably the smile detectors included in digital cameras. As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has received significant and sustained attention within the field. Over the past 30 years, psychologists and neuroscientists have conducted extensive research on various aspects of human behaviour using facial expression analysis coded in terms of FACS. Automating FACS coding would make this research faster and more widely applicable, opening up new avenues to understanding how we communicate through facial expressions. Mainly because of the cost effectiveness of existing recording equipment, until recently almost all work in this area used 2D imagery, despite its inherent problems with pose and illumination variation. To deal with these problems, 3D recordings are increasingly used in expression analysis research. In this chapter, the authors give an overview of 2D and 3D FACS recognition and summarise current challenges and opportunities.


2020
Author(s): Meng Liu, Yaocong Duan, Robin A. A. Ince, Chaona Chen, Oliver G. B. Garrod, et al.

One of the longest-standing debates in the emotion sciences is whether emotions are represented as discrete categories, such as happy or sad, or as continuous fundamental dimensions, such as valence and arousal. Theories of communication make specific predictions about the facial expression signals that would represent emotions as either discrete or dimensional messages. Here, we address this debate by testing whether facial expressions of emotion categories are embedded in a dimensional space of affective signals, leading to multiplexed communication of affective information. Using a data-driven method based on human perception, we modelled the facial expressions representing the six classic emotion categories – happy, surprise, fear, disgust, anger and sad – and those representing the dimensions of valence and arousal. We then evaluated their embedding by mapping and validating the facial expression categories onto the valence-arousal space. Results showed that facial expressions of the six classic emotion categories formed dissociable clusters within the valence-arousal space, each located in a semantically congruent region (e.g., happy facial expressions distributed in positively valenced regions). Crucially, we further demonstrated that the embedding generalizes beyond the six classic categories, using a broader set of 19 complex emotion categories (e.g., delighted, fury, and terrified). Together, our results show that facial expressions of emotion categories comprise specific combinations of valence- and arousal-related face movements, suggesting multiplexed signalling of categorical and dimensional affective information. Our results unite current theories of emotion representation and form the basis of a new framework for multiplexed communication of affective information.
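A minimal sketch of the kind of embedding check described above: given (valence, arousal) coordinates for facial expression samples labeled by emotion category, a cluster-separation measure such as the silhouette score indicates whether the categories form dissociable clusters. The centroids and samples below are illustrative assumptions, not the authors' perceptual modelling data.

```python
# Sketch: do emotion categories form dissociable clusters in a
# valence-arousal space? Coordinates are illustrative assumptions.
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Hypothetical (valence, arousal) centroids for the six classic categories.
centroids = {
    "happy":    ( 0.8,  0.5),
    "surprise": ( 0.2,  0.8),
    "fear":     (-0.6,  0.7),
    "disgust":  (-0.7,  0.2),
    "anger":    (-0.8,  0.6),
    "sad":      (-0.6, -0.5),
}

points, labels = [], []
for name, (valence, arousal) in centroids.items():
    # 30 noisy samples per category around its centroid.
    points.append(rng.normal(loc=(valence, arousal), scale=0.1, size=(30, 2)))
    labels += [name] * 30

X = np.vstack(points)
# Silhouette scores near 1 indicate well-separated (dissociable) clusters.
print("silhouette score:", silhouette_score(X, labels))
```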


2022, Vol 2022, pp. 1-8
Author(s): Stefan Lautenbacher, Teena Hassan, Dominik Seuss, Frederik W. Loy, Jens-Uwe Garbas, et al.

Introduction. The experience of pain is regularly accompanied by facial expressions. The gold standard for analyzing these facial expressions is the Facial Action Coding System (FACS), which provides so-called action units (AUs) as parametric indicators of facial muscular activity. Particular combinations of AUs have been found to be pain-indicative. Manual coding of AUs is, however, too time- and labor-intensive for clinical practice. New developments in automatic facial expression analysis promise automatic detection of AUs, which might be used for pain detection. Objective. Our aim was to compare manual with automatic AU coding of facial expressions of pain. Methods. FaceReader7 was used for automatic AU detection. We compared the performance of FaceReader7, using videos of 40 participants (20 younger, mean age 25.7 years; 20 older, mean age 52.1 years) undergoing experimentally induced heat pain, against manually coded AUs as the gold-standard labels. We calculated the percentages of correctly and falsely classified AUs and computed sensitivity/recall, precision, and overall agreement (F1) as indicators of congruency. Results. The automatic coding of AUs showed only poor to moderate outcomes for sensitivity/recall, precision, and F1. Congruency was better for younger than for older faces and better for pain-indicative AUs than for other AUs. Conclusion. At present, automatic analyses of genuine facial expressions of pain qualify at best as semiautomatic systems, which require further validation by human observers before they can be used to validly assess facial expressions of pain.
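The congruency indicators named in the abstract (sensitivity/recall, precision, and F1) can be computed per AU from frame-level binary codes, as in the following sketch; the toy arrays stand in for the study's manual and FaceReader7 codings.

```python
# Sketch: congruency metrics for one AU from frame-level binary codes.
# The arrays are toy placeholders, not data from the study.
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

manual_au4 = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 1])     # human FACS coder (gold standard)
automatic_au4 = np.array([0, 1, 0, 1, 0, 1, 1, 0, 0, 1])  # automatic detector output

print("sensitivity/recall:    ", recall_score(manual_au4, automatic_au4))
print("precision:             ", precision_score(manual_au4, automatic_au4))
print("overall agreement (F1):", f1_score(manual_au4, automatic_au4))
```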


2016, Vol 37 (1), pp. 16-23
Author(s): Chit Yuen Yi, Matthew W. E. Murry, Amy L. Gentzler

Abstract. Past research suggests that transient mood influences the perception of facial expressions of emotion, but relatively little is known about how trait-level emotionality (i.e., temperament) may influence emotion perception or interact with mood in this process. We therefore extended earlier work by examining how the temperamental dimensions of negative emotionality and extraversion were associated with the perception accuracy and perceived intensity of three basic emotions, and how these trait-level temperamental effects interacted with state-level self-reported mood, in a sample of 88 adults (27 men, 18–51 years of age). The results indicated that higher levels of negative mood were associated with higher perception accuracy for angry and sad facial expressions and with higher perceived intensity of anger. For perceived intensity of sadness, negative mood was associated with lower perceived intensity, whereas negative emotionality was associated with higher perceived intensity. Overall, our findings add to the limited literature on adult temperament and emotion perception.
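A hedged sketch of how a trait-by-state interaction of this kind might be modelled: an ordinary least squares regression with an interaction term between negative mood (state) and negative emotionality (trait) predicting perception accuracy. The variable names and values are hypothetical, and the study's actual analysis is not specified here.

```python
# Sketch: OLS regression with a trait-by-state interaction term.
# Variable names and values are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "accuracy":         [0.72, 0.65, 0.81, 0.58, 0.77, 0.69, 0.84, 0.61],
    "neg_mood":         [1.2, 2.8, 0.9, 3.1, 1.5, 2.2, 0.7, 3.4],
    "neg_emotionality": [2.0, 3.5, 1.8, 3.9, 2.4, 2.9, 1.5, 4.1],
})

# The '*' operator expands to both main effects plus their interaction.
model = smf.ols("accuracy ~ neg_mood * neg_emotionality", data=df).fit()
print(model.params)
```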


Author(s): Sophie Jörg, Andrew T. Duchowski, Krzysztof Krejtz, Anna Niedzielska

2021, Vol 5 (3), pp. 13
Author(s): Heting Wang, Vidya Gaddy, James Ross Beveridge, Francisco R. Ortega

The role of affect has long been studied in human–computer interaction. Unlike previous studies that focused on the seven basic emotions, this work introduces an avatar named Diana who expresses a higher level of emotional intelligence. To adapt to users’ varying affects during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK and a descriptive analysis was provided. When participants then collaborated with Diana, their subjective responses were collected and the time to completion was recorded. Three modes of Diana were involved: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert-scale questionnaire and the NASA TLX. Results from the questionnaires were not statistically different across modes. However, the emotionally responsive Diana obtained more positive responses, and people spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana’s face as natural, while four mentioned uncomfortable feelings caused by the Uncanny Valley effect.
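One way to compare ordinal questionnaire ratings across the three avatar modes is a Kruskal-Wallis test, sketched below with made-up ratings; the study does not state which test was used, so this is an illustration rather than a reproduction of its analysis.

```python
# Sketch: Kruskal-Wallis test on Likert ratings from the three modes.
# Ratings are made-up placeholders.
from scipy.stats import kruskal

flat_faced = [3, 2, 4, 3, 3, 2, 4]
mimicry = [4, 3, 4, 5, 3, 4, 3]
responsive = [4, 5, 4, 4, 5, 3, 5]

h_stat, p_value = kruskal(flat_faced, mimicry, responsive)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```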


2017, Vol 3 (2), pp. 735-738
Author(s): Wolfgang Doneit, Jana Lohse, Kristina Glesing, Clarissa Simon, Monika Fischer, et al.

Abstract. In the I-CARE project, a technical system for tablet devices is being developed that captures the personal needs and skills of people with dementia. The system provides activation content such as music videos, biographical photographs, and quizzes on various topics of interest to people with dementia, their families, and professional caregivers. To adapt the system, the activation content is adjusted to the daily condition of individual users. For this purpose, emotions are automatically detected from facial expressions, motion, and voice. The daily interactions of the users with the tablet devices are documented in log files, which can be merged into an event list. In this paper, we propose an advanced format for event lists and a data analysis strategy. A transformation scheme is developed to obtain datasets with features and time series for popular data mining methods. The proposed methods are applied to analysing the interactions of people with dementia with the I-CARE tablet device. We show how the new event list format and the transformation scheme can be used to compress the stored data, to identify groups of users, and to model changes in user behaviour. As the I-CARE user studies are still ongoing, simulated benchmark log files are used to illustrate the data mining strategy. We discuss possible solutions to challenges that appear in the context of I-CARE and that are relevant to a broad range of applications.
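A minimal sketch of the event-list-to-features transformation idea: a merged event list is aggregated into one feature row per user with pandas. The column layout (user, timestamp, event, duration) is an assumption for illustration, not the I-CARE event list format.

```python
# Sketch: aggregate a merged event list into one feature row per user.
# The column layout is an assumed example, not the I-CARE format.
import pandas as pd

events = pd.DataFrame({
    "user": ["u1", "u1", "u1", "u2", "u2", "u3"],
    "timestamp": pd.to_datetime([
        "2020-01-01 09:00", "2020-01-01 09:05", "2020-01-02 10:00",
        "2020-01-01 11:00", "2020-01-03 11:30", "2020-01-02 08:15",
    ]),
    "event": ["music", "quiz", "photo", "music", "music", "quiz"],
    "duration_s": [180, 240, 120, 300, 200, 90],
})

# Per-user features: event count, total usage time, number of active days.
features = events.groupby("user").agg(
    n_events=("event", "count"),
    total_duration_s=("duration_s", "sum"),
    active_days=("timestamp", lambda ts: ts.dt.normalize().nunique()),
)
print(features)
```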


2014, Vol 20 (3), pp. 302-312
Author(s): Aleksey I. Dumer, Harriet Oster, David McCabe, Laura A. Rabin, Jennifer L. Spielman, et al.

Abstract. Given the associations between facial movement and voice, the potential of the Lee Silverman Voice Treatment (LSVT) to alleviate decreased facial expressivity, termed hypomimia, in Parkinson's disease (PD) was examined. Fifty-six participants took part: 16 PD participants who underwent LSVT, 12 PD participants who underwent articulation treatment (ARTIC), 17 untreated PD participants, and 11 controls without PD. All produced monologues about happy emotional experiences at pre- and post-treatment timepoints ("T1" and "T2," respectively), 1 month apart. The LSVT, ARTIC, and untreated PD groups were matched on demographic and health status variables. The frequency and variability of facial expressions (Frequency and Variability) observable in 1-minute monologue video recordings were measured using the Facial Action Coding System (FACS). At T1, the Frequency and Variability of participants with PD were significantly lower than those of controls. Increases in Frequency and Variability from T1 to T2 were significantly greater for LSVT participants than for ARTIC or untreated participants. Whereas the Frequency and Variability of ARTIC participants at T2 were significantly lower than those of controls, LSVT participants did not differ significantly from controls on these variables at T2. These findings suggest that LSVT reduces parkinsonian hypomimia; their implications for PD-related psychosocial problems are considered. (JINS, 2014, 20, 1–11)
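A hedged sketch of the group comparison described above: pre-to-post change scores in expression Frequency are compared across the LSVT, ARTIC, and untreated groups with a one-way ANOVA. The change scores are invented placeholders, and the ANOVA stands in for whatever test the study actually used.

```python
# Sketch: one-way ANOVA on Frequency change scores (T2 - T1) per group.
# All values are invented placeholders.
import numpy as np
from scipy.stats import f_oneway

lsvt_change = np.array([5, 7, 4, 6, 8, 5])
artic_change = np.array([1, 0, 2, -1, 1, 0])
untreated_change = np.array([0, -1, 1, 0, -2, 1])

f_stat, p_value = f_oneway(lsvt_change, artic_change, untreated_change)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```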

