Analysis of Facial Expressions Made While Watching a Video Eliciting Compassion

2020 ◽  
Vol 127 (2) ◽  
pp. 317-346
Author(s):  
Martin Kanovský ◽  
Martina Baránková ◽  
Júlia Halamová ◽  
Bronislava Strnádelová ◽  
Jana Koróniová

The aim of the study was to describe the spontaneous facial expressions elicited in viewers by a compassion-inducing video, in terms of the muscular activity of single facial action units (AUs). We recruited a convenience sample of 111 undergraduate psychology students, aged 18-25 years (M = 20.53; SD = 1.62), to watch (at home alone) a short video stimulus eliciting compassion, and we recorded the respondents’ faces using webcams. We used both a manual analysis, based on the Facial Action Coding System, and an automatic analysis of the holistic recognition of facial expressions as obtained through EmotionID software. Manual facial analysis revealed that, during the compassionate moment of the video stimulus, AU1 (inner-brow raiser), AU4 (brow lowerer), AU7 (lid tightener), AU17 (chin raiser), AU24 (lip presser), and AU55 (head tilt left) occurred more often than other AUs. These same AUs also occurred more often during the compassionate moment than during the baseline recording. Consistent with these findings, automatic facial analysis during the compassionate moment showed that anger occurred more often than other emotions; during the baseline moment, contempt occurred less often than other emotions. Further research is necessary to fully describe the facial expression of compassion.
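The per-AU comparison described above (occurrence during the compassionate moment versus the baseline recording) can be illustrated with a small sketch. The data below are hypothetical, and the exact McNemar test is one reasonable choice for paired binary occurrences; this is not the authors’ analysis code.

```python
# Hedged sketch: comparing how often one AU occurs during the compassionate
# moment vs. the baseline recording, per participant (hypothetical data).
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
n_participants = 111

# Hypothetical binary occurrence vectors: 1 = AU observed in that segment.
au_baseline = rng.integers(0, 2, n_participants)    # e.g., AU1 during baseline
au_compassion = rng.integers(0, 2, n_participants)  # e.g., AU1 during compassion

# Discordant pairs drive the paired comparison.
only_compassion = int(np.sum((au_compassion == 1) & (au_baseline == 0)))
only_baseline = int(np.sum((au_compassion == 0) & (au_baseline == 1)))

# Exact McNemar test: under H0 the discordant pairs split 50/50.
result = binomtest(only_compassion, only_compassion + only_baseline, p=0.5)
print(f"AU in compassion segment only: {only_compassion}, "
      f"baseline only: {only_baseline}; p = {result.pvalue:.3f}")
```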

Animals ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1643
Author(s):  
Pia Haubro Andersen ◽  
Sofia Broomé ◽  
Maheen Rashid ◽  
Johan Lundblad ◽  
Katrina Ask ◽  
...  

Automated recognition of human facial expressions of pain and emotions is, to a certain degree, a solved problem, using approaches based on computer vision and machine learning. However, the application of such methods to horses has proven difficult. Major barriers are the lack of sufficiently large, annotated databases for horses and difficulties in obtaining correct classifications of pain because horses are non-verbal. This review describes our work to overcome these barriers, using two different approaches. One involves the use of a manual, but relatively objective, classification system for facial activity (the Facial Action Coding System), where data are analyzed for pain expressions after coding using machine learning principles. We have devised tools that can aid manual labeling by identifying the faces and facial keypoints of horses. This approach provides promising results in the automated recognition of facial action units from images. The second approach, recurrent neural network end-to-end learning, requires less extraction of features and representations from the video but instead depends on large volumes of video data with ground truth. Our preliminary results clearly suggest that dynamics are important for pain recognition and show that combinations of recurrent neural networks can classify experimental pain in a small number of horses better than human raters.
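To make the second approach concrete, here is a minimal sketch of a recurrent sequence classifier of the kind the review describes, written in PyTorch. The architecture, feature dimension, and random inputs are illustrative assumptions, not the authors’ model.

```python
# Hedged sketch: a minimal recurrent (GRU) classifier for pain / no-pain from a
# sequence of per-frame features. Frame features are random stand-ins for
# per-frame embeddings or keypoint descriptors.
import torch
import torch.nn as nn

class PainGRU(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # pain vs. no pain

    def forward(self, x):                  # x: (batch, frames, feat_dim)
        _, h = self.gru(x)                 # h: (1, batch, hidden)
        return self.head(h[-1])            # logits: (batch, 2)

model = PainGRU()
clips = torch.randn(4, 120, 128)           # 4 clips, 120 frames, 128-dim features
logits = model(clips)
print(logits.shape)                         # torch.Size([4, 2])
```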


2021 ◽  
Author(s):  
Alan S. Cowen ◽  
Kunalan Manokara ◽  
Xia Fang ◽  
Disa Sauter ◽  
Jeffrey A Brooks ◽  
...  

Central to science and technology are questions about how to measure facial expression. The current gold standard is the Facial Action Coding System (FACS), which is often assumed to account for all facial muscle movements relevant to perceived emotion. However, the mapping from FACS codes to perceived emotion is not well understood. Six prototypical configurations of facial action units (AUs) are sometimes assumed to account for perceived emotion, but this hypothesis remains largely untested. Here, using statistical modeling, we examine how FACS codes actually correspond to perceived emotions in a wide range of naturalistic expressions. Each of 1456 facial expressions was independently FACS coded by two experts (r = .84, κ = .84). Naive observers reported the emotions they perceived in each expression in many different ways, including emotion categories (N = 666); valence, arousal, and appraisal dimensions (N = 1116); authenticity (N = 121); and free response (N = 193). We find that facial expressions are much richer in meaning than typically assumed: at least 20 patterns of facial muscle movements captured by FACS have distinct perceived emotional meanings. Surprisingly, however, FACS codes do not offer a complete description of real-world facial expressions, capturing no more than half of the reliable variance in perceived emotion. Our findings suggest that the perceived emotional meanings of facial expressions are most accurately and efficiently represented using a wide range of carefully selected emotion concepts, such as the Cowen & Keltner (2019) taxonomy of 28 emotions. Further work is needed to characterize the anatomical bases of these facial expressions.
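The two quantities reported above, inter-coder agreement and the share of variance in perceived emotion captured by FACS codes, can be sketched on synthetic data as follows. Cohen’s kappa and a cross-validated linear model are stand-ins; the authors’ actual statistical modeling is not reproduced here.

```python
# Hedged sketch on synthetic data:
# (1) inter-coder agreement for a single AU across expressions (Cohen's kappa),
# (2) cross-validated R^2 of a linear model predicting mean perceived-emotion
#     ratings from binary AU codes, as a rough analogue of "variance captured".
import numpy as np
from sklearn.metrics import cohen_kappa_score
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_expr, n_aus = 1456, 30

coder_a = rng.integers(0, 2, n_expr)
coder_b = np.where(rng.random(n_expr) < 0.9, coder_a, 1 - coder_a)  # ~90% overlap
print("kappa:", round(cohen_kappa_score(coder_a, coder_b), 2))

au_codes = rng.integers(0, 2, (n_expr, n_aus))                       # binary AU presence
ratings = au_codes @ rng.normal(size=n_aus) + rng.normal(size=n_expr)  # mean rating
r2 = cross_val_score(RidgeCV(), au_codes, ratings, cv=5, scoring="r2").mean()
print("cross-validated R^2:", round(r2, 2))
```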


2018 ◽  
Vol 7 (3.20) ◽  
pp. 284
Author(s):  
Hamimah Ujir ◽  
Irwandi Hipiny ◽  
D N.F. Awang Iskandar

Most works quantifying facial deformation are based on the action units (AUs) provided by the Facial Action Coding System (FACS), which describes facial expressions in terms of forty-six component movements. Each AU corresponds to the movement of individual facial muscles. This paper presents a rule-based approach to classifying AUs that depends on certain facial features. This work covers only the deformation of facial features in posed Happy and Sad expressions obtained from the BU-4DFE database. Different studies refer to different combinations of AUs that form the Happy and Sad expressions. According to the FACS rules outlined in this work, an AU has more than one facial property that needs to be observed. An intensity comparison and analysis of the AUs involved in the Sad and Happy expressions are presented. Additionally, a dynamic analysis of the AUs is conducted to determine the temporal segments of the expressions, i.e. the duration of the onset, apex, and offset phases. Our findings show that AU15 (for the Sad expression) and AU12 (for the Happy expression) exhibit consistent facial-feature deformation across all properties throughout the expression period. However, for AU1 and AU4, the intensity of their properties differs over the expression period.
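As an illustration of the temporal-segment analysis mentioned above, the following sketch splits a single synthetic AU intensity trace into onset, apex, and offset segments. The threshold and the trace are invented; the paper’s own rules are not reproduced.

```python
# Hedged sketch: splitting one AU intensity trace into onset, apex, and offset
# segments. The 95%-of-peak threshold and the synthetic trace are illustrative.
import numpy as np

frames = np.arange(100)
intensity = np.concatenate([np.linspace(0, 1, 30),     # onset: rising
                            np.ones(40),               # apex: sustained peak
                            np.linspace(1, 0, 30)])    # offset: falling

apex_level = 0.95 * intensity.max()
apex_idx = np.where(intensity >= apex_level)[0]
onset = (0, int(apex_idx[0]))                  # rise up to the first apex frame
apex = (int(apex_idx[0]), int(apex_idx[-1]))   # sustained near-maximum activity
offset = (int(apex_idx[-1]), len(frames) - 1)  # decay back to neutral

print(f"onset {onset}, apex {apex}, offset {offset}")
```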


Author(s):  
Michel Valstar ◽  
Stefanos Zafeiriou ◽  
Maja Pantic

Automatic Facial Expression Analysis systems have come a long way since the earliest approaches in the early 1970s. We are now at a point where the first systems are commercially applied, most notably the smile detectors included in digital cameras. As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has received significant and sustained attention within the field. Over the past 30 years, psychologists and neuroscientists have conducted extensive research on various aspects of human behaviour using facial expression analysis coded in terms of FACS. Automating FACS coding would make this research faster and more widely applicable, opening up new avenues to understanding how we communicate through facial expressions. Mainly due to the cost-effectiveness of existing recording equipment, until recently almost all work in this area has involved 2D imagery, despite its inherent problems with pose and illumination variations. In order to deal with these problems, 3D recordings are increasingly used in expression analysis research. In this chapter, the authors give an overview of 2D and 3D FACS recognition and summarise current challenges and opportunities.


1995 ◽  
Vol 7 (4) ◽  
pp. 527-534 ◽  
Author(s):  
Kenneth Asplund ◽  
Lilian Jansson ◽  
Astrid Norberg

Two methods of interpreting the videotaped facial expressions of four patients with severe dementia of the Alzheimer type were compared. Interpretations of facial expressions performed by means of unstructured naturalistic judgements revealed episodes when the four patients exhibited anger, disgust, happiness, sadness, and surprise. When these episodes were assessed by use of a modified version of the Facial Action Coding System, there was, in total, 48% agreement between the two methods. The highest agreement, 98%, occurred for happiness shown by one patient. It was concluded that more emotions could be judged by means of the unstructured naturalistic method, which is based on an awareness of the total situation that facilitates imputing meaning into the patients’ cues. It is a difficult task to find a balance between imputing too much meaning into the severely demented patients’ sparse and unclear cues and ignoring the possibility that there is some meaning to be interpreted.
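For clarity, the overall agreement figure is simply the proportion of episodes on which the two interpretation methods assign the same emotion, as in this toy example with invented labels.

```python
# Hedged sketch: per-episode agreement between naturalistic judgement and
# FACS-based coding, using made-up labels (the study reports 48% overall).
naturalistic = ["anger", "happiness", "sadness", "happiness", "surprise", "disgust"]
facs_based   = ["anger", "happiness", "neutral", "happiness", "neutral",  "disgust"]

agree = sum(a == b for a, b in zip(naturalistic, facs_based))
print(f"agreement: {agree}/{len(naturalistic)} = {agree / len(naturalistic):.0%}")
```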


CNS Spectrums ◽  
2019 ◽  
Vol 24 (1) ◽  
pp. 204-205
Author(s):  
Mina Boazak ◽  
Robert Cotes

Introduction. Facial expressivity in schizophrenia has been a topic of clinical interest for the past century. Besides having difficulty decoding the facial expressions of others, individuals with schizophrenia often have difficulty encoding facial expressions. Traditionally, evaluations of facial expressions have been conducted by trained human observers using the Facial Action Coding System. The process was slow and subject to intra- and inter-observer variability. In the past decade, the traditional Facial Action Coding System developed by Ekman has been adapted for use in affective computing. Here we assess the applications of this adaptation for schizophrenia, the findings of current groups, and the future role of this technology. Materials and Methods. We reviewed the applications of computer vision technology in schizophrenia using PubMed and Google Scholar with the search criteria “computer vision” AND “Schizophrenia”, covering January 2010 to June 2018. Results. Five articles were selected for inclusion, representing 1 case series and 4 case-control analyses. Authors assessed variations in facial action unit presence, intensity, various measures of length of activation, action unit clustering, congruence, and appropriateness. Findings point to variations in each of these areas, except action unit appropriateness, between control and schizophrenia patients. Computer vision techniques were also demonstrated to have high accuracy in classifying schizophrenia from control patients, reaching an AUC just under 0.9 in one study, and to predict psychometric scores, reaching Pearson’s correlation values of under 0.7. Discussion. Our review of the literature demonstrates agreement between the findings of traditional and contemporary assessment techniques of facial expressivity in schizophrenia. Our findings also demonstrate that current computer vision techniques have achieved the capacity to differentiate schizophrenia from control populations and to predict psychometric scores. Nevertheless, the predictive accuracy of these technologies leaves room for growth. On analysis, our group found two modifiable areas that may contribute to improving algorithm accuracy: assessment protocol and feature inclusion. Based on our review, we recommend assessment of facial expressivity during a period of silence in addition to an assessment during a clinically structured interview utilizing emotionally evocative questions. Furthermore, where underfitting is a problem, we recommend progressive inclusion of features including action unit activation, intensity, action unit rate of onset and offset, clustering (including richness, distribution, and typicality), and congruence. Inclusion of each of these features may improve algorithm predictive accuracy. Conclusion. We review current applications of computer vision in the assessment of facial expressions in schizophrenia. We present the results of current innovative works in the field and discuss areas for continued development.
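As a rough illustration of the case-control classification results summarized above, the sketch below builds a hypothetical per-subject feature vector (of the kind listed in the Discussion) and estimates ROC AUC with a simple classifier on synthetic data; it does not reproduce any of the reviewed studies’ pipelines.

```python
# Hedged sketch: AU-derived features per subject, then cross-validated ROC AUC
# for schizophrenia vs. control classification. Features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_subjects = 60
labels = np.array([0] * 30 + [1] * 30)    # 0 = control, 1 = schizophrenia

# Hypothetical per-subject features: mean AU activation, mean intensity,
# mean onset/offset rate, and a clustering "richness" score.
features = rng.normal(size=(n_subjects, 4))
features[labels == 1] -= 0.8               # inject a group difference

auc = cross_val_score(LogisticRegression(), features, labels,
                      cv=5, scoring="roc_auc").mean()
print(f"cross-validated ROC AUC: {auc:.2f}")
```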


2020 ◽  
Author(s):  
Fernando Marmolejo-Ramos ◽  
Aiko Murata ◽  
Kyoshiro Sasaki ◽  
Yuki Yamada ◽  
Ayumi Ikeda ◽  
...  

In this research, we replicated the effect of muscle engagement on perception, whereby the recognition of another’s facial expressions is biased by the observer’s own facial muscular activity (Blaesi & Wilson, 2010). We extended this replication to show that such a modulatory effect is also observed for the recognition of dynamic bodily expressions. Via a multi-lab and within-subjects approach, we investigated the emotion recognition of point-light biological walkers, along with that of morphed face stimuli, while subjects were or were not holding a pen in their teeth. Under the ‘pen-in-the-teeth’ condition, participants tended to lower their threshold of perception of ‘happy’ expressions in facial stimuli compared to the ‘no-pen’ condition, thus replicating the experiment by Blaesi and Wilson (2010). A similar effect was found for the biological motion stimuli, such that participants lowered their threshold to perceive ‘happy’ walkers in the ‘pen-in-the-teeth’ condition compared to the ‘no-pen’ condition. This pattern of results was also found in a second experiment in which the ‘no-pen’ condition was replaced by a situation in which participants held a pen in their lips (‘pen-in-lips’ condition). These results suggest that facial muscular activity alters the recognition not only of facial expressions but also of bodily expressions.
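The perception threshold compared across conditions can be estimated by fitting a psychometric function to responses along a morph continuum. The sketch below uses simulated responses and a logistic fit; the morph levels, trial counts, and fitting choice are illustrative assumptions, not the study’s procedure.

```python
# Hedged sketch: estimating a "happy" perception threshold (the morph level at
# which P("happy") = 0.5) by fitting a logistic psychometric function to
# simulated responses along a neutral-to-happy morph continuum.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

morph_levels = np.linspace(0, 1, 11)        # 0 = neutral, 1 = fully happy
true_threshold = 0.55
rng = np.random.default_rng(3)
p_happy = psychometric(morph_levels, true_threshold, 12)
responses = rng.binomial(20, p_happy) / 20   # proportion "happy" over 20 trials

(threshold_hat, slope_hat), _ = curve_fit(psychometric, morph_levels, responses,
                                          p0=[0.5, 10.0])
print(f"estimated threshold: {threshold_hat:.2f} (true {true_threshold})")
```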


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4847
Author(s):  
Vianney Perez-Gomez ◽  
Homero V. Rios-Figueroa ◽  
Ericka Janet Rechy-Ramirez ◽  
Efrén Mezura-Montes ◽  
Antonio Marin-Hernandez

An essential aspect of the interaction between people and computers is the recognition of facial expressions. A key issue in this process is selecting relevant features to classify facial expressions accurately. This study examines the selection of optimal geometric features to classify six basic facial expressions: happiness, sadness, surprise, fear, anger, and disgust. Inspired by the Facial Action Coding System (FACS) and the Moving Picture Experts Group 4th standard (MPEG-4), an initial set of 89 features was proposed. These features are normalized distances and angles in 2D and 3D computed from 22 facial landmarks. To select a minimum set of features with the maximum classification accuracy, two selection methods and four classifiers were tested. The first selection method, principal component analysis (PCA), obtained 39 features. The second selection method, a genetic algorithm (GA), obtained 47 features. The experiments ran on the Bosphorus and UIVBFED data sets with 86.62% and 93.92% median accuracy, respectively. Our main finding is that the reduced feature set obtained by the GA is the smallest in comparison with other methods of comparable accuracy. This has implications for reducing recognition time.
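The PCA-based half of the feature-selection comparison can be sketched as follows, using synthetic stand-ins for the 89 geometric features and an SVM as one of several possible classifiers; the genetic-algorithm step and the actual data sets are not reproduced.

```python
# Hedged sketch: reduce 89 synthetic geometric features to 39 PCA components,
# then estimate six-class expression accuracy with cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_samples, n_features, n_classes = 600, 89, 6          # six basic expressions
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, n_samples)
X += np.eye(n_classes)[y] @ rng.normal(size=(n_classes, n_features))  # class signal

pipe = make_pipeline(StandardScaler(), PCA(n_components=39), SVC())
acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy with 39 PCA components: {acc:.2%}")
```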


2022 ◽  
Vol 2022 ◽  
pp. 1-8
Author(s):  
Stefan Lautenbacher ◽  
Teena Hassan ◽  
Dominik Seuss ◽  
Frederik W. Loy ◽  
Jens-Uwe Garbas ◽  
...  

Introduction. The experience of pain is regularly accompanied by facial expressions. The gold standard for analyzing these facial expressions is the Facial Action Coding System (FACS), which provides so-called action units (AUs) as parametric indicators of facial muscular activity. Particular combinations of AUs have appeared to be pain-indicative. The manual coding of AUs is, however, too time- and labor-intensive for clinical practice. New developments in automatic facial expression analysis have promised to enable automatic detection of AUs, which might be used for pain detection. Objective. Our aim is to compare manual with automatic AU coding of facial expressions of pain. Methods. FaceReader7 was used for automatic AU detection. Using videos of 40 participants (20 younger, with a mean age of 25.7 years, and 20 older, with a mean age of 52.1 years) undergoing experimentally induced heat pain, we compared the performance of FaceReader7 against manually coded AUs as the gold-standard labels. Percentages of correctly and falsely classified AUs were calculated, and, as indicators of congruency, we computed sensitivity/recall, precision, and overall agreement (F1). Results. The automatic coding of AUs showed only poor to moderate outcomes regarding sensitivity/recall, precision, and F1. Congruency was better for younger than for older faces and better for pain-indicative AUs than for other AUs. Conclusion. At the moment, automatic analyses of genuine facial expressions of pain may qualify at best as semiautomatic systems, which require further validation by human observers before they can be used to validly assess facial expressions of pain.
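The congruency indicators named above can be computed directly from per-frame AU labels, treating the manual FACS coding as ground truth. The sketch below uses invented labels for a single AU; it is not the study’s evaluation code.

```python
# Hedged sketch: scoring automatic AU detections against manual FACS coding
# (ground truth) with sensitivity/recall, precision, and F1.
from sklearn.metrics import precision_score, recall_score, f1_score

# 1 = AU present, 0 = AU absent, one entry per coded frame (hypothetical).
manual    = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
automatic = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]

print("recall   :", recall_score(manual, automatic))      # sensitivity
print("precision:", precision_score(manual, automatic))
print("F1       :", f1_score(manual, automatic))
```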

