Facial movements have over twenty dimensions of perceived meaning that are only partially captured with traditional methods

2021 ◽  
Author(s):  
Alan S. Cowen ◽  
Kunalan Manokara ◽  
Xia Fang ◽  
Disa Sauter ◽  
Jeffrey A Brooks ◽  
...  

Central to science and technology are questions about how to measure facial expression. The current gold standard is the Facial Action Coding System (FACS), which is often assumed to account for all facial muscle movements relevant to perceived emotion. However, the mapping from FACS codes to perceived emotion is not well understood. Six prototypical configurations of facial action units (AUs) are sometimes assumed to account for perceived emotion, but this hypothesis remains largely untested. Here, using statistical modeling, we examine how FACS codes actually correspond to perceived emotions in a wide range of naturalistic expressions. Each of 1456 facial expressions was independently FACS coded by two experts (r = .84, κ = .84). Naive observers reported the emotions they perceived in each expression in many different ways, including emotion categories (N = 666); valence, arousal, and appraisal dimensions (N = 1116); authenticity (N = 121); and free response (N = 193). We find that facial expressions are much richer in meaning than typically assumed: at least 20 patterns of facial muscle movements captured by FACS have distinct perceived emotional meanings. Surprisingly, however, FACS codes do not offer a complete description of real-world facial expressions, capturing no more than half of the reliable variance in perceived emotion. Our findings suggest that the perceived emotional meanings of facial expressions are most accurately and efficiently represented using a wide range of carefully selected emotion concepts, such as the Cowen & Keltner (2019) taxonomy of 28 emotions. Further work is needed to characterize the anatomical bases of these facial expressions.
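The κ = .84 reported above is Cohen's kappa, the standard chance-corrected agreement statistic for two FACS coders. As a quick illustration of how such agreement is computed, here is a minimal sketch; the presence/absence codes are made up for the example, not taken from the study.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders over the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items where the coders match.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independence, from each coder's label frequencies.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Toy example: presence/absence codes for one AU across ten expressions.
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohens_kappa(a, b), 3))  # → 0.783
```

Note that the nine-out-of-ten raw agreement shrinks once chance agreement (p_e = 0.54 here) is subtracted out, which is why kappa is preferred over raw percent agreement for FACS coding.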

2021 ◽  
Vol 39 (2A) ◽  
pp. 316-325
Author(s):  
Fatima I. Yasser ◽  
Bassam H. Abd ◽  
Saad M. Abbas

Confusion detection systems (CDSs) that require noninvasive, mobile, and cost-effective methods can use facial expressions to detect confusion. The technology underlying the proposed CDS represents a major gap between it and the systems described in previous work. The proposed CDS depends on the Facial Action Coding System (FACS) to extract facial features. FACS describes the motion of the facial muscles in terms of Action Units (AUs); each movement involves one or more facial muscles. Seven AUs are used as possible markers for detecting confusion, implemented in the form of a single facial-action vector: AUs 4, 5, 6, 7, 10, 12, and 23. The database used to evaluate the performance of the proposed CDS was gathered from 120 participants (91 males, 29 females) between the ages of 18 and 45. Four classification algorithms were applied individually: Virtual Generalizing Random Access Memory (VG-RAM), support vector machine (SVM), logistic regression, and quadratic discriminant classifiers. The best success rates were obtained with the logistic regression and quadratic discriminant classifiers. This work introduces different classification techniques for detecting confusion, collects an actual database that can be used to evaluate the performance of any CDS employing facial expressions, and selects appropriate facial features.
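The setup described above, a seven-dimensional AU vector fed to a classifier such as logistic regression, can be sketched as follows. The AU intensity ranges, loadings, and probe sample below are invented for illustration and do not reflect the paper's data or which AUs actually signal confusion.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Feature order assumed here: AU4, AU5, AU6, AU7, AU10, AU12, AU23.
rng = np.random.default_rng(0)

# Synthetic intensity vectors: "confused" samples load on AU4/AU7/AU23,
# "not confused" samples load on AU6/AU12 (assumptions for the sketch).
confused = rng.uniform(0, 1, (50, 7)) + np.array([3, 0, 0, 3, 0, 0, 3])
neutral = rng.uniform(0, 1, (50, 7)) + np.array([0, 0, 3, 0, 0, 3, 0])
X = np.vstack([confused, neutral])
y = np.array([1] * 50 + [0] * 50)  # 1 = confused, 0 = not confused

clf = LogisticRegression(max_iter=1000).fit(X, y)

# A new sample with raised AU4/AU7/AU23 should be classified as confused.
probe = np.array([[3.5, 0.2, 0.1, 3.2, 0.3, 0.2, 3.1]])
print(int(clf.predict(probe)[0]))  # → 1
```

In practice the AU intensities would come from a FACS-based feature extractor rather than a random generator, and the quadratic discriminant variant would simply swap in `sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis`.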


2018 ◽  
Vol 7 (3.20) ◽  
pp. 284
Author(s):  
Hamimah Ujir ◽  
Irwandi Hipiny ◽  
D N.F. Awang Iskandar

Most works quantifying facial deformation are based on the action units (AUs) provided by the Facial Action Coding System (FACS), which describes facial expressions in terms of forty-six component movements. Each AU corresponds to the movement of individual facial muscles. This paper presents a rule-based approach to classifying AUs that depends on certain facial features. This work covers only the deformation of facial features for posed Happy and Sad expressions obtained from the BU-4DFE database. Different studies refer to different combinations of AUs that form the Happy and Sad expressions. According to the FACS rules outlined in this work, an AU has more than one facial property that needs to be observed. An intensity comparison and analysis of the AUs involved in the Sad and Happy expressions are presented. Additionally, a dynamic analysis of the AUs is conducted to determine the temporal segments of expressions, i.e., the duration of the onset, apex, and offset phases. Our findings show that AU15 (for the sad expression) and AU12 (for the happy expression) exhibit consistent facial-feature deformation across all properties during the expression period. For AU1 and AU4, however, the intensity of their properties differs during the expression period.
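The temporal segmentation into onset, apex, and offset phases described above can be sketched as a simple thresholding rule over an AU intensity trace. The activity threshold, the 90%-of-peak apex criterion, and the toy trace are assumptions made for illustration, not the paper's actual method.

```python
def temporal_segments(intensity, threshold=0.1):
    """Split an AU intensity trace into onset, apex, and offset frames.

    Onset: active frames before the peak plateau; apex: frames at or
    near peak intensity; offset: active frames after the plateau until
    the trace falls back below threshold.
    """
    peak = max(intensity)
    active = [i for i, v in enumerate(intensity) if v > threshold]
    apex = [i for i in active if intensity[i] >= 0.9 * peak]
    onset = [i for i in active if i < apex[0]]
    offset = [i for i in active if i > apex[-1]]
    return onset, apex, offset

# Toy trace: neutral, ramp up, hold the apex, ramp down.
trace = [0.0, 0.3, 0.8, 1.0, 1.0, 0.9, 0.5, 0.2, 0.0]
on, ap, off = temporal_segments(trace)
print(len(on), len(ap), len(off))  # → 2 3 2
```

The durations of each phase then follow directly from the frame counts divided by the video frame rate.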


Author(s):  
Michel Valstar ◽  
Stefanos Zafeiriou ◽  
Maja Pantic

Automatic Facial Expression Analysis systems have come a long way since the earliest approaches in the early 1970s. We are now at a point where the first systems are commercially applied, most notably the smile detectors included in digital cameras. As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has received significant and sustained attention within the field. Over the past 30 years, psychologists and neuroscientists have conducted extensive research on various aspects of human behaviour using facial expression analysis coded in terms of FACS. Automating FACS coding would make this research faster and more widely applicable, opening up new avenues to understanding how we communicate through facial expressions. Mainly due to the cost-effectiveness of existing recording equipment, until recently almost all work in this area involved 2D imagery, despite its inherent problems relating to pose and illumination variations. To deal with these problems, 3D recordings are increasingly used in expression analysis research. In this chapter, the authors give an overview of 2D and 3D FACS recognition, and summarise current challenges and opportunities.


1995 ◽  
Vol 7 (4) ◽  
pp. 527-534 ◽  
Author(s):  
Kenneth Asplund ◽  
Lilian Jansson ◽  
Astrid Norberg

Two methods of interpreting the videotaped facial expressions of four patients with severe dementia of the Alzheimer type were compared. Interpretations of facial expressions performed by means of unstructured naturalistic judgements revealed episodes in which the four patients exhibited anger, disgust, happiness, sadness, and surprise. When these episodes were assessed using a modified version of the Facial Action Coding System, there was, in total, 48% agreement between the two methods. The highest agreement, 98%, occurred for happiness shown by one patient. It was concluded that more emotions could be judged by means of the unstructured naturalistic method, which is based on an awareness of the total situation that facilitates imputing meaning into the patients' cues. It is a difficult task to find a balance between imputing too much meaning into the severely demented patients' sparse and unclear cues and ignoring the possibility that there is some meaning to be interpreted.


CNS Spectrums ◽  
2019 ◽  
Vol 24 (1) ◽  
pp. 204-205
Author(s):  
Mina Boazak ◽  
Robert Cotes

Abstract

Introduction. Facial expressivity in schizophrenia has been a topic of clinical interest for the past century. Besides difficulty decoding the facial expressions of others, schizophrenia sufferers often have difficulty encoding facial expressions. Traditionally, evaluations of facial expressions have been conducted by trained human observers using the Facial Action Coding System (FACS). The process was slow and subject to intra- and inter-observer variability. In the past decade, the traditional FACS developed by Ekman has been adapted for use in affective computing. Here we assess the applications of this adaptation for schizophrenia, the findings of current groups, and the future role of this technology.

Materials and Methods. We review the applications of computer vision technology in schizophrenia using PubMed and Google Scholar with the search criteria "computer vision" AND "schizophrenia" from January 2010 to June 2018.

Results. Five articles were selected for inclusion, representing one case series and four case-control analyses. Authors assessed variations in facial action unit (AU) presence, intensity, various measures of length of activation, AU clustering, congruence, and appropriateness. Findings point to variations between control and schizophrenia patients in each of these areas except AU appropriateness. Computer vision techniques were also demonstrated to have high accuracy in classifying schizophrenia patients versus controls, reaching an AUC just under 0.9 in one study, and to predict psychometric scores, reaching Pearson correlation values of under 0.7.

Discussion. Our review of the literature demonstrates agreement between the findings of traditional and contemporary assessment techniques of facial expressivity in schizophrenia. It also demonstrates that current computer vision techniques can differentiate schizophrenia patients from control populations and predict psychometric scores. Nevertheless, the predictive accuracy of these technologies leaves room for growth. On analysis, our group found two modifiable areas that may contribute to improving algorithm accuracy: assessment protocol and feature inclusion. Based on our review, we recommend assessing facial expressivity during a period of silence in addition to an assessment during a clinically structured interview utilizing emotionally evocative questions. Furthermore, where underfitting is a problem, we recommend progressive inclusion of features including AU activation, intensity, rate of onset and offset, clustering (including richness, distribution, and typicality), and congruence. Inclusion of each of these features may improve algorithm predictive accuracy.

Conclusion. We review current applications of computer vision in the assessment of facial expressions in schizophrenia. We present the results of current innovative works in the field and discuss areas for continued development.
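The AUC figure cited in the Results can be made concrete via the rank (Mann-Whitney) formulation: AUC is the probability that a randomly chosen patient's classifier score outranks a randomly chosen control's. The scores below are hypothetical, not drawn from the reviewed studies.

```python
def roc_auc(labels, scores):
    """AUC as the fraction of (positive, negative) pairs where the
    positive's score outranks the negative's (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores: 1 = schizophrenia, 0 = control.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
print(roc_auc(labels, scores))  # → 0.9375
```

A single misranked pair (the 0.4 patient below the 0.6 control) is what pulls the value below 1.0, which is why AUC is insensitive to the choice of decision threshold.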


2021 ◽  
Author(s):  
Katlyn Peck

When individuals are presented with emotional facial expressions, they spontaneously react with brief, distinct facial movements that 'mimic' the presented faces. While the effects of facial mimicry on emotional perception and social bonding have been well documented, the role of facial attractiveness in the elicitation of facial mimicry is unknown. We hypothesized that facial mimicry would increase with more attractive faces. Facial movements were recorded with electromyography upon presentation of averaged and original stimuli, while ratings of attractiveness and intensity were obtained. In line with existing findings, emotionally congruent responses were observed in the relevant facial muscle regions. Unexpectedly, the strength of observers' facial mimicry responses decreased with more averaged faces, despite those faces being rated as more attractive. These findings suggest that facial attractiveness moderates the degree of facial mimicry elicited in observers. The relationship between averageness, attractiveness, and mimicry is discussed in light of this counterintuitive finding.


Author(s):  
Dhruv Verma ◽  
Sejal Bhalla ◽  
Dhruv Sahnan ◽  
Jainendra Shukla ◽  
Aman Parnami

Continuous and unobtrusive monitoring of facial expressions holds tremendous potential to enable compelling applications in a multitude of domains, ranging from healthcare and education to interactive systems. Traditional vision-based facial expression recognition (FER) methods, however, are vulnerable to external factors like occlusion and lighting, while also raising privacy concerns coupled with the impractical requirement of positioning the camera in front of the user at all times. To bridge this gap, we propose ExpressEar, a novel FER system that repurposes commercial earables augmented with inertial sensors to capture fine-grained facial muscle movements. Following the Facial Action Coding System (FACS), which encodes every possible expression in terms of constituent facial movements called Action Units (AUs), ExpressEar identifies facial expressions at the atomic level. We conducted a user study (N=12) to evaluate the performance of our approach and found that ExpressEar can detect and distinguish between 32 facial AUs (including 2 variants of asymmetric AUs), with an average accuracy of 89.9% for any given user. We further quantify the performance across different mobile scenarios in the presence of additional face-related activities. Our results demonstrate ExpressEar's applicability in the real world and open up research opportunities to advance its practical adoption.


2022 ◽  
Vol 2022 ◽  
pp. 1-8
Author(s):  
Stefan Lautenbacher ◽  
Teena Hassan ◽  
Dominik Seuss ◽  
Frederik W. Loy ◽  
Jens-Uwe Garbas ◽  
...  

Introduction. The experience of pain is regularly accompanied by facial expressions. The gold standard for analyzing these facial expressions is the Facial Action Coding System (FACS), which provides so-called action units (AUs) as parametric indicators of facial muscular activity. Particular combinations of AUs have appeared to be pain-indicative. The manual coding of AUs is, however, too time- and labor-intensive for clinical practice. New developments in automatic facial expression analysis have promised to enable automatic detection of AUs, which might be used for pain detection.

Objective. Our aim is to compare manual with automatic AU coding of facial expressions of pain.

Methods. FaceReader7 was used for automatic AU detection. We compared the performance of FaceReader7, using videos of 40 participants (20 younger, with a mean age of 25.7 years, and 20 older, with a mean age of 52.1 years) undergoing experimentally induced heat pain, against manually coded AUs as the gold-standard labeling. Percentages of correctly and falsely classified AUs were calculated, and we computed sensitivity/recall, precision, and overall agreement (F1) as indicators of congruency.

Results. The automatic coding of AUs showed only poor to moderate outcomes regarding sensitivity/recall, precision, and F1. The congruency was better for younger than for older faces, and better for pain-indicative AUs than for other AUs.

Conclusion. At the moment, automatic analyses of genuine facial expressions of pain may qualify at best as semiautomatic systems, which require further validation by human observers before they can be used to validly assess facial expressions of pain.
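The congruency indicators named above (sensitivity/recall, precision, and F1) can be computed directly from frame-level AU presence codes. This minimal sketch uses hypothetical manual and automatic codes for a single AU; the data are invented for illustration.

```python
def prf1(gold, pred):
    """Precision, recall, and F1 for one AU's binary presence codes,
    treating the manual (gold) coding as ground truth."""
    tp = sum(g and p for g, p in zip(gold, pred))          # both coded present
    fp = sum((not g) and p for g, p in zip(gold, pred))    # automatic false alarm
    fn = sum(g and (not p) for g, p in zip(gold, pred))    # automatic miss
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy comparison: manual FACS codes vs. automatic detections for one AU.
gold = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
pred = [1, 0, 1, 0, 1, 1, 0, 0, 1, 0]
p, r, f = prf1(gold, pred)
print(round(p, 2), round(r, 2), round(f, 2))  # → 0.8 0.8 0.8
```

Computing these per AU, rather than pooled over all AUs, is what allows the comparison above between pain-indicative AUs and other AUs.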

