Automatic Coding of Facial Expressions of Pain: Are We There Yet?

2022, Vol 2022, pp. 1-8
Author(s): Stefan Lautenbacher, Teena Hassan, Dominik Seuss, Frederik W. Loy, Jens-Uwe Garbas, ...

Introduction. The experience of pain is regularly accompanied by facial expressions. The gold standard for analyzing these facial expressions is the Facial Action Coding System (FACS), which provides so-called action units (AUs) as parametrical indicators of facial muscular activity. Particular combinations of AUs have appeared to be pain-indicative. Manual coding of AUs is, however, too time- and labor-intensive for clinical practice. New developments in automatic facial expression analysis have promised to enable automatic detection of AUs, which might be used for pain detection. Objective. Our aim is to compare manual with automatic AU coding of facial expressions of pain. Methods. FaceReader7 was used for automatic AU detection. Using videos of 40 participants (20 younger, mean age 25.7 years; 20 older, mean age 52.1 years) undergoing experimentally induced heat pain, we compared the AUs detected by FaceReader7 against manually coded AUs as the gold-standard labels. Percentages of correctly and falsely classified AUs were calculated, and, as indicators of congruency, we computed sensitivity/recall, precision, and overall agreement (F1). Results. The automatic coding of AUs showed only poor to moderate outcomes regarding sensitivity/recall, precision, and F1. Congruency was better for younger than for older faces and better for pain-indicative AUs than for other AUs. Conclusion. At the moment, automatic analyses of genuine facial expressions of pain may qualify at best as semiautomatic systems, which require further validation by human observers before they can be used to validly assess facial expressions of pain.
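
As an aside on the metrics: the congruency indicators named above are standard per-AU classification scores. The following is a minimal sketch of how they can be computed, assuming binary frame-level AU labels; the placeholder data and the use of scikit-learn are illustrative assumptions, not the study's pipeline.

```python
# Minimal sketch: per-AU congruency between manual (gold-standard) and
# automatic coding, assuming binary frame-level labels. The label vectors
# below are placeholders, not the study's data.
from sklearn.metrics import precision_score, recall_score, f1_score

manual_au4    = [1, 1, 0, 0, 1, 0, 1, 0]  # gold standard: human FACS coder
automatic_au4 = [1, 0, 0, 1, 1, 0, 1, 0]  # e.g., automatic detector output

precision = precision_score(manual_au4, automatic_au4)  # TP / (TP + FP)
recall    = recall_score(manual_au4, automatic_au4)     # TP / (TP + FN), i.e., sensitivity
f1        = f1_score(manual_au4, automatic_au4)         # harmonic mean of precision and recall

print(f"precision={precision:.2f}  recall={recall:.2f}  F1={f1:.2f}")
```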

2011, pp. 255-317
Author(s): Daijin Kim, Jaewon Sung

Facial expression has long been of interest to psychology, since Darwin published The Expression of the Emotions in Man and Animals (Darwin, 1899). Psychologists have studied facial expressions to reveal their role and mechanism. One of Darwin's great discoveries is that prototypical facial expressions exist across cultures, which provided the theoretical background for vision researchers who tried to classify images into categories of prototypical facial expressions. The six representative facial expressions are fear, happiness, sadness, surprise, anger, and disgust (Mase, 1991; Yacoob and Davis, 1994). Real facial expressions that we encounter in daily life, on the other hand, consist of many distinct, subtly different signals. Further research on facial expressions therefore required an objective method to describe and measure the distinct activity of facial muscles. The Facial Action Coding System (FACS), proposed by Ekman and Friesen (1978), defines 46 distinct action units (AUs), each of which describes the activity of a distinct muscle or muscle group. The development of this objective description method also influenced vision researchers, who tried to detect the emergence of each AU (Tian et al., 2001).
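
For orientation, the commonly cited prototypical AU combinations for these six emotions can be written down as a simple lookup table. Exact AU sets vary between sources, so the following Python sketch is indicative rather than definitive.

```python
# Commonly cited prototypical AU combinations for the six basic emotions
# (after Ekman and Friesen); exact AU sets vary between sources, so treat
# these as indicative rather than definitive.
PROTOTYPICAL_AUS = {
    "happiness": {6, 12},                  # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},               # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1, 2, 5, 26},            # brow raisers, upper lid raiser, jaw drop
    "fear":      {1, 2, 4, 5, 7, 20, 26},  # brow raisers/lowerer, lid movements, lip stretcher
    "anger":     {4, 5, 7, 23},            # brow lowerer, lid raiser/tightener, lip tightener
    "disgust":   {9, 15, 16},              # nose wrinkler, lip corner depressor, lower lip depressor
}
```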


2020, pp. 59-69
Author(s): Walid Mahmod, Jane Stephan, Anmar Razzak

Automatic analysis of facial expressions is rapidly becoming an area of intense interest in the computer vision and artificial intelligence research communities. In this paper, an approach is presented for recognition of the six basic prototype expressions (i.e., joy, surprise, anger, sadness, fear, and disgust) based on the Facial Action Coding System (FACS). The approach utilizes a combination of different transforms (the "Walidlet" hybrid transform), consisting of the Fast Fourier Transform, the Radon transform, and the multiwavelet transform, for feature extraction. A Kohonen Self-Organizing Feature Map (SOFM) is then used to cluster patterns based on the features obtained from this hybrid transform. The results show that the method achieves very good accuracy in facial expression recognition, and the proposed method has several promising features that make it interesting. The approach provides a new feature-extraction method that overcomes problems of illumination and of faces that vary considerably between individuals due to age, ethnicity, gender, and cosmetics; it also does not require precise normalization or lighting equalization. An average clustering accuracy of 94.8% is achieved for the six basic expressions, with several different databases used to test the method.
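
The abstract does not specify the hybrid ("Walidlet") transform, so the following sketch only illustrates the general recipe under stated substitutions: standard FFT, Radon, and 2-D wavelet features (the latter standing in for the multiwavelet step) are concatenated and clustered with a Kohonen SOM via the minisom package. It is an approximation of the idea, not the paper's implementation.

```python
# Illustrative sketch of the hybrid-transform idea: concatenate FFT, Radon,
# and (as a stand-in for the multiwavelet step) standard 2-D wavelet
# coefficients, then cluster with a Kohonen SOM.
import numpy as np
import pywt                              # pip install PyWavelets
from skimage.transform import radon      # pip install scikit-image
from minisom import MiniSom              # pip install minisom

def hybrid_features(face: np.ndarray) -> np.ndarray:
    """face: 2-D grayscale image, e.g., shape (64, 64)."""
    fft_mag = np.abs(np.fft.fft2(face))                            # Fourier magnitudes
    sino = radon(face, theta=np.linspace(0.0, 180.0, 16),
                 circle=False)                                     # Radon projections
    approx, _ = pywt.dwt2(face, "db2")                             # wavelet approximation band
    return np.concatenate([fft_mag.ravel(), sino.ravel(), approx.ravel()])

faces = [np.random.rand(64, 64) for _ in range(60)]                # placeholder face crops
X = np.array([hybrid_features(f) for f in faces])
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)                  # normalize features

som = MiniSom(3, 2, input_len=X.shape[1], sigma=0.8, learning_rate=0.5)
som.train_random(X, num_iteration=500)
clusters = [som.winner(x) for x in X]                              # 3x2 grid -> 6 clusters
```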


2021
Author(s): Alan S. Cowen, Kunalan Manokara, Xia Fang, Disa Sauter, Jeffrey A. Brooks, ...

Central to science and technology are questions about how to measure facial expression. The current gold standard is the Facial Action Coding System (FACS), which is often assumed to account for all facial muscle movements relevant to perceived emotion. However, the mapping from FACS codes to perceived emotion is not well understood. Six prototypical configurations of facial action units (AUs) are sometimes assumed to account for perceived emotion, but this hypothesis remains largely untested. Here, using statistical modeling, we examine how FACS codes actually correspond to perceived emotions in a wide range of naturalistic expressions. Each of 1456 facial expressions was independently FACS coded by two experts (r = .84, κ = .84). Naive observers reported the emotions they perceived in each expression in several different ways: emotion categories (N = 666); valence, arousal, and appraisal dimensions (N = 1116); authenticity (N = 121); and free response (N = 193). We find that facial expressions are much richer in meaning than typically assumed: at least 20 patterns of facial muscle movements captured by FACS have distinct perceived emotional meanings. Surprisingly, however, FACS codes do not offer a complete description of real-world facial expressions, capturing no more than half of the reliable variance in perceived emotion. Our findings suggest that the perceived emotional meanings of facial expressions are most accurately and efficiently represented using a wide range of carefully selected emotion concepts, such as the Cowen and Keltner (2019) taxonomy of 28 emotions. Further work is needed to characterize the anatomical bases of these facial expressions.
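
To make the modeling step concrete: one simple way to ask how much of the perceived-emotion signal FACS codes capture is to predict mean ratings from binary AU vectors and examine cross-validated variance explained. The sketch below uses ridge regression on placeholder data; it is an illustrative stand-in, not the authors' actual statistical model.

```python
# Illustrative sketch: how much variance in mean perceived-emotion ratings
# can binary FACS codes explain? Placeholder data, not the study's dataset.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_expressions, n_aus = 1456, 30
au_codes = rng.integers(0, 2, size=(n_expressions, n_aus))   # hypothetical FACS codes
ratings = au_codes @ rng.normal(size=n_aus) + rng.normal(size=n_expressions)

model = RidgeCV(alphas=np.logspace(-2, 3, 20))
r2 = cross_val_score(model, au_codes, ratings, cv=5, scoring="r2")
print(f"cross-validated R^2 = {r2.mean():.2f}")  # compare against rating reliability
```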


Author(s): Michel Valstar, Stefanos Zafeiriou, Maja Pantic

Automatic Facial Expression Analysis systems have come a long way since the earliest approaches in the early 1970s. We are now at a point where the first systems are commercially applied, most notably the smile detectors included in digital cameras. As one of the most comprehensive and objective ways to describe facial expressions, the Facial Action Coding System (FACS) has received significant and sustained attention within the field. Over the past 30 years, psychologists and neuroscientists have conducted extensive research on various aspects of human behaviour using facial expression analysis coded in terms of FACS. Automating FACS coding would make this research faster and more widely applicable, opening up new avenues to understanding how we communicate through facial expressions. Mainly due to the cost effectiveness of existing recording equipment, until recently almost all work conducted in this area involved 2D imagery, despite its inherent problems relating to pose and illumination variations. In order to deal with these problems, 3D recordings are increasingly used in expression analysis research. In this chapter, the authors give an overview of 2D and 3D FACS recognition, and summarise current challenges and opportunities.


1995, Vol 7 (4), pp. 527-534
Author(s): Kenneth Asplund, Lilian Jansson, Astrid Norberg

Two methods of interpreting the videotaped facial expressions of four patients with severe dementia of the Alzheimer type were compared. Interpretations of facial expressions by means of unstructured naturalistic judgements revealed episodes in which the four patients exhibited anger, disgust, happiness, sadness, and surprise. When these episodes were assessed using a modified version of the Facial Action Coding System, there was, in total, 48% agreement between the two methods. The highest agreement, 98%, occurred for happiness shown by one patient. It was concluded that more emotions could be judged by means of the unstructured naturalistic method, which is based on an awareness of the total situation that facilitates imputing meaning into the patients' cues. It is a difficult task to find a balance between imputing too much meaning into the severely demented patients' sparse and unclear cues and ignoring the possibility that there is some meaning to be interpreted.


CNS Spectrums, 2019, Vol 24 (1), pp. 204-205
Author(s): Mina Boazak, Robert Cotes

Introduction. Facial expressivity in schizophrenia has been a topic of clinical interest for the past century. Besides difficulty decoding the facial expressions of others, individuals with schizophrenia often have difficulty encoding facial expressions. Traditionally, evaluations of facial expressions have been conducted by trained human observers using the Facial Action Coding System (FACS). The process was slow and subject to intra- and inter-observer variability. In the past decade, the traditional FACS developed by Ekman has been adapted for use in affective computing. Here we assess the applications of this adaptation for schizophrenia, the findings of current groups, and the future role of this technology. Materials and Methods. We reviewed the applications of computer vision technology in schizophrenia using the PubMed and Google Scholar search criteria "computer vision" AND "Schizophrenia" from January 2010 to June 2018. Results. Five articles were selected for inclusion, representing 1 case series and 4 case-control analyses. Authors assessed variations in facial action unit presence, intensity, various measures of length of activation, action unit clustering, congruence, and appropriateness. Findings point to variations in each of these areas, except action unit appropriateness, between patients with schizophrenia and controls. Computer vision techniques were also demonstrated to have high accuracy in distinguishing patients with schizophrenia from controls, reaching an AUC just under 0.9 in one study, and to predict psychometric scores, reaching Pearson correlation values of under 0.7. Discussion. Our review of the literature demonstrates agreement between the findings of traditional and contemporary assessment techniques of facial expressivity in schizophrenia. Our findings also demonstrate that current computer vision techniques can differentiate patients with schizophrenia from control populations and predict psychometric scores. Nevertheless, the predictive accuracy of these technologies leaves room for growth. On analysis, our group found two modifiable areas that may contribute to improving algorithm accuracy: assessment protocol and feature inclusion. Based on our review, we recommend assessing facial expressivity during a period of silence in addition to an assessment during a clinically structured interview utilizing emotionally evocative questions. Furthermore, where underfitting is a problem, we recommend progressive inclusion of features, including action unit activation, intensity, rate of onset and offset, clustering (including richness, distribution, and typicality), and congruence. Inclusion of each of these features may improve algorithm predictive accuracy. Conclusion. We review current applications of computer vision in the assessment of facial expressions in schizophrenia, present the results of current innovative works in the field, and discuss areas for continued development.
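
To illustrate the recommended feature-inclusion strategy, a per-subject feature vector of AU statistics (activation rate, mean intensity, onset and offset rates) can be fed to a simple classifier and scored by AUC. Shapes, values, and the choice of logistic regression in this sketch are hypothetical, not drawn from the reviewed studies.

```python
# Illustrative sketch: per-subject AU statistics (activation rate, mean
# intensity, onset/offset rates) fed to a simple classifier, scored by AUC.
# All data shapes and values are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_aus = 80, 17
# Four columns per AU: activation rate, mean intensity, onset rate, offset rate.
features = rng.random((n_subjects, n_aus * 4))
labels = rng.integers(0, 2, n_subjects)          # 1 = schizophrenia, 0 = control

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, features, labels, cv=5, scoring="roc_auc")
print(f"cross-validated AUC = {auc.mean():.2f}")
```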


2021, Vol 12
Author(s): Jennifer M. B. Fugate, Courtny L. Franco

Emoji faces, which are ubiquitous in our everyday communication, are thought to resemble human faces and aid emotional communication. Yet few studies examine whether emojis are perceived as a particular emotion and whether that perception changes based on rendering differences across electronic platforms. The current paper draws upon emotion theory to evaluate whether emoji faces depict the anatomical differences that are proposed to differentiate human depictions of emotion (hereafter, "facial expressions"). We modified the existing Facial Action Coding System (FACS) (Ekman and Rosenberg, 1997) to apply to emoji faces. An equivalent "emoji FACS" rubric allowed us to evaluate two important questions: First, anatomically, does the same emoji face "look" the same across platforms and versions? Second, do emoji faces perceived as a particular emotion category resemble the proposed human facial expression for that emotion? To answer these questions, we compared the anatomically based codes for 31 emoji faces across three platforms and two version updates. We then compared those codes to the proposed human facial expression prototype for the emotion perceived within the emoji face. Overall, emoji faces across platforms and versions were not anatomically equivalent. Moreover, the majority of emoji faces did not conform to human facial expressions for an emotion, although the basic anatomical codes were shared among human and emoji faces. Some emotion categories were better predicted by the assortment of anatomical codes than others, with some individual differences among platforms. We discuss theories of emotion that help explain how emoji faces are perceived as an emotion, even when anatomical differences are not always consistent or specific to an emotion.
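
One simple way to quantify the cross-platform (in)equivalence described above is to treat each rendering's AU codes as a set and compute pairwise overlap. The AU sets in this sketch are hypothetical examples, not the paper's actual codings.

```python
# Minimal sketch: pairwise Jaccard overlap between the FACS-style AU code
# sets of the same emoji rendered on different platforms. The AU sets and
# platform names below are hypothetical.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 1.0

grinning_face = {                      # same emoji, three hypothetical platforms
    "platform_A": {6, 12, 25},
    "platform_B": {6, 12, 25, 26},
    "platform_C": {12, 25},
}
pairs = [("platform_A", "platform_B"),
         ("platform_A", "platform_C"),
         ("platform_B", "platform_C")]
for p, q in pairs:
    print(p, q, f"{jaccard(grinning_face[p], grinning_face[q]):.2f}")
```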


Author(s): Hyunwoong Ko, Kisun Kim, Minju Bae, Myo-Geong Seo, Gieun Nam, ...

The ability to express and recognize emotion via facial expressions is well known to change with age. The present study investigated differences in facial recognition and facial expression between the elderly (n = 57) and the young (n = 115) and measured how each group uses different facial muscles for each emotion with the Facial Action Coding System (FACS). In the facial recognition task, the elderly did not recognize facial expressions better than the young and reported stronger feelings of fear and sadness from photographs. In the facial expression task, the elderly rated all of their facial expressions as stronger than the young did, but in fact expressed strong expressions only for fear and anger. Furthermore, the elderly used more muscles in the lower face when making facial expressions than the young. These results help us better understand how the facial recognition and expression of the elderly change, and show that the elderly do not effectively execute top-down processing concerning facial expression.


2021
Author(s): Wee Kiat Tay

Emotion analytics is the study of human behavior by analyzing the responses when humans experience different emotions. In this thesis, we research emotion analytics solutions that use computer vision to automatically detect emotions from facial expressions in live video. Considering that anxiety is an emotion that can lead to more serious conditions such as anxiety disorders and depression, we propose two hypotheses for detecting anxiety from facial expressions. One hypothesis is that the complex emotion "anxiety" is a subset of the basic emotion "fear". The other is that anxiety can be distinguished from fear by differences in head and eye motion. We test the first hypothesis by implementing a basic-emotions detector based on the Facial Action Coding System (FACS) to detect fear in videos of anxious faces. When this proves less accurate than we would like, an alternative solution based on Gabor filters is implemented; a comparison between the two finds the Gabor-based solution inferior. The second hypothesis is tested using scatter graphs and statistical analysis of the head and eye motions in videos of fear and anxiety expressions; head pitch is found to differ significantly between fear and anxiety. To conclude the thesis, we implement a software system using the FACS-based basic-emotions detector and evaluate it by comparing commercials using the emotions detected from viewers' facial expressions.
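
As a rough illustration of the Gabor-based alternative: a bank of Gabor kernels at several orientations can be convolved with a face crop and the responses pooled into a feature vector, as in the OpenCV sketch below. All parameter values are illustrative assumptions rather than the thesis's settings.

```python
# Illustrative sketch: filter a face crop with a bank of Gabor kernels at
# several orientations and pool the responses into a feature vector.
# Kernel parameters are illustrative assumptions.
import cv2
import numpy as np

def gabor_features(face_gray: np.ndarray, n_orientations: int = 8) -> np.ndarray:
    feats = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5)
        response = cv2.filter2D(face_gray, cv2.CV_32F, kernel)
        feats += [response.mean(), response.std()]   # simple pooling per orientation
    return np.array(feats)

face = np.random.rand(96, 96).astype(np.float32)     # placeholder face crop
print(gabor_features(face).shape)                    # (16,) for 8 orientations
```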

