Computer Vision, Facial Expressivity and Schizophrenia: A Review

CNS Spectrums ◽  
2019 ◽  
Vol 24 (1) ◽  
pp. 204-205
Author(s):  
Mina Boazak ◽  
Robert Cotes

Abstract

Introduction: Facial expressivity in schizophrenia has been a topic of clinical interest for the past century. Besides difficulty decoding the facial expressions of others, individuals with schizophrenia often have difficulty encoding facial expressions. Traditionally, evaluations of facial expressions have been conducted by trained human observers using the Facial Action Coding System. The process was slow and subject to intra- and inter-observer variability. In the past decade the traditional facial action coding system developed by Ekman has been adapted for use in affective computing. Here we assess the applications of this adaptation for schizophrenia, the findings of current groups, and the future role of this technology.

Materials and Methods: We review the applications of computer vision technology in schizophrenia using PubMed and Google Scholar with the search criteria "computer vision" AND "schizophrenia" from January 2010 to June 2018.

Results: Five articles were selected for inclusion, representing one case series and four case-control analyses. Authors assessed variations in facial action unit presence, intensity, various measures of length of activation, action unit clustering, congruence, and appropriateness. Findings point to variations in each of these areas, except action unit appropriateness, between control and schizophrenia patients. Computer vision techniques were also demonstrated to have high accuracy in classifying schizophrenia from control patients, reaching an AUC just under 0.9 in one study, and to predict psychometric scores, reaching Pearson's correlation values of under 0.7.

Discussion: Our review of the literature demonstrates agreement between the findings of traditional and contemporary assessment techniques of facial expressivity in schizophrenia. Our findings also demonstrate that current computer vision techniques can differentiate schizophrenia from control populations and predict psychometric scores. Nevertheless, the predictive accuracy of these technologies leaves room for growth. On analysis our group found two modifiable areas that may contribute to improving algorithm accuracy: assessment protocol and feature inclusion. Based on our review we recommend assessment of facial expressivity during a period of silence in addition to an assessment during a clinically structured interview utilizing emotionally evocative questions. Furthermore, where underfitting is a problem we recommend progressive inclusion of features including action unit activation, intensity, rate of onset and offset, clustering (including richness, distribution, and typicality), and congruence. Inclusion of each of these features may improve algorithm predictive accuracy.

Conclusion: We review current applications of computer vision in the assessment of facial expressions in schizophrenia. We present the results of current innovative works in the field and discuss areas for continued development.
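The progressive feature-inclusion strategy recommended above can be sketched in code. The function below builds a feature vector for one action unit from a per-frame intensity series, adding feature groups (activation, mean intensity, onset rate) incrementally; the AU, the 0-5 intensity scale, and the feature definitions are illustrative assumptions, not taken from any reviewed study.

```python
# Hypothetical sketch of progressive feature inclusion for an AU time series.
from statistics import mean

def au_features(frames, groups):
    """Build a feature vector from one AU's per-frame intensities (0-5 scale).

    frames: list of intensity values over time for a single action unit.
    groups: subset of {"activation", "intensity", "dynamics"} to include.
    """
    feats = []
    active = [f > 0 for f in frames]
    if "activation" in groups:        # was the AU ever active?
        feats.append(1.0 if any(active) else 0.0)
    if "intensity" in groups:         # mean intensity while active
        vals = [f for f in frames if f > 0]
        feats.append(mean(vals) if vals else 0.0)
    if "dynamics" in groups:          # onset rate: activations per frame pair
        onsets = sum(1 for a, b in zip(active, active[1:]) if not a and b)
        feats.append(onsets / max(len(frames) - 1, 1))
    return feats

# AU12 ("lip corner puller") intensities over a short hypothetical clip
au12 = [0, 0, 2, 3, 3, 0, 0, 1, 0]
print(au_features(au12, {"activation"}))                        # [1.0]
print(au_features(au12, {"activation", "intensity", "dynamics"}))
```

Richer groups (clustering, congruence) would extend the same pattern: each is one more conditional block appending its features, so an underfitting model can be grown one feature family at a time.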

1995 ◽  
Vol 7 (4) ◽  
pp. 527-534 ◽  
Author(s):  
Kenneth Asplund ◽  
Lilian Jansson ◽  
Astrid Norberg

Two methods of interpreting the videotaped facial expressions of four patients with severe dementia of the Alzheimer type were compared. Interpretations of facial expressions performed by means of unstructured naturalistic judgements revealed episodes in which the four patients exhibited anger, disgust, happiness, sadness, and surprise. When these episodes were assessed using a modified version of the Facial Action Coding System, there was, in total, 48% agreement between the two methods. The highest agreement, 98%, occurred for happiness shown by one patient. It was concluded that more emotions could be judged by means of the unstructured naturalistic method, which is based on an awareness of the total situation that facilitates imputing meaning to the patients' cues. It is a difficult task to find a balance between imputing too much meaning into the severely demented patients' sparse and unclear cues and ignoring the possibility that there is some meaning to be interpreted.


2021 ◽  
Vol 7 (1) ◽  
pp. 13-24
Author(s):  
Matahari Bhakti Nendya ◽  
Lailatul Husniah ◽  
Hardianto Wibowo ◽  
Eko Mulyanto Yuniarno

Facial expressions on 3D virtual characters play an important role in the production of an animated film. Obtaining the desired facial expression can be difficult and time-consuming for an animator. This study was conducted to generate facial expressions by combining several Action Units from FACS and implementing them on the face of a 3D virtual character. FACS Action Units were chosen because they correspond to the structure of the human facial muscles. The experiments produced Action Unit combinations that form expressions such as a joy expression, produced by the combination AU 12+26, and a surprise expression, produced by the combination AU 4+5+26. For the sadness and disgust expressions, some AUs were not represented in the 3D model, so the resulting expressions were less than optimal.
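The AU-combination approach above amounts to a lookup from sets of active Action Units to prototype expressions. A minimal sketch, using the two combinations reported in the abstract (joy = AU 12+26, surprise = AU 4+5+26); the exact-match rule and data structure are illustrative assumptions.

```python
# Map FACS Action Unit combinations to prototype expressions.
# frozenset keys make the lookup order-independent.
EXPRESSIONS = {
    frozenset({12, 26}): "joy",        # lip corner puller + jaw drop
    frozenset({4, 5, 26}): "surprise",
}

def classify(active_aus):
    """Return the expression whose AU prototype matches exactly, else None."""
    return EXPRESSIONS.get(frozenset(active_aus))

print(classify([12, 26]))    # joy
print(classify([4, 5, 26]))  # surprise
print(classify([12]))        # None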


2015 ◽  
Vol 3 (1) ◽  
Author(s):  
Friska G. Batoteng ◽  
Taufiq F. Pasiak ◽  
Shane H. R. Ticoalu

Abstract: Facial expression recognition is one way to recognize emotions that has not received much attention. The muscles that form facial expressions, known as the musculi faciales, move the face and produce the six basic human emotional expressions: happiness, sadness, anger, fear, disgust, and surprise. Human facial expressions can be measured using FACS (Facial Action Coding System). This study aims to determine which facial muscles were used most frequently and most rarely, and to determine the emotional expressions of Jokowi, a presidential candidate, through assessment of the facial muscles using FACS. This is a retrospective descriptive study. The research sample comprised all 30 photographs of Jokowi's facial expressions from the first presidential debate in 2014. Samples were taken from a video of the debate, converted to photographs, and then analyzed using FACS. The research showed that the most used action unit and facial muscle was AU 1, which acts on the musculus frontalis, pars medialis (14.75%). The muscles appearing least in Jokowi's facial expressions were the musculus orbicularis oculi, pars palpebralis, and AU 24, musculus orbicularis oris (0.82%). The dominant facial expression shown by Jokowi was sadness (36.67%).

Keywords: musculi faciales, facial expression, expression of emotion, FACS
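The frequency analysis in the study above can be sketched as a tally over per-photo AU codings: count each AU's occurrences across the photo set and express them as a percentage of all occurrences. The sample codings below are made up for illustration; they are not the study's data.

```python
# Tally AU occurrence percentages across a set of FACS-coded photos.
from collections import Counter

def au_percentages(photo_codings):
    """photo_codings: list of AU sets, one per photo.

    Returns {AU: percentage of all AU occurrences}, rounded to 2 decimals.
    """
    counts = Counter(au for photo in photo_codings for au in photo)
    total = sum(counts.values())
    return {au: round(100 * n / total, 2) for au, n in counts.items()}

# Hypothetical codings for three photos
codings = [{1, 4}, {1}, {1, 24}]
print(au_percentages(codings))  # {1: 60.0, 4: 20.0, 24: 20.0}
```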


1983 ◽  
Vol 6 (4) ◽  
pp. 427-440 ◽  
Author(s):  
Michiel Wiggers ◽  
Henk Willems

Several conceptualizations about the interdependency of three empathy responses during the empathic arousal process in children were contrasted, i.e., understanding, sharing, and facially expressing another's affect (cognitive, affective, and facial empathy, respectively). Videotaped episodes in which actors portrayed happiness, fear, sadness, and anger were presented to five-year-old girls in three conditions, 16 girls in each condition. In a first condition, emotions were conveyed by the actor's facial expressions, in a second condition by situational events, and in a third condition by both situational events and facial expressions. Girls' affective and cognitive empathy was assessed by asking questions about girls' own feelings and the actor's feelings, respectively. Girls' facial empathy was measured with Ekman and Friesen's Facial Action Coding System. Results corroborated an empathy conceptualization in which affective and facial empathy are mediated by cognitive empathy. An empathy conceptualization in which cognitive and affective empathy are aroused by afferent feedback of involuntary facial empathy was not supported by the results.


2007 ◽  
Vol 31 (1) ◽  
pp. 1-11 ◽  
Author(s):  
Victoria Talwar ◽  
Susan M. Murphy ◽  
Kang Lee

Prosocial lie-telling behavior in children between 3 and 11 years of age was examined using an undesirable gift paradigm. In the first condition, children received an undesirable gift and were questioned by the gift-giver about whether they liked the gift. In the second condition, children were also given an undesirable gift but received parental encouragement to tell a white lie prior to being questioned by the gift-giver. In the third condition, the child's parent received an undesirable gift and the child was encouraged to lie on behalf of their parent. In all conditions, the majority of children told a white lie and this tendency increased with age. Coding of children's facial expressions using Ekman and Friesen's (1978) Facial Action Coding System revealed significant but small differences between lie-tellers and control children in terms of both positive and negative facial expressions. Detailed parental instruction facilitated children's display of appropriate verbal and nonverbal expressive behaviors when they received an undesirable gift.


2021 ◽  
Author(s):  
Anna Morozov ◽  
Lisa Parr ◽  
Katalin Gothard ◽  
Rony Paz ◽  
Raviv Pryluk

Abstract: Internal affective states produce external manifestations such as facial expressions. In humans, the Facial Action Coding System (FACS) is widely used to objectively quantify the elemental facial action units (AUs) that build complex facial expressions. A similar system has been developed for macaque monkeys, the Macaque Facial Action Coding System (MaqFACS); yet unlike the human counterpart, which has already been partially replaced by automatic algorithms, this system still requires labor-intensive manual coding. Here, we developed and implemented the first prototype for automatic MaqFACS coding. We applied the approach to the analysis of behavioral and neural data recorded from freely interacting macaque monkeys. The method achieved high performance in recognizing six dominant AUs, generalizing between conspecific individuals (Macaca mulatta) and even between species (Macaca fascicularis). The study lays the foundation for fully automated detection of facial expressions in animals, which is crucial for investigating the neural substrates of social and affective states.
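At its core, automatic AU coding of the kind described above maps a per-frame feature vector to an AU label with a trained classifier. A toy sketch using a nearest-centroid rule; the classifier, the two-dimensional features, and the centroids are illustrative assumptions, since the abstract does not specify the study's actual method.

```python
# Toy nearest-centroid AU classifier over per-frame feature vectors
# (features could be, e.g., normalized facial-landmark distances).
import math

def nearest_centroid(centroids, features):
    """Return the AU label whose training centroid is closest to `features`."""
    return min(centroids, key=lambda label: math.dist(centroids[label], features))

# Hypothetical centroids "learned" from MaqFACS-coded training frames
centroids = {
    "AU12 (lip corner puller)": [0.9, 0.1],
    "AU26 (jaw drop)": [0.2, 0.8],
}
print(nearest_centroid(centroids, [0.85, 0.2]))  # AU12 (lip corner puller)
```

Cross-individual and cross-species generalization, as reported in the study, would then be tested by training the centroids on frames from one animal or species and evaluating on another.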

