GAMBARAN MUSCULI FACIALIS PADA EKSPRESI WAJAH DAN EMOSI DENGAN MENGGUNAKAN FACIAL ACTION CODING SYSTEM PADA CALON PRESIDEN JOKOWI (Depiction of the Musculi Faciales in Facial Expression and Emotion Using the Facial Action Coding System on Presidential Candidate Jokowi)

2015, Vol 3 (1)
Author(s): Friska G. Batoteng, Taufiq F. Pasiak, Shane H. R. Ticoalu

Abstract: Facial expression recognition is one way of recognizing emotions that has not received much attention. The muscles that form facial expressions, the musculi faciales, move the face and produce the six basic expressions of human emotion: happiness, sadness, anger, fear, disgust, and surprise. Human facial expressions can be measured using FACS (Facial Action Coding System). This study aims to determine which facial muscles were used most and least frequently, and to determine the emotional expressions of Jokowi, a presidential candidate, by assessing the facial muscles with FACS. This is a retrospective descriptive study. The sample consisted of 30 photographs covering Jokowi's facial expressions during the first presidential debate in 2014. The samples were extracted as still frames from the debate video and then analyzed further with FACS. The results show that the most frequently used action unit and facial muscle was AU 1, which engages the musculus frontalis pars medialis (14.75%). The muscles appearing least often in Jokowi's facial expressions were the musculus orbicularis oculi pars palpebralis and the musculus orbicularis oris (AU 24) (0.82%). The dominant facial expression shown by Jokowi was sadness (36.67%).
Keywords: musculi facialis, facial expression, expression of emotion, FACS
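The percentages above come from tallying FACS action units over the coded photographs. Below is a minimal sketch of that tallying step; the per-photo AU codes and the sadness rule (AU 1 + AU 4 + AU 15) are illustrative assumptions, not the study's data or coding tables.

```python
from collections import Counter

# Each photo is assumed to have already been FACS-coded into a list of active
# action units (AUs). These codings are hypothetical examples.
coded_photos = [
    {"photo": "frame_01.png", "aus": [1, 4, 15]},
    {"photo": "frame_02.png", "aus": [1, 2, 26]},
    {"photo": "frame_03.png", "aus": [4, 15, 17]},
]

# Count how often each AU appears across all photos.
au_counts = Counter(au for photo in coded_photos for au in photo["aus"])
total = sum(au_counts.values())

# Relative frequency of each AU, analogous to the reported percentages (e.g. AU 1 = 14.75%).
for au, count in au_counts.most_common():
    print(f"AU {au}: {count / total:.2%}")

# A crude emotion assignment from AU combinations (assumed rule for illustration):
# sadness is commonly associated with AU 1 + AU 4 + AU 15.
SADNESS_AUS = {1, 4, 15}
sad_photos = [p["photo"] for p in coded_photos if SADNESS_AUS <= set(p["aus"])]
print("Photos coded as sadness:", sad_photos)
```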

1996, Vol 83 (1), pp. 263-274
Author(s): Andrea Clarici, Francesca Melon, Susanne Braun, Antonio Bava

Asymmetries of facial expression were estimated with the Facial Action Coding System in a sample of 14 experimental subjects during voluntary control of facial mimicry while viewing videotapes. The subjects were instructed either to express facially the emotion they experienced or to dissimulate their true emotion with a facial expression opposite (incongruous) to what they actually felt. Only during dissimulation did facial mimicry show an asymmetric distribution toward the lower left side of the face.
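Such asymmetries are often summarized with a laterality index computed from separately scored left- and right-hemiface AU intensities. A minimal sketch, with hypothetical numbers rather than the study's data:

```python
# Laterality index for facial mimicry, assuming AU intensities were scored
# separately for the left and right hemiface (numbers below are hypothetical).
def laterality_index(left: float, right: float) -> float:
    """Return a value in [-1, 1]; positive means stronger left-side activity."""
    if left + right == 0:
        return 0.0
    return (left - right) / (left + right)

# Example: AU 12 intensity (0-5 scale) for one dissimulated expression.
print(laterality_index(left=3.5, right=2.0))  # > 0 -> left-dominant, as reported during dissimulation
```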


2018, Vol 9 (2), pp. 31-38
Author(s): Fransisca Adis, Yohanes Merci Widiastomo

Facial expression is one of the aspects that conveys story and a character's emotion in 3D animation. To achieve that, the character's facial expressions need to be planned from the very beginning of production. At an early stage, the character designer needs to think about the expressions once the character design is done, the rigger needs to create a flexible rig that can achieve the design, and the animator then has a clear picture of how to animate the face. The Facial Action Coding System (FACS), originally developed by Carl-Herman Hjortsjö and adopted by Paul Ekman and Wallace V. Friesen, can be used to identify emotion in a person generally. This paper explains how the writer uses FACS to help design the facial expressions of 3D characters. FACS is used to determine the basic characteristics of the basic face shapes when showing emotions, compared against actual face references. Keywords: animation, facial expression, non-dialog
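One common way to carry FACS into a 3D pipeline is to let blend shapes correspond to action units and express each emotion as a weighted AU combination. A minimal sketch, in which the blend-shape names and emotion-to-AU recipes are illustrative assumptions rather than the paper's rig:

```python
# Driving a facial rig from FACS, assuming one blend shape per action unit.
# Blend-shape names and emotion recipes are illustrative assumptions.
AU_TO_BLENDSHAPE = {
    "AU1": "browInnerUp",
    "AU4": "browDown",
    "AU6": "cheekRaise",
    "AU12": "lipCornerPull",
    "AU15": "lipCornerDepress",
}

# Emotion recipes expressed as AU weights in [0, 1].
EMOTION_RECIPES = {
    "happy": {"AU6": 0.8, "AU12": 1.0},
    "sad":   {"AU1": 0.7, "AU4": 0.4, "AU15": 0.9},
}

def blendshape_weights(emotion: str) -> dict:
    """Translate an emotion recipe into blend-shape weights for the rig."""
    recipe = EMOTION_RECIPES[emotion]
    return {AU_TO_BLENDSHAPE[au]: weight for au, weight in recipe.items()}

print(blendshape_weights("sad"))
# {'browInnerUp': 0.7, 'browDown': 0.4, 'lipCornerDepress': 0.9}
```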


2011, pp. 255-317
Author(s): Daijin Kim, Jaewon Sung

Facial expression has long been of interest to psychology, ever since Darwin published The Expression of the Emotions in Man and Animals (Darwin, C., 1899). Psychologists have studied facial expressions to reveal their role and mechanism. One of Darwin's great discoveries is that prototypical facial expressions exist across multiple cultures, which provided the theoretical background for vision researchers who tried to classify the prototypical facial expressions from images. The six representative facial expressions are fear, happiness, sadness, surprise, anger, and disgust (Mase, 1991; Yacoob and Davis, 1994). On the other hand, the real facial expressions we meet in daily life consist of many distinct signals that differ only subtly. Further research on facial expressions required an objective method to describe and measure the distinct activity of facial muscles. The facial action coding system (FACS), proposed by Hager and Ekman (1978), defines 46 distinct action units (AUs), each of which describes the activity of a distinct muscle or muscle group. The development of this objective description method also influenced vision researchers, who tried to detect the emergence of each AU (Tian et al., 2001).
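Detected AUs are typically mapped back to the six prototypical expressions by matching them against prototype AU combinations. A minimal sketch of such a rule-based mapping, using commonly cited EMFACS-style AU combinations as assumptions rather than the chapter's own tables:

```python
# Map a set of detected action units to the best-matching prototypical expression.
# The AU combinations below are commonly cited approximations, not definitive rules.
PROTOTYPES = {
    "happy":     {6, 12},
    "sad":       {1, 4, 15},
    "surprised": {1, 2, 5, 26},
    "afraid":    {1, 2, 4, 5, 20, 26},
    "angry":     {4, 5, 7, 23},
    "disgust":   {9, 15, 16},
}

def classify(detected_aus: set) -> str:
    """Pick the prototype whose AU set overlaps most with the detected AUs."""
    def score(proto_aus: set) -> float:
        return len(proto_aus & detected_aus) / len(proto_aus)
    return max(PROTOTYPES, key=lambda name: score(PROTOTYPES[name]))

print(classify({6, 12, 25}))  # -> 'happy'
```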


2020, pp. 59-69
Author(s): Walid Mahmod, Jane Stephan, Anmar Razzak

Automatic analysis of facial expressions is rapidly becoming an area of intense interest in the computer vision and artificial intelligence research communities. This paper presents an approach to recognizing the six basic prototype expressions (joy, surprise, anger, sadness, fear, and disgust) based on the Facial Action Coding System (FACS). The approach combines several transforms (referred to as the Walidlet hybrid transform), consisting of the Fast Fourier Transform, the Radon transform, and the multiwavelet transform, for feature extraction. A Kohonen Self-Organizing Feature Map (SOFM) is then used to cluster patterns based on the features obtained from the hybrid transform. The results show that the method achieves very good accuracy in facial expression recognition and has several promising features: the feature-extraction scheme mitigates problems caused by illumination and by faces that vary considerably between individuals due to age, ethnicity, gender, and cosmetics, and it does not require precise normalization or lighting equalization. An average clustering accuracy of 94.8% was achieved for the six basic expressions, with several databases used to test the method.
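A minimal sketch of the pipeline's general shape follows, under several stated substitutions: standard discrete wavelets (via PyWavelets) stand in for the multiwavelet transform, random arrays stand in for face images, and a tiny NumPy self-organizing feature map replaces a full SOFM toolkit. None of this is the authors' exact method.

```python
import numpy as np
from skimage.transform import radon   # Radon transform
import pywt                            # standard wavelets, used here in place of multiwavelets

def hybrid_features(face: np.ndarray) -> np.ndarray:
    """Concatenate coarse FFT, Radon, and wavelet statistics into one feature vector."""
    fft_mag = np.abs(np.fft.fft2(face))
    sinogram = radon(face, theta=np.linspace(0.0, 180.0, 18), circle=False)
    approx, _ = pywt.dwt2(face, "db2")          # approximation coefficients only
    parts = [fft_mag, sinogram, approx]
    return np.array([s for p in parts for s in (p.mean(), p.std())])

def train_sofm(data: np.ndarray, grid: int = 3, epochs: int = 50,
               lr: float = 0.5, sigma: float = 1.0) -> np.ndarray:
    """Train a tiny SOFM on a grid x grid map; returns unit weights."""
    rng = np.random.default_rng(0)
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)], dtype=float)
    w = rng.normal(size=(grid * grid, data.shape[1]))
    for epoch in range(epochs):
        alpha = lr * (1 - epoch / epochs)       # decaying learning rate
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))   # best-matching unit
            # Gaussian neighborhood around the BMU on the map grid.
            dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-dist2 / (2 * sigma ** 2))
            w += alpha * h[:, None] * (x - w)
    return w

# Hypothetical usage with random "face" images standing in for a real database.
faces = [np.random.rand(64, 64) for _ in range(12)]
X = np.vstack([hybrid_features(f) for f in faces])
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)   # normalize features
weights = train_sofm(X)
clusters = [int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in X]
print(clusters)
```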


1995, Vol 7 (4), pp. 527-534
Author(s): Kenneth Asplund, Lilian Jansson, Astrid Norberg

Two methods of interpreting the videotaped facial expressions of four patients with severe dementia of the Alzheimer type were compared. Interpretations of facial expressions made by means of unstructured naturalistic judgements revealed episodes in which the four patients exhibited anger, disgust, happiness, sadness, and surprise. When these episodes were assessed using a modified version of the Facial Action Coding System, there was, in total, 48% agreement between the two methods. The highest agreement, 98%, occurred for happiness shown by one patient. It was concluded that more emotions could be judged by means of the unstructured naturalistic method, which is based on an awareness of the total situation and thus facilitates imputing meaning into the patients' cues. It is a difficult task to find a balance between imputing too much meaning into severely demented patients' sparse and unclear cues and ignoring the possibility that there is some meaning to be interpreted.
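The 48% overall figure is a simple percent-agreement calculation between the two methods' episode labels. A minimal sketch of that calculation, with hypothetical labels rather than the study's data:

```python
# Percent agreement between two interpretation methods, assuming each episode
# received one label per method (labels below are hypothetical).
naturalistic = ["happiness", "anger", "sadness", "surprise", "happiness"]
facs_based   = ["happiness", "neutral", "sadness", "neutral", "happiness"]

agreement = sum(a == b for a, b in zip(naturalistic, facs_based)) / len(naturalistic)
print(f"Overall agreement: {agreement:.0%}")   # e.g. 60% for these made-up labels

# Per-emotion agreement, analogous to the 98% reported for happiness in one patient.
for emotion in set(naturalistic):
    pairs = [(a, b) for a, b in zip(naturalistic, facs_based) if a == emotion]
    hits = sum(a == b for a, b in pairs)
    print(f"{emotion}: {hits}/{len(pairs)} episodes matched")
```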


CNS Spectrums, 2019, Vol 24 (1), pp. 204-205
Author(s): Mina Boazak, Robert Cotes

Introduction: Facial expressivity in schizophrenia has been a topic of clinical interest for the past century. Besides having difficulty decoding the facial expressions of others, people with schizophrenia often have difficulty encoding facial expressions. Traditionally, evaluations of facial expressions have been conducted by trained human observers using the Facial Action Coding System. The process was slow and subject to intra- and inter-observer variability. In the past decade, the traditional facial action coding system developed by Ekman has been adapted for use in affective computing. Here we assess the applications of this adaptation for schizophrenia, the findings of current groups, and the future role of this technology.

Materials and Methods: We review the applications of computer vision technology in schizophrenia using PubMed and Google Scholar with the search criteria "computer vision" AND "Schizophrenia", covering January 2010 to June 2018.

Results: Five articles were selected for inclusion, representing one case series and four case-control analyses. The authors assessed variations in facial action unit presence, intensity, various measures of length of activation, action unit clustering, congruence, and appropriateness. Findings point to variations in each of these areas, except action unit appropriateness, between control and schizophrenia patients. Computer vision techniques were also demonstrated to have high accuracy in classifying schizophrenia from control patients, reaching an AUC just under 0.9 in one study, and to predict psychometric scores, reaching Pearson correlation values of under 0.7.

Discussion: Our review of the literature demonstrates agreement between the findings of traditional and contemporary assessment techniques of facial expressivity in schizophrenia. Our findings also demonstrate that current computer vision techniques can differentiate schizophrenia from control populations and predict psychometric scores. Nevertheless, the predictive accuracy of these technologies leaves room for growth. In our analysis we found two modifiable areas that may contribute to improving algorithm accuracy: assessment protocol and feature inclusion. Based on our review, we recommend assessing facial expressivity during a period of silence in addition to an assessment during a clinically structured interview using emotionally evocative questions. Furthermore, where underfit is a problem, we recommend progressive inclusion of features including action unit activation, intensity, action unit rate of onset and offset, clustering (including richness, distribution, and typicality), and congruence. Inclusion of each of these features may improve algorithm predictive accuracy.

Conclusion: We review current applications of computer vision in the assessment of facial expressions in schizophrenia. We present the results of current innovative works in the field and discuss areas for continued development.
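The feature set recommended above (action unit activation, intensity, and rate of onset and offset) can be computed from per-frame AU intensity traces. A minimal sketch, assuming a hypothetical single-AU trace and an arbitrary activation threshold:

```python
import numpy as np

# Turn a per-frame AU intensity trace into summary features of the kind
# discussed above. The trace and threshold are hypothetical; a real system
# would obtain AU intensities from a face-analysis toolkit frame by frame.
def au_features(intensity: np.ndarray, fps: float = 30.0, threshold: float = 0.5) -> dict:
    active = intensity >= threshold                 # frames where the AU is "on"
    transitions = np.diff(active.astype(int))
    onsets = int(np.sum(transitions == 1))          # off -> on events
    offsets = int(np.sum(transitions == -1))        # on -> off events
    duration_s = len(intensity) / fps
    return {
        "fraction_active": float(active.mean()),
        "mean_intensity": float(intensity[active].mean()) if active.any() else 0.0,
        "onset_rate_per_min": onsets / duration_s * 60,
        "offset_rate_per_min": offsets / duration_s * 60,
    }

# Hypothetical AU 12 trace over ~4 seconds at 30 fps.
trace = np.clip(np.sin(np.linspace(0, 6 * np.pi, 120)), 0, None) * 2.0
print(au_features(trace))
```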

