OPTIMIZED ACTION UNITS FEATURES FOR EFFICIENT DESIGN OF DECEPTION DETECTION SYSTEM

2021 ◽  
Vol 1 (1) ◽  
pp. 104-111
Author(s):  
Shaimaa H. Abd ◽  
Ivan A. Hashim ◽  
Ali S. Jalal

Deception detection is becoming an interesting field in different areas related to security, criminal investigation, law enforcement and terrorism detection. Recently, non-verbal features have become essential for the deception detection process. One of the most important kinds of these features is facial expression. The importance of these expressions comes from the idea that the human face contains different expressions, each of which is directly related to a certain state. In this research paper, facial expression data were collected for 102 participants (25 women and 77 men) as video clips. There are 504 clips for lie responses and 384 for truth responses (888 video clips in total). Facial expressions in the form of Action Units (AUs) are extracted for each frame within a video clip. The AUs are encoded based on the Facial Action Coding System (FACS); 18 AUs are used: AU 1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 17, 20, 23, 25, 26, 28 and 45. Based on the collected data, only six AUs are the most effective and have a direct impact on the discrimination between liars and truth tellers: AU 6, 7, 10, 12, 14 and 28.
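A minimal sketch (not the authors' implementation) of how such a reduced AU feature set could feed a lie/truth classifier; the CSV name, its layout (one row per clip, binary AU columns plus a label column) and the scikit-learn model are assumptions for illustration.

```python
# Sketch only: train a lie/truth classifier on the six AUs reported as most
# discriminative.  "au_features.csv" and its column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

SELECTED_AUS = ["AU06", "AU07", "AU10", "AU12", "AU14", "AU28"]

df = pd.read_csv("au_features.csv")          # one row per video clip (hypothetical file)
X = df[SELECTED_AUS].values                  # keep only the selected AUs
y = df["label"].values                       # 1 = lie response, 0 = truth response

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)    # 5-fold cross-validation
print(f"Mean accuracy with 6 AUs: {scores.mean():.3f}")
```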

2018 ◽  
Vol 7 (3.20) ◽  
pp. 284
Author(s):  
Hamimah Ujir ◽  
Irwandi Hipiny ◽  
D N.F. Awang Iskandar

Most works on quantifying facial deformation are based on the action units (AUs) provided by the Facial Action Coding System (FACS), which describes facial expressions in terms of forty-six component movements. Each AU corresponds to the movement of individual facial muscles. This paper presents a rule-based approach to classifying AUs that depends on certain facial features. This work only covers deformation of facial features based on posed Happy and Sad expressions obtained from the BU-4DFE database. Different studies refer to different combinations of AUs that form the Happy and Sad expressions. According to the FACS rules outlined in this work, an AU has more than one facial property that needs to be observed. An intensity comparison and analysis of the AUs involved in the Sad and Happy expressions are presented. Additionally, a dynamic analysis of the AUs is carried out to determine the temporal segments of expressions, i.e. the durations of the onset, apex and offset phases. Our findings show that AU15, for the sad expression, and AU12, for the happy expression, show consistent facial feature deformation for all properties throughout the expression period. However, for AU1 and AU4, the intensity of their properties differs during the expression period.
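As an illustration of the temporal-segment analysis described above, the following hedged sketch thresholds a per-frame AU intensity trace into onset, apex and offset phases; the thresholds and the synthetic trace are assumptions, not values from the paper.

```python
# Illustrative sketch: segment one AU intensity trace into onset, apex, offset.
import numpy as np

def temporal_segments(intensity, active_thr=0.2, apex_frac=0.9):
    """Return (onset, apex, offset) durations in frames for one AU trace."""
    active = np.where(intensity > active_thr)[0]
    if active.size == 0:
        return 0, 0, 0
    start, end = active[0], active[-1]
    peak_level = apex_frac * intensity.max()
    apex_frames = np.where(intensity >= peak_level)[0]
    apex_start, apex_end = apex_frames[0], apex_frames[-1]
    onset = apex_start - start           # frames from activation to apex
    apex = apex_end - apex_start + 1     # frames spent near peak intensity
    offset = end - apex_end              # frames from apex back to neutral
    return onset, apex, offset

# Example with a synthetic AU12 trace (rise, hold, decay)
trace = np.concatenate([np.linspace(0, 1, 10), np.ones(5), np.linspace(1, 0, 15)])
print(temporal_segments(trace))
```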


2018 ◽  
Vol 4 (10) ◽  
pp. 119 ◽  
Author(s):  
Adrian Davison ◽  
Walied Merghani ◽  
Moi Yap

Micro-expressions are brief, spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different from normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow) and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement). The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the state-of-the-art 5-class emotion-based classification on CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.
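The descriptor below is a simplified, illustrative LBP-TOP variant (not the authors' pipeline): it computes Local Binary Pattern histograms on one central XY, XT and YT slice of a grey-scale video cube and concatenates them, whereas full LBP-TOP pools histograms over many slices and spatial blocks.

```python
# Simplified LBP-TOP-style descriptor: one LBP histogram per orthogonal plane.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_top(video, P=8, R=1, bins=59):
    """video: uint8 array of shape (T, H, W), grey-scale frames."""
    T, H, W = video.shape
    planes = [
        video[T // 2, :, :],   # XY plane (a middle frame)
        video[:, H // 2, :],   # XT plane (a horizontal slice through time)
        video[:, :, W // 2],   # YT plane (a vertical slice through time)
    ]
    hists = []
    for plane in planes:
        lbp = local_binary_pattern(plane, P, R, method="nri_uniform")
        h, _ = np.histogram(lbp, bins=bins, range=(0, bins), density=True)
        hists.append(h)
    return np.concatenate(hists)

# Example on a random 30-frame clip
clip = np.random.randint(0, 256, (30, 64, 64), dtype=np.uint8)
print(lbp_top(clip).shape)   # (177,) = 3 planes x 59-bin histograms
```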


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245117
Author(s):  
Catia Correia-Caeiro ◽  
Kathryn Holmes ◽  
Takako Miyabe-Nishiwaki

Facial expressions are complex and subtle signals, central for communication and emotion in social mammals. Traditionally, facial expressions have been classified as a whole, disregarding small but relevant differences in displays. Even with the same morphological configuration, different information can be conveyed depending on the species. Due to hardwired processing of faces in the human brain, humans are quick to attribute emotion but have difficulty registering facial movement units. The well-known human FACS (Facial Action Coding System) is the gold standard for objectively measuring facial expressions, and can be adapted through anatomical investigation and functional homologies for cross-species systematic comparisons. Here we aimed to develop a FACS for Japanese macaques, following established FACS methodology: first, we considered the species' facial muscular plan; second, we ascertained functional homologies with other primate species; and finally, we categorised each independent facial movement into Action Units (AUs). Due to similarities in the facial musculature of rhesus and Japanese macaques, the MaqFACS (previously developed for rhesus macaques) was used as a basis to extend the FACS tool to Japanese macaques, while highlighting the morphological and appearance-change differences between the two species. We documented 19 AUs, 15 Action Descriptors (ADs) and 3 Ear Action Units (EAUs) in Japanese macaques, with all movements of MaqFACS found in Japanese macaques. New movements were also observed, indicating a slightly larger repertoire than in rhesus or Barbary macaques. The MaqFACS extension for Japanese macaques reported here, when used together with the MaqFACS, comprises a valuable objective tool for the systematic and standardised analysis of facial expressions in Japanese macaques. It will allow the investigation of the evolution of communication and emotion in primates, as well as contribute to improving the welfare of individuals, particularly in captivity and laboratory settings.


2015 ◽  
Vol 738-739 ◽  
pp. 666-669
Author(s):  
Yao Feng Xue ◽  
Hua Li Sun ◽  
Ye Duan

The Candide face model and the Facial Action Coding System (FACS) are introduced in this paper. The relations between the positions of the feature points of the Candide-3 model and the action units of FACS are studied. An application system for computing the facial expressions of students during the experimental teaching process is developed, and its feasibility is demonstrated.
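A highly simplified sketch of the idea of relating Candide-3 feature-point positions to AU activations; the vertex indices, scaling constant and 0–5 intensity scale are hypothetical and not taken from the paper.

```python
# Sketch: estimate an AU intensity from how far tracked Candide-3 feature
# points moved away from their neutral positions.  Indices are hypothetical.
import numpy as np

def au_intensity(points, neutral, idx_pair=(31, 34), scale=10.0):
    """Rough AU intensity from the displacement of two feature points."""
    disp = np.linalg.norm(points[list(idx_pair)] - neutral[list(idx_pair)], axis=1)
    return float(np.clip(scale * disp.mean(), 0.0, 5.0))   # FACS-style 0-5 scale

# Toy usage with random 2-D coordinates for the 113 Candide-3 vertices
neutral = np.random.rand(113, 2)
frame = neutral + 0.02 * np.random.randn(113, 2)
print(au_intensity(frame, neutral))
```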


2021 ◽  
Author(s):  
Xunbing Shen ◽  
Gaojie Fan ◽  
Caoyuan Niu ◽  
Zhencai Chen

The leakage theory in the field of deception detection predicts that liars cannot repress leaked felt emotions (e.g., fear or delight), and that people who are lying will feel fear (of being discovered), especially in high-stake situations. Therefore, we assumed that deceit could be revealed by analyzing facial expressions of fear. Detecting and analyzing subtle leaked fear expressions is a challenging task for laypeople; it is, however, a relatively easy job for computer vision and machine learning. To test the hypothesis, we analyzed video clips from the game show "The Moment of Truth" using OpenFace (to output the Action Units of fear and face landmarks) and WEKA (to classify the video clips in which the players were lying or telling the truth). The results showed that some algorithms could achieve an accuracy greater than 80% using only the AUs of fear. In addition, the total duration of AU20 (a fear-related AU) was found to be shorter under the lying condition than under the truth-telling condition. Further analysis showed that this was because the duration from peak to offset of AU20 was shorter under the lying condition than under the truth-telling condition. The results also showed that facial movements around the eyes were more asymmetrical while people were telling lies. All the results suggest that facial clues to deception do exist, and that fear could be a cue for distinguishing liars from truth-tellers.
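A hedged sketch of the kind of per-clip measurement described above: computing the total active duration of AU20 from an OpenFace output file. OpenFace writes per-frame AU presence columns such as AU20_c, but the file name and the assumed 30 fps frame rate here are illustrative only.

```python
# Sketch: total duration (seconds) that AU20 is active in one OpenFace CSV.
import pandas as pd

def au20_duration_seconds(csv_path, fps=30.0):
    df = pd.read_csv(csv_path)
    df.columns = [c.strip() for c in df.columns]   # OpenFace pads column names with spaces
    present = df["AU20_c"] > 0.5                   # binary per-frame presence of AU20
    return present.sum() / fps                     # total seconds AU20 is active

print(au20_duration_seconds("clip_001_openface.csv"))  # hypothetical file name
```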


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4222
Author(s):  
Shushi Namba ◽  
Wataru Sato ◽  
Masaki Osumi ◽  
Koh Shimokawa

In the field of affective computing, achieving accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of systems that now have access to dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, AFARtoolbox) that detect facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems could detect the presence of AUs in the dynamic facial database at a level above chance. Moreover, OpenFace and AFAR provided higher area under the receiver operating characteristic curve values than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and the static mode was superior to the dynamic mode for analyzing the posed facial database. These findings characterise the prediction patterns of each system and provide guidance for research on facial expressions.
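A minimal sketch of the kind of AU-wise evaluation described above (not the authors' code): comparing detectors by the area under the ROC curve, assuming frame-aligned ground-truth FACS codes and continuous detector scores.

```python
# Sketch: per-AU ROC-AUC for one automated AU detection system.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_system(y_true, y_score, au_names):
    """y_true: (n_frames, n_aus) binary FACS codes; y_score: detector outputs."""
    return {au: roc_auc_score(y_true[:, i], y_score[:, i])
            for i, au in enumerate(au_names)}

# Toy example with two AUs and a noisy detector
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(500, 2))
y_score = y_true + rng.normal(0, 0.8, size=(500, 2))
print(evaluate_system(y_true, y_score, ["AU12", "AU14"]))
```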


2019 ◽  
Vol 63 (4) ◽  
pp. 689-712
Author(s):  
K. Rothermich ◽  
O. Caivano ◽  
L.J. Knoll ◽  
V. Talwar

Interpreting other people’s intentions during communication represents a remarkable challenge for children. Although many studies have examined children’s understanding of, for example, sarcasm, less is known about how they interpret it. Using realistic audiovisual scenes, we invited 124 children between 8 and 12 years old to watch video clips of young adults using different speaker intentions. After watching each video clip, children answered questions about the characters and their beliefs, and the perceived friendliness of the speaker. Children’s responses reveal age and gender differences in the ability to interpret speaker belief and social intentions, especially for scenarios conveying teasing and prosocial lies. We found that the ability to infer the speaker’s belief behind prosocial lies and to interpret social intentions increases with age. Our results suggest that children at the age of 8 years already show adult-like abilities to understand literal statements, whereas the ability to infer specific social intentions, such as teasing and prosocial lies, is still developing between the ages of 8 and 12 years. Moreover, girls performed better than boys in classifying prosocial lies and sarcasm as insincere. The outcomes expand our understanding of how children interpret speaker intentions and suggest further research into the development of teasing and prosocial lie interpretation.


2010 ◽  
Vol 35 (1) ◽  
pp. 1-16 ◽  
Author(s):  
Etienne B. Roesch ◽  
Lucas Tamarit ◽  
Lionel Reveret ◽  
Didier Grandjean ◽  
David Sander ◽  
...  

2021 ◽  
pp. 174702182110480
Author(s):  
Tochukwu Onwuegbusi ◽  
Frouke Hermens ◽  
Todd Hogue

Recent advances in software and hardware have allowed eye tracking to move away from static images to more ecologically relevant video streams. The analysis of eye tracking data for such dynamic stimuli, however, is not without challenges. The frame-by-frame coding of regions of interest (ROIs) is labour-intensive and computer vision techniques to automatically code such ROIs are not yet mainstream, restricting the use of such stimuli. Combined with the more general problem of defining relevant ROIs for video frames, methods are needed that facilitate data analysis. Here, we present a first evaluation of an easy-to-implement data-driven method with the potential to address these issues. To test the new method, we examined the differences in eye movements of self-reported politically left- or right-wing leaning participants to video clips of left- and right-wing politicians. The results show that our method can accurately predict group membership on the basis of eye movement patterns, isolate video clips that best distinguish people on the political left–right spectrum, and reveal the section of each video clip with the largest group differences. Our methodology thereby aids the understanding of group differences in gaze behaviour, and the identification of critical stimuli for follow-up studies or for use in saccade diagnosis.
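A hedged sketch of one way such a data-driven group classification could be set up (the features, model and leave-one-out scheme are assumptions, not the paper's exact method):

```python
# Sketch: classify left- vs right-leaning viewers from per-clip gaze features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, LeaveOneOut

# Hypothetical data: one row per participant, columns are gaze features
# (e.g. mean gaze position per video clip); labels are political leaning.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 20))
y = np.repeat([0, 1], 20)                       # 0 = left-leaning, 1 = right-leaning

clf = SVC(kernel="linear")
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy: {acc:.2f}")     # chance level is about 0.50 on random data
```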


2015 ◽  
Vol 52 ◽  
pp. 601-713 ◽  
Author(s):  
Haonan Yu ◽  
N. Siddharth ◽  
Andrei Barbu ◽  
Jeffrey Mark Siskind

We present an approach to simultaneously reasoning about a video clip and an entire natural-language sentence. The compositional nature of language is exploited to construct models which represent the meanings of entire sentences composed out of the meanings of the words in those sentences mediated by a grammar that encodes the predicate-argument relations. We demonstrate that these models faithfully represent the meanings of sentences and are sensitive to how the roles played by participants (nouns), their characteristics (adjectives), the actions performed (verbs), the manner of such actions (adverbs), and changing spatial relations between participants (prepositions) affect the meaning of a sentence and how it is grounded in video. We exploit this methodology in three ways. In the first, a video clip along with a sentence are taken as input and the participants in the event described by the sentence are highlighted, even when the clip depicts multiple similar simultaneous events. In the second, a video clip is taken as input without a sentence and a sentence is generated that describes an event in that clip. In the third, a corpus of video clips is paired with sentences which describe some of the events in those clips and the meanings of the words in those sentences are learned. We learn these meanings without needing to specify which attribute of the video clips each word in a given sentence refers to. The learned meaning representations are shown to be intelligible to humans.

