Catching a liar through facial expression of fear

2021 ◽  
Author(s):  
Xunbing Shen ◽  
Gaojie Fan ◽  
Caoyuan Niu ◽  
Zhencai Chen

Abstract: The leakage theory in the field of deception detection predicts that liars cannot fully repress leaked felt emotions (e.g., fear or delight), and that people who are lying feel fear (of being discovered), especially in high-stakes situations. We therefore assumed that deceit could be revealed by analyzing facial expressions of fear. Detecting and analyzing subtle leaked fear expressions is a challenging task for laypeople; it is, however, a relatively easy job for computer vision and machine learning. To test the hypothesis, we analyzed video clips from the game show "The Moment of Truth" using OpenFace (to output the Action Units of fear and face landmarks) and WEKA (to classify the video clips in which the players were lying or telling the truth). The results showed that some algorithms could achieve an accuracy greater than 80% using only the AUs of fear. In addition, the total duration of AU20 of fear was shorter under the lying condition than under the truth-telling condition. Further analysis showed that the duration of fear was shorter because the time from the peak to the offset of AU20 was shorter under the lying condition than under the truth-telling condition. The results also showed that facial movements around the eyes were more asymmetrical when people were telling lies. All the results suggest that facial clues to deception do exist and that fear could be a cue for distinguishing liars from truth-tellers.
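The feature-extraction step described above can be illustrated with a short sketch. This is not the authors' code: it assumes OpenFace's standard CSV output (per-frame AU intensity column AU20_r and presence column AU20_c) and derives the two duration measures discussed in the abstract, the total time AU20 is active and the time from its intensity peak to its offset.

```python
# Minimal sketch (not the authors' implementation): derive AU20 duration features
# from an OpenFace CSV. The column names "AU20_r"/"AU20_c" follow OpenFace's usual
# output format but should be checked against your own files; fps is assumed.
import pandas as pd

def au20_duration_features(csv_path, fps=30.0, presence_thresh=0.5):
    df = pd.read_csv(csv_path, skipinitialspace=True)
    intensity = df["AU20_r"].to_numpy()                    # AU20 intensity per frame
    present = df["AU20_c"].to_numpy() >= presence_thresh   # AU20 presence per frame

    total_duration = present.sum() / fps                   # total time AU20 is active (s)

    if present.any():
        peak = intensity.argmax()                          # frame of maximum intensity
        active = present.nonzero()[0]
        after_peak = active[active >= peak]
        offset = after_peak.max() if after_peak.size else peak
        peak_to_offset = (offset - peak) / fps             # time from peak to offset (s)
    else:
        peak_to_offset = 0.0

    return {"AU20_total_duration": total_duration,
            "AU20_peak_to_offset": peak_to_offset}

# Example: features = au20_duration_features("clip_001.csv")
```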

2021 ◽  
Vol 12 ◽  
Author(s):  
Xunbing Shen ◽  
Gaojie Fan ◽  
Caoyuan Niu ◽  
Zhencai Chen

High stakes can be stressful whether one is telling the truth or lying. However, liars feel extra fear of being discovered compared with truth-tellers, and according to the "leakage theory," this fear is almost impossible to repress. We therefore assumed that analyzing facial expressions of fear could reveal deceit. Detecting and analyzing subtle leaked fear expressions is a challenging task for laypeople; it is, however, a relatively easy job for computer vision and machine learning. To test the hypothesis, we analyzed video clips from the game show "The Moment of Truth" using OpenFace (to output the Action Units (AUs) of fear and face landmarks) and WEKA (to classify the video clips in which the players were lying or telling the truth). The results showed that some algorithms achieved an accuracy of over 80% using only the AUs of fear. In addition, the total duration of AU20 of fear was shorter under the lying condition than under the truth-telling condition. Further analysis showed that this shorter duration arose because the time from the peak to the offset of AU20 was shorter under the lying condition than under the truth-telling condition. The results also showed that facial movements around the eyes were more asymmetrical when people were telling lies. All the results suggest that facial clues can be used to detect deception and that fear could be a cue for distinguishing liars from truth-tellers.
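The classification step can be sketched as follows. The paper used WEKA; scikit-learn is used here purely as a stand-in, and the feature matrix and labels are placeholders for per-clip fear-AU statistics and lie/truth annotations.

```python
# Illustrative stand-in for the WEKA classification step described above.
# X would hold per-clip fear-AU features (e.g., AU1/AU2/AU4/AU5/AU7/AU20 statistics),
# y the lie (1) / truth (0) labels; random placeholders are used here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((60, 7))             # placeholder fear-AU feature matrix
y = rng.integers(0, 2, size=60)     # placeholder lie/truth labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)    # 10-fold cross-validated accuracy
print(f"mean accuracy: {scores.mean():.3f}")
```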


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on unobvious facial movements. Traditional methods often suffer from overfitting or incomplete information due to insufficient data and manual feature selection. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on both the overall features of the face and the trend of key parts. Video data are processed first: the ensemble of regression trees (ERT) method is used to obtain the overall contour of the face, and an attention model is then used to pick out the parts of the face that are more susceptible to expression changes. The combination of these two methods yields what we call a local feature map. The video data are then fed into MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selection of key parts better captures the changes in facial expressions caused by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. The experimental results show that MC-DCN achieves recognition rates of 95%, 78.6% and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
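The two-branch idea (a global face representation fused with a local key-parts representation) can be sketched schematically. This is not the published MC-DCN: the layer sizes, the 7-class output, and the use of single frames rather than sequences are simplifying assumptions.

```python
# Schematic two-branch network: global whole-face features and local key-part
# features are extracted in parallel and fused before classification.
import torch
import torch.nn as nn

class TwoBranchFER(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.global_branch = nn.Sequential(            # whole-face branch
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.local_branch = nn.Sequential(             # key-part (local feature map) branch
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(16 + 16, num_classes)

    def forward(self, global_img, local_img):
        g = self.global_branch(global_img)
        l = self.local_branch(local_img)
        return self.classifier(torch.cat([g, l], dim=1))   # fuse global + local features

# Example: logits = TwoBranchFER()(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 32, 32))
```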


2021 ◽  
Author(s):  
Harisu Abdullahi Shehu ◽  
William Browne ◽  
Hedwig Eisenbarth

Emotion categorization is the process of identifying different emotions in humans based on their facial expressions. It takes time, and human classifiers often find it hard to agree with each other on the emotion category of a facial expression. Machine learning classifiers, however, have performed well at classifying different emotions and have been widely used in recent years to facilitate emotion categorization. Much research on emotion video databases uses only a few frames taken at the peak of the expression to classify emotion, which may not give good classification accuracy when predicting frames where the emotion is less intense. In this paper, using the CK+ emotion dataset as an example, we use more frames to analyze emotion from mid and peak frame images and compare our results to a method using fewer peak frames. Furthermore, we propose an approach based on sequential voting and apply it to more frames of the CK+ database. Our approach achieves up to 85.9% accuracy for the mid frames and an overall accuracy of 96.5% for the CK+ database, compared with accuracies of 73.4% and 93.8% from existing techniques.
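The voting idea can be illustrated with a few lines of code. The paper's sequential voting scheme is only approximated here by a simple majority vote over per-frame predictions, which are assumed to have been produced by some frame-level classifier.

```python
# Toy sketch: combine per-frame emotion predictions for one sequence by majority vote.
from collections import Counter

def vote_sequence(frame_labels):
    """Return the emotion label that wins the vote over a frame sequence."""
    return Counter(frame_labels).most_common(1)[0][0]

# Example: vote_sequence(["neutral", "happy", "happy", "happy"]) -> "happy"
```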


1996 ◽  
Vol 2 (5) ◽  
pp. 383-391 ◽  
Author(s):  
Marcia C. Smith ◽  
Melissa K. Smith ◽  
Heiner Ellgring

Abstract: Spontaneous and posed emotional facial expressions in individuals with Parkinson's disease (PD, n = 12) were compared with those of healthy age-matched controls (n = 12). The intensity and amount of facial expression in PD patients were expected to be reduced for spontaneous but not posed expressions. Emotional stimuli were video clips selected from films, 2–5 min in duration, designed to elicit feelings of happiness, sadness, fear, disgust, or anger. Facial movements were coded using Ekman and Friesen's (1978) Facial Action Coding System (FACS). In addition, participants rated their emotional experience on 9-point Likert scales. The PD group showed significantly less overall facial reactivity than did controls when viewing the films. The predicted Group × Condition (spontaneous vs. posed) interaction effect on smile intensity was found when PD participants with more severe disease were compared with those with milder disease and with controls. In contrast, ratings of emotional experience were similar for both groups. Depression was positively associated with emotion ratings, but not with measures of facial activity. Spontaneous facial expression appears to be selectively affected in PD, whereas posed expression and emotional experience remain relatively intact. (JINS, 1996, 2, 383–391.)


2019 ◽  
Vol 9 (21) ◽  
pp. 4542 ◽  
Author(s):  
Marco Leo ◽  
Pierluigi Carcagnì ◽  
Cosimo Distante ◽  
Pier Luigi Mazzeo ◽  
Paolo Spagnolo ◽  
...  

The computational analysis of facial expressions is an emerging research topic that could overcome the limitations of human perception and provide quick, objective outcomes in the assessment of neurodevelopmental disorders (e.g., Autism Spectrum Disorders, ASD). Unfortunately, there have been only a few attempts to quantify facial expression production, and most of the scientific literature aims at the easier task of recognizing whether a facial expression is present or not. Some attempts to address this challenging task exist, but they do not provide a comprehensive study based on comparing human and automatic outcomes in quantifying children's ability to produce basic emotions. Furthermore, these works do not exploit the latest solutions in computer vision and machine learning, and they generally focus only on a group of individuals that is homogeneous in terms of cognitive capabilities. To fill this gap, in this paper advanced computer vision and machine learning strategies are integrated into a framework that computationally analyzes how both ASD and typically developing children produce facial expressions. The framework locates and tracks a number of landmarks (virtual electromyography sensors) in order to monitor the facial muscle movements involved in facial expression production. The output of these virtual sensors is then fused to model the individual's ability to produce facial expressions. The computational outcomes were correlated with evaluations provided by psychologists, and evidence is given that the proposed framework could be effectively exploited to analyze in depth the emotional competence of ASD children in producing facial expressions.
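The "virtual sensor" idea can be sketched roughly as follows, assuming facial landmarks have already been extracted per frame (e.g., with OpenFace or dlib). The muscle-activity proxy used here, frame-to-frame landmark displacement, is an illustrative simplification rather than the authors' fusion model.

```python
# Rough sketch: turn per-frame landmark positions into a motion signal per landmark,
# as a stand-in for the "virtual electromyography sensor" outputs described above.
import numpy as np

def virtual_sensor_signal(landmarks):
    """landmarks: array of shape (frames, points, 2) holding (x, y) per landmark."""
    diffs = np.diff(landmarks, axis=0)        # frame-to-frame displacement vectors
    return np.linalg.norm(diffs, axis=2)      # (frames-1, points) motion magnitude

# Example: signal = virtual_sensor_signal(np.random.rand(100, 68, 2))
```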


10.29007/v16j ◽  
2019 ◽  
Author(s):  
Daichi Naito ◽  
Ryo Hatano ◽  
Hiroyuki Nishiyama

Careless driving is the most common cause of traffic accidents. Drowsiness is one cause of careless driving and can lead to serious accidents. Therefore, in this study, we focus on predicting drowsy driving. Previous studies on predicting drowsy driving address the prediction aspect only. However, users have various demands, such as not wanting to wear a device while driving, and it is necessary to consider such demands when introducing a prediction system. Hence, our purpose is to predict drowsy driving in a way that can respond to a user's demands by combining two approaches: electroencephalogram (EEG) and facial expressions. Our method is divided into three parts by type of data (facial expressions, EEG, or both), and users can select the one that suits their demands. We acquire data with a depth camera and an electroencephalograph and build a machine-learning model to predict drowsy driving. As a result, prediction accuracy increases in the order facial expressions alone < EEG alone < both combined. Our framework may also be applicable to data other than EEG and facial expressions.
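The "choose the model that matches the user's demands" idea can be sketched as three classifiers trained on facial-expression features, EEG features, or both, with the user's preference selecting which one is applied. The feature layout and the scikit-learn classifier are assumptions, not the authors' implementation.

```python
# Toy sketch: one model per data type (face, EEG, both); the user picks the variant.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_models(face_X, eeg_X, y):
    both_X = np.hstack([face_X, eeg_X])
    return {
        "face": LogisticRegression(max_iter=1000).fit(face_X, y),
        "eeg":  LogisticRegression(max_iter=1000).fit(eeg_X, y),
        "both": LogisticRegression(max_iter=1000).fit(both_X, y),
    }

# Example: models = train_models(face_X, eeg_X, y); models["face"].predict(new_face_X)
```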


Author(s):  
Ravindra Kumar ◽  

Emotions play a powerful role in people's thinking and behavior. Emotions act as a compulsion to take action and can influence daily life decisions. Human facial expressions show that humans share the same set of emotions, and this observation gave rise to the concept of emotion-sensing facial recognition. Researchers have been working actively on computer vision algorithms that can determine the emotions of an individual and infer the set of intentions that accompany those emotions. Emotion-sensing facial expression systems are designed using data-centric machine learning techniques and accomplish their task through emotion identification and recognition of the intentions associated with the identified emotion.


2017 ◽  
Vol 2 (2) ◽  
pp. 130-134
Author(s):  
Jarot Dwi Prasetyo ◽  
Zaehol Fatah ◽  
Taufik Saleh

In recent years, interest has grown in the interaction between humans and computers. Facial expressions play a fundamental role in social interaction with other humans: in human-to-human communication, only 7% of the message is conveyed by the linguistic content, 38% by paralanguage, and 55% by facial expressions. Therefore, to make human-machine interfaces in multimedia products friendlier, facial expression recognition is very helpful for comfortable interaction. One of the steps that affects facial expression recognition is the accuracy of facial feature extraction. Several approaches to facial expression recognition do not consider the dimensionality of the data used as input features for machine learning. This research therefore proposes a wavelet algorithm to reduce the dimensionality of the feature data. The features are then classified using a multiclass SVM to distinguish the six facial expressions (anger, hatred, fear, happiness, sadness, and surprise) found in the JAFFE database. The resulting classification achieved 81.42% on the 208 data samples.
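The wavelet-plus-SVM pipeline can be sketched briefly, assuming face images are already cropped and grayscale. PyWavelets and scikit-learn stand in for the original implementation; the wavelet family, decomposition level, and SVM settings are assumptions.

```python
# Minimal sketch: reduce each face image to its low-frequency wavelet coefficients,
# then train a multiclass SVM on the reduced feature vectors.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(image, wavelet="haar", level=2):
    """Return the coarsest approximation sub-band of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    return coeffs[0].ravel()

# Placeholder data: 20 random 64x64 "faces" with 6 expression classes.
rng = np.random.default_rng(0)
X = np.array([wavelet_features(rng.random((64, 64))) for _ in range(20)])
y = rng.integers(0, 6, size=20)
clf = SVC(kernel="linear", decision_function_shape="ovr").fit(X, y)
```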


2015 ◽  
Vol 282 (1799) ◽  
pp. 20142288 ◽  
Author(s):  
Evelyne Lepron ◽  
Michaël Causse ◽  
Chlöé Farrer

Being held responsible for our actions strongly determines our moral judgements and decisions. This study examined whether responsibility also influences our affective reaction to others' emotions. We conducted two experiments in order to assess the effect of responsibility and of a sense of agency (the conscious feeling of controlling an action) on the empathic response to pain. In both experiments, participants were presented with video clips showing an actor's facial expression of pain of varying intensity. The empathic response was assessed with behavioural measures (estimation of pain intensity from facial expressions and ratings of unpleasantness for the observer) and electrophysiological measures (facial electromyography). Experiment 1 showed an enhanced empathic response (increased unpleasantness for the observer and facial electromyography responses) as participants' degree of responsibility for the actor's pain increased. This effect was mainly accounted for by the decisional component of responsibility (compared with the execution component). In addition, experiment 2 found that participants' unpleasantness rating also increased when they had a sense of agency over the pain, while controlling for decision and execution processes. The findings suggest that increased empathy induced by responsibility and a sense of agency may play a role in regulating our moral conduct.

