Catching a Liar Through Facial Expression of Fear

2021 ◽  
Vol 12 ◽  
Author(s):  
Xunbing Shen ◽  
Gaojie Fan ◽  
Caoyuan Niu ◽  
Zhencai Chen

High stakes can be stressful whether one is telling the truth or lying. However, liars may feel additional fear of being discovered compared with truth-tellers, and according to the "leakage theory," this fear is almost impossible to repress. We therefore assumed that analyzing the facial expression of fear could reveal deceit. Detecting and analyzing subtle leaked fear expressions is a challenging task for laypeople; it is, however, a relatively easy job for computer vision and machine learning. To test the hypothesis, we analyzed video clips from the game show "The Moment of Truth" using OpenFace (to output the fear-related Action Units (AUs) and face landmarks) and WEKA (to classify the clips in which players were lying or telling the truth). The results showed that some algorithms achieved an accuracy of over 80% using AUs of fear alone. In addition, the total duration of AU20 was shorter under the lying condition than under the truth-telling condition. Further analysis showed that this shorter duration arose because the interval from the peak to the offset of AU20 was shorter when lying than when telling the truth. The results also showed that facial movements around the eyes were more asymmetrical when people were telling lies. Together, the results suggest that facial cues can be used to detect deception, and that fear could serve as a cue for distinguishing liars from truth-tellers.
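The OpenFace-plus-WEKA pipeline described above can be approximated in a few lines. The sketch below is a minimal illustration, not the authors' code: scikit-learn stands in for WEKA, and the fear-related AU columns, file paths, and label file are assumptions about how the OpenFace output might be organized.

```python
# Minimal sketch of the AU-based classification pipeline described above.
# scikit-learn stands in for WEKA here; the clip paths, label file, and the
# exact set of fear-related AU columns are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# OpenFace writes one CSV per clip with per-frame AU intensity columns ("AU20_r", ...).
FEAR_AUS = ["AU01_r", "AU02_r", "AU04_r", "AU05_r", "AU07_r", "AU20_r", "AU26_r"]

def clip_features(csv_path):
    """Collapse per-frame AU intensities into one feature vector per clip."""
    frames = pd.read_csv(csv_path)
    frames.columns = frames.columns.str.strip()           # OpenFace pads column names
    stats = frames[FEAR_AUS].agg(["mean", "max", "std"])  # simple summary statistics
    return stats.to_numpy().ravel()

# labels.csv (hypothetical): one row per clip with columns "csv" and "lying" (0/1).
labels = pd.read_csv("labels.csv")
X = [clip_features(path) for path in labels["csv"]]
y = labels["lying"]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```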

2021 ◽  
Author(s):  
Xunbing Shen ◽  
Gaojie Fan ◽  
Caoyuan Niu ◽  
Zhencai Chen

Abstract: The leakage theory in the field of deception detection predicts that liars cannot fully repress leaked felt emotions (e.g., fear or delight), and that people who are lying will feel fear of being discovered, especially in high-stakes situations. We therefore assumed that deceit could be revealed by analyzing the facial expression of fear. Detecting and analyzing subtle leaked fear expressions is a challenging task for laypeople; it is, however, a relatively easy job for computer vision and machine learning. To test the hypothesis, we analyzed video clips from the game show "The Moment of Truth" using OpenFace (to output the fear-related Action Units and face landmarks) and WEKA (to classify the clips in which players were lying or telling the truth). The results showed that some algorithms achieved an accuracy of greater than 80% using AUs of fear alone. In addition, the total duration of AU20 was shorter under the lying condition than under the truth-telling condition. Further analysis found that this shorter duration was due to the interval from the peak to the offset of AU20 being shorter when lying than when telling the truth. The results also showed that facial movements around the eyes were more asymmetrical while people were telling lies. Together, the results suggest that facial clues to deception do exist, and that fear could be a cue for distinguishing liars from truth-tellers.
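The peak-to-offset analysis of AU20 can be illustrated with a small timing routine. This is a hedged sketch under assumed conventions (an activation threshold on the OpenFace AU20_r intensity track and a fixed frame rate), not the procedure actually used in the paper.

```python
# Illustrative sketch of the AU20 timing analysis: given a per-frame AU20
# intensity track from OpenFace, find an activation episode and split its
# duration into onset-to-peak and peak-to-offset segments. The activation
# threshold and frame rate below are assumptions, not values from the paper.
import numpy as np

def au20_segments(au20_intensity, fps=30.0, threshold=1.0):
    """Return (onset_to_peak_s, peak_to_offset_s) for the first episode above threshold."""
    active = np.asarray(au20_intensity) > threshold
    if not active.any():
        return 0.0, 0.0
    onset = int(np.argmax(active))                      # first frame above threshold
    rest = np.where(~active[onset:])[0]
    offset = onset + (int(rest[0]) if rest.size else len(active) - onset) - 1
    peak = onset + int(np.argmax(au20_intensity[onset:offset + 1]))
    return (peak - onset) / fps, (offset - peak) / fps

# Example with a synthetic intensity track rising to a peak and decaying quickly.
track = [0, 0.5, 1.2, 2.5, 3.0, 1.8, 0.8, 0.2]
print(au20_segments(track))
```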


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on unobvious facial movements. Traditional methods often suffer from overfitting or incomplete information due to insufficient data and manual feature selection. Our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), instead maintains focus on the overall features of the face and on the trends of its key parts. The first stage is the processing of the video data. The ensemble of regression trees (ERT) method is used to obtain the overall contour of the face, and an attention model is then used to pick out the parts of the face that are most susceptible to expression changes. The combined effect of these two methods yields an image that can be called a local feature map. The video data are then fed to the MC-DCN, which contains parallel sub-networks: while the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selection of key parts allows the network to better learn the changes brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. Experimental results show that MC-DCN achieves recognition rates of 95%, 78.6% and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
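A schematic of the global-plus-local fusion idea can be sketched as a two-branch network. The PyTorch snippet below is illustrative only: the layer sizes, fusion by concatenation, and number of classes are assumptions and do not reproduce the published MC-DCN.

```python
# Schematic two-branch network in the spirit of the global/local fusion idea
# described above. Layer sizes, the fusion strategy, and the number of classes
# are illustrative; this is not a reimplementation of the published MC-DCN.
import torch
import torch.nn as nn

class TwoBranchFER(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        # Global branch: whole-face frames.
        self.global_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Local branch: cropped key-part patches (e.g., eyes, mouth).
        self.local_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(16 + 16, num_classes)

    def forward(self, whole_face, key_parts):
        fused = torch.cat([self.global_branch(whole_face),
                           self.local_branch(key_parts)], dim=1)
        return self.classifier(fused)

model = TwoBranchFER()
logits = model(torch.randn(2, 3, 96, 96), torch.randn(2, 3, 48, 48))
print(logits.shape)  # torch.Size([2, 7])
```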


1996 ◽  
Vol 2 (5) ◽  
pp. 383-391 ◽  
Author(s):  
Marcia C. Smith ◽  
Melissa K. Smith ◽  
Heiner Ellgring

Abstract: Spontaneous and posed emotional facial expressions in individuals with Parkinson's disease (PD, n = 12) were compared with those of healthy age-matched controls (n = 12). The intensity and amount of facial expression in PD patients were expected to be reduced for spontaneous but not posed expressions. Emotional stimuli were video clips selected from films, 2–5 min in duration, designed to elicit feelings of happiness, sadness, fear, disgust, or anger. Facial movements were coded using Ekman and Friesen's (1978) Facial Action Coding System (FACS). In addition, participants rated their emotional experience on 9-point Likert scales. The PD group showed significantly less overall facial reactivity than did controls when viewing the films. The predicted Group × Condition (spontaneous vs. posed) interaction effect on smile intensity was found when PD participants with more severe disease were compared with those with milder disease and with controls. In contrast, ratings of emotional experience were similar for both groups. Depression was positively associated with emotion ratings, but not with measures of facial activity. Spontaneous facial expression appears to be selectively affected in PD, whereas posed expression and emotional experience remain relatively intact. (JINS, 1996, 2, 383–391.)


2015 ◽  
Vol 282 (1799) ◽  
pp. 20142288 ◽  
Author(s):  
Evelyne Lepron ◽  
Michaël Causse ◽  
Chlöé Farrer

Being held responsible for our actions strongly determines our moral judgements and decisions. This study examined whether responsibility also influences our affective reactions to others' emotions. We conducted two experiments to assess the effect of responsibility and of a sense of agency (the conscious feeling of controlling an action) on the empathic response to pain. In both experiments, participants were presented with video clips showing an actor's facial expression of pain of varying intensity. The empathic response was assessed with behavioural measures (estimates of pain intensity from facial expressions and ratings of unpleasantness for the observer) and an electrophysiological measure (facial electromyography). Experiment 1 showed an enhanced empathic response (increased unpleasantness for the observer and larger facial electromyography responses) as participants' degree of responsibility for the actor's pain increased. This effect was mainly accounted for by the decisional component of responsibility (compared with the execution component). In addition, experiment 2 found that participants' unpleasantness ratings also increased when they had a sense of agency over the pain, while controlling for decision and execution processes. The findings suggest that increased empathy induced by responsibility and a sense of agency may play a role in regulating our moral conduct.


2020 ◽  
Vol 13 (3) ◽  
pp. 55-73
Author(s):  
V.A. Barabanschikov ◽  
O.A. Korolkova

The article provides a review of experimental studies of interpersonal perception based on static and dynamic facial expressions as a unique source of information about a person's inner world. The focus is on the patterns of perception of a moving face embedded in communication and joint activity (an alternative to the most commonly studied perception of static images of a person outside of a behavioral context). The review covers four interrelated topics: facial statics and dynamics in the recognition of emotional expressions; the specificity of perceiving moving facial expressions; multimodal integration of emotional cues; and the generation and perception of facial expressions in communication. The analysis identifies the most promising areas of research on the face in motion. We show that the static and dynamic modes of facial perception complement each other, and we describe the role of qualitative features of facial expression dynamics in assessing a person's emotional state. Facial expression is considered part of a holistic multimodal manifestation of emotion. The importance of facial movements as an instrument of social interaction is emphasized.


2018 ◽  
Author(s):  
Fraser W. Smith ◽  
Marie L Smith

Abstract: Faces transmit a wealth of important social signals. While previous studies have elucidated the network of cortical regions important for the perception of facial expression, and the associated temporal components such as the P100, N170 and EPN, it is still unclear how task constraints may shape the representation of facial expression (or other face categories) in these networks. In the present experiment, we investigate the neural information available across time about two important face categories (expression and identity) when those categories are perceived under either explicit (e.g. decoding emotion when the task is on emotion) or implicit task contexts (e.g. decoding emotion when the task is on identity). Decoding of both face categories, across both task contexts, peaked in a 100–200 ms time-window post-stimulus (across posterior electrodes). Peak decoding of expression, however, was not affected by task context, whereas peak decoding of identity was significantly reduced under implicit processing conditions. In addition, errors in EEG decoding correlated with errors in behavioral categorization under explicit processing for both expression and identity, but under implicit processing only for expression. Despite these differences, decoding time-courses and the spatial pattern of informative electrodes differed consistently for both tasks across explicit vs. implicit face processing. Finally, our results show that information about both face identity and facial expression is available around the N170 time-window at lateral occipito-temporal sites. Taken together, these results reveal differences and commonalities in the processing of face categories under explicit vs. implicit task contexts and suggest that facial expressions are processed to a richer degree even under implicit processing conditions, consistent with prior work indicating the relative automaticity with which emotion is processed. Our work further demonstrates the utility of applying multivariate decoding analyses to EEG for revealing the dynamics of face perception.
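Time-resolved multivariate decoding of the kind used here is commonly implemented by training a classifier independently at each time point across trials. The following sketch uses scikit-learn on simulated data; the epoch dimensions, the logistic-regression classifier, and the cross-validation scheme are assumptions, not the authors' analysis pipeline.

```python
# Minimal sketch of time-resolved decoding as described above: train a linear
# classifier independently at each time point across trials. The epoch array
# (trials x channels x time points) and labels here are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 120, 64, 50
epochs = rng.normal(size=(n_trials, n_channels, n_times))  # stand-in for real EEG epochs
labels = rng.integers(0, 2, size=n_trials)                 # e.g., expression A vs. B

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([
    cross_val_score(clf, epochs[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding accuracy:", accuracy.max(), "at sample", accuracy.argmax())
```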


2019 ◽  
Vol 8 (2) ◽  
pp. 2728-2740 ◽  

Facial expressions are changes in the face that reflect a person's internal emotional states, intentions, or social communication, and they are analyzed by computer systems that attempt to automatically examine and recognize facial movements and facial feature changes from visual data. Facial expression recognition has sometimes been confused with emotion analysis in the computer vision domain, which leads to inadequate support for the stages of the recognition process, namely face detection, feature extraction, and expression recognition, and in turn to problems such as occlusion, illumination, pose variation, recognition accuracy, dimensionality reduction, and so on. In addition, appropriate computation and accurate prediction of results further improve the performance of facial expression recognition. Hence, a detailed study was required of the strategies and systems used to address these problems during face detection, feature extraction, and expression recognition. The paper therefore presents various current strategies and then critically reviews the work of different researchers in the area of Facial Expression Recognition.


2018 ◽  
Vol 4 ◽  
pp. 25
Author(s):  
David Alberto Rodriguez Medina ◽  
Benjamín Domínguez Trejo ◽  
Irving Armando Cruz Albarrán ◽  
Luis Morales Hernández ◽  
Gerardo Leija Alva ◽  
...  

The presence of alexithymia (difficulty recognizing and expressing emotions and feelings) is one of the psychological factors studied in patients with chronic pain. Different psychological strategies have been used for its management; however, none of them regulates autonomic activity. We present the case of a 74-year-old female patient diagnosed with rheumatoid arthritis and alexithymia. For twelve years she had been taking pregabalin for pain. The main objective of this case study was to perform a biopsychosocial evaluation of pain (interleukin-6 concentration to assess the inflammatory component, psychophysiological nasal thermal evaluation, and psychosocial measures associated with pain). The patient was shown videos with affective scenes of various emotions (joy, sadness, fear, pain, anger). The results show that when the patient watches the videos, there is little nasal thermal variability. However, when the facial movements of a facial expression are induced for 10 seconds, a thermal variation of around 1 °C is reached. The induced facial expressions that decrease the temperature are those of anger and pain, which coincide with the priority needs of the patient according to the biopsychosocial profile. The results are discussed in the clinical context of using facial expressions to promote autonomic regulation in this population.


2021 ◽  
Vol 16 (1) ◽  
pp. 95-101
Author(s):  
Dibakar Raj Pant ◽  
Rolisha Sthapit

Facial expressions are produced by the actions of the facial muscles located in different facial regions. These expressions are of two types, macro- and micro-expressions, and the second is the more important in computer vision. Analysis of micro-expressions, categorized as disgust, happiness, anger, sadness, surprise, contempt, and fear, is challenging because of the very fast and subtle facial movements involved. This article applies one machine learning method, the Haar cascade, and two deep learning methods, a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN), to the recognition of micro facial expressions. First, a Haar cascade classifier is used to detect the face as a pre-processing step. Secondly, the detected faces are passed through a series of Convolutional Neural Network (CNN) layers for feature extraction. Thirdly, a Recurrent Neural Network (RNN) classifies the micro facial expressions. Two datasets are used for training and testing the proposed method: the Chinese Academy of Sciences Micro-Expression II (CASME II) and the Spontaneous Actions and Micro-Movements (SAMM) databases. The test accuracies on SAMM and CASME II are 84.76% and 87%, respectively. In addition, the distinction between micro facial expressions and non-micro facial expressions is analyzed with an ROC curve.
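The detection-CNN-RNN pipeline can be outlined with OpenCV and PyTorch. The sketch below is a schematic stand-in: the Haar cascade call is standard OpenCV, but the crop size, CNN depth, and GRU classifier are illustrative assumptions rather than the architecture used in the article.

```python
# Schematic version of the pipeline described above: Haar cascade face
# detection, a small CNN per frame, and a recurrent layer over the frame
# sequence. Layer sizes and the GRU choice are illustrative assumptions.
import cv2
import torch
import torch.nn as nn

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray_frame):
    """Return the first detected face crop resized to 64x64, or None."""
    faces = face_cascade.detectMultiScale(gray_frame, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray_frame[y:y + h, x:x + w], (64, 64))

class CnnRnnClassifier(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2), nn.Flatten(),            # -> 8 * 32 * 32 features per frame
        )
        self.rnn = nn.GRU(8 * 32 * 32, 64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, clip):                          # clip: (batch, frames, 1, 64, 64)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.reshape(b * t, 1, 64, 64)).reshape(b, t, -1)
        _, hidden = self.rnn(feats)
        return self.head(hidden[-1])

logits = CnnRnnClassifier()(torch.randn(2, 16, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 7])
```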


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others' facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants' face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruent) or responding opposite (incongruent) to the expression of the target face. Our results validated the method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials in which they were confronted with smiling, as compared to frowning, faces, suggesting that it may be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method for studying learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
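The reinforcement learning component mentioned at the end can be illustrated with a toy value-update model. The snippet below simulates trial-and-error avoidance learning with a softmax choice rule; the two-action setup, learning rate, inverse temperature, and reinforcement probabilities are assumptions for illustration, not parameters from the study.

```python
# Toy sketch of the kind of reinforcement learning model mentioned above:
# action values updated by a prediction error, choices made by softmax.
# The two-action setup, learning rate, and temperature are assumptions,
# not the parameters reported in the study.
import numpy as np

rng = np.random.default_rng(0)

def simulate_agent(n_trials=100, alpha=0.2, beta=3.0, p_correct_avoids_shock=0.9):
    q = np.zeros(2)                       # values of "reciprocate" vs. "respond opposite"
    correct_action = 0                    # e.g., reciprocating is reinforced in this block
    choices, outcomes = [], []
    for _ in range(n_trials):
        p = np.exp(beta * q) / np.exp(beta * q).sum()    # softmax choice rule
        a = rng.choice(2, p=p)
        avoided = rng.random() < (p_correct_avoids_shock if a == correct_action else 0.1)
        r = 1.0 if avoided else -1.0                     # avoiding aversive stimulation is rewarded
        q[a] += alpha * (r - q[a])                       # prediction-error update
        choices.append(a)
        outcomes.append(r)
    return choices, outcomes

choices, _ = simulate_agent()
print("proportion of reinforced action in last 20 trials:",
      np.mean(np.array(choices[-20:]) == 0))
```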

