Nasal thermal activity during voluntary facial expression in a patient with chronic pain and alexithymia

2018, Vol 4, pp. 25
Author(s): David Alberto Rodriguez Medina, Benjamín Domínguez Trejo, Irving Armando Cruz Albarrán, Luis Morales Hernández, Gerardo Leija Alva, ...

The presence of alexithymia (difficulty in recognizing and expressing emotions and feelings) is one of the psychological factors studied in patients with chronic pain. Different psychological strategies have been used for its management; however, none of them regulates autonomic activity. We present the case of a 74-year-old female patient diagnosed with rheumatoid arthritis and alexithymia, who had been taking pregabalin for pain for twelve years. The main objective of this case study was to perform a biopsychosocial evaluation of pain: interleukin-6 concentration to assess inflammatory status, psychophysiological nasal thermal evaluation, and psychosocial measures associated with pain. The patient was shown videos with affective scenes covering various emotions (joy, sadness, fear, pain, anger). The results show that, when the patient observes the videos, there is little nasal thermal variability. However, when the facial movements of an expression are induced for 10 seconds, a thermal variation of around 1 °C is reached. The induced facial expressions that decrease the temperature are those of anger and pain, which coincide with the priority needs of the patient according to the biopsychosocial profile. The results are discussed in the clinical context of using facial expressions to promote autonomic regulation in this population.
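
For illustration, a minimal sketch of how the nasal thermal signal described above could be quantified, assuming thermal frames are available as NumPy arrays calibrated in °C and that a nasal region of interest has already been located (the fixed ROI coordinates and synthetic frames below are hypothetical stand-ins):

```python
import numpy as np

def nasal_temperature_series(frames, roi):
    """Mean nasal temperature per thermal frame.

    frames: iterable of 2-D arrays of temperatures in degrees Celsius
    roi:    (top, bottom, left, right) bounds of the nasal region
            (hypothetical fixed box; in practice it would come from
            a face tracker on the thermal image)
    """
    top, bottom, left, right = roi
    return np.array([f[top:bottom, left:right].mean() for f in frames])

def thermal_variation(series):
    """Peak-to-peak variation of the series; the case report describes
    a variation of around 1 degree C during induced expressions."""
    return series.max() - series.min()

# Demo on synthetic data: 300 frames of a 120x160 thermal image.
rng = np.random.default_rng(0)
frames = [34.0 + 0.05 * rng.standard_normal((120, 160)) for _ in range(300)]
series = nasal_temperature_series(frames, roi=(50, 70, 70, 90))
print(f"variation: {thermal_variation(series):.2f} deg C")
```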

2021, Vol 11 (4), pp. 1428
Author(s): Haopeng Wu, Zhiying Lu, Jianfeng Zhang, Xin Li, Mingyue Zhao, ...

This paper addresses the problem of Facial Expression Recognition (FER), focusing on subtle facial movements. Traditional methods often overfit or capture incomplete information because of insufficient data and manual feature selection. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), attends both to the overall features of the face and to the behavior of its key parts. The first stage processes the video data: an ensemble of regression trees (ERT) is used to obtain the overall contour of the face, and an attention model then picks out the parts of the face most susceptible to expression changes. The combined effect of these two methods yields an image that can be called a local feature map. The video data are then fed to MC-DCN, which contains parallel sub-networks: while the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selected key parts better capture the changes brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. The experimental results show that MC-DCN achieves recognition rates of 95%, 78.6% and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
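
The global-plus-local idea can be sketched as follows; this is an illustrative PyTorch skeleton in the spirit of MC-DCN, not the authors' architecture (layer sizes, input shapes, and the seven-class output are assumptions):

```python
import torch
import torch.nn as nn

class TwoBranchFER(nn.Module):
    """Sketch of a two-branch network: one sub-network sees the whole
    face, the other a cropped local feature map of expression-sensitive
    parts; their features are concatenated before classification."""

    def __init__(self, n_classes=7):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.global_branch = branch()   # whole-face frames
        self.local_branch = branch()    # attention-selected key parts
        self.classifier = nn.Linear(64, n_classes)  # 32 + 32 features

    def forward(self, face, local_map):
        feats = torch.cat([self.global_branch(face),
                           self.local_branch(local_map)], dim=1)
        return self.classifier(feats)

model = TwoBranchFER()
face = torch.randn(4, 1, 96, 96)       # batch of grayscale face crops
local_map = torch.randn(4, 1, 48, 48)  # batch of local feature maps
print(model(face, local_map).shape)    # torch.Size([4, 7])
```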


2019, Vol 9 (11), pp. 2218
Author(s): Maria Grazia Violante, Federica Marcolin, Enrico Vezzetti, Luca Ulrich, Gianluca Billia, ...

This study proposes a novel quality function deployment (QFD) design methodology based on customers' emotions conveyed by facial expressions. Current advances in pattern recognition related to face recognition techniques have fostered cross-fertilization between this context and other fields, such as product design and human-computer interaction. In particular, current technologies for monitoring human emotions have supported the birth of advanced emotional design techniques, whose main focus is to feed users' emotional feedback into the design of novel products. As quality function deployment aims at transforming the voice of customers into engineering features of a product, it appears to be an appropriate and promising nest in which to embed users' emotional feedback via new emotional design methodologies such as facial expression recognition. The present methodology thus consists of interviewing the user while acquiring his/her face with a depth camera (providing three-dimensional (3D) data), clustering the face information into different emotions with a support vector machine classifier, and assigning weights to customers' needs based on the detected facial expressions. The proposed method has been applied to a case study in the context of agriculture and validated by a consortium. The approach appears sound and capable of collecting the unconscious feedback of the interviewee.
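
The classification-and-weighting step can be sketched as below, assuming face descriptors have already been extracted from the depth data; the feature dimension, emotion labels, weighting scheme, and the two example needs are hypothetical stand-ins, and scikit-learn's SVC replaces whatever SVM implementation the authors used:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical 3D-face descriptors (e.g., depth-map features) with
# emotion labels; in the study these come from a depth camera.
rng = np.random.default_rng(1)
X_train = rng.standard_normal((120, 64))
y_train = rng.integers(0, 3, 120)          # 0=neutral, 1=positive, 2=negative
clf = SVC(kernel="rbf").fit(X_train, y_train)

# One interview: frames acquired while the user discusses each customer
# need; the detected emotions weight that need in the QFD matrix.
emotion_weight = {0: 1.0, 1: 2.0, 2: 0.5}  # illustrative weighting scheme
frames_per_need = {"ease of use": rng.standard_normal((30, 64)),
                   "durability":  rng.standard_normal((30, 64))}
for need, frames in frames_per_need.items():
    labels = clf.predict(frames)
    weight = np.mean([emotion_weight[l] for l in labels])
    print(f"{need}: weight {weight:.2f}")
```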


2021, Vol 12
Author(s): Xunbing Shen, Gaojie Fan, Caoyuan Niu, Zhencai Chen

High stakes can be stressful whether one is telling the truth or lying. However, liars can feel extra fear from worrying about being discovered, and according to the “leakage theory,” this fear is almost impossible to repress. We therefore assumed that analyzing facial expressions of fear could reveal deceit. Detecting and analyzing the subtle leaked fear expressions is a challenging task for laypeople; it is, however, a relatively easy job for computer vision and machine learning. To test the hypothesis, we analyzed video clips from the game show “The Moment of Truth” using OpenFace (to output the Action Units (AUs) of fear and face landmarks) and WEKA (to classify the video clips in which the players were lying or telling the truth). The results showed that some algorithms achieved an accuracy of >80% using AUs of fear alone. In addition, the total duration of AU20 was found to be shorter under the lying condition than under the truth-telling condition. Further analysis found that this was because the time window from peak to offset of AU20 was shorter under the lying condition than under the truth-telling condition. The results also showed that facial movements around the eyes were more asymmetrical when people were telling lies. All the results suggest that facial cues can be used to detect deception, and fear could be a cue for distinguishing liars from truth-tellers.
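
A minimal sketch of the AU20-duration feature, assuming per-frame output in OpenFace's CSV format (OpenFace does write a binary presence column AU20_c and an intensity column AU20_r); the 30 fps rate and the synthetic frame values are assumptions, and at the classification stage a scikit-learn classifier would stand in for WEKA:

```python
import pandas as pd

def au20_total_duration(df, fps=30.0):
    """Total time (s) AU20 (lip stretcher, a fear-related AU) is present
    in one clip. Column names can carry a leading space depending on the
    OpenFace version, hence the strip."""
    df = df.rename(columns=str.strip)
    return float((df["AU20_c"] > 0.5).sum()) / fps

# Synthetic stand-in for pd.read_csv("clip.csv") on OpenFace output:
clip = pd.DataFrame({"AU20_c": [0, 1, 1, 1, 0, 0, 1, 0]})
print(f"{au20_total_duration(clip):.2f} s")  # 4 active frames / 30 fps
```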


2020, Vol 13 (3), pp. 55-73
Author(s): V.A. Barabanschikov, O.A. Korolkova

The article provides a review of experimental studies of interpersonal perception based on static and dynamic facial expressions as a unique source of information about a person's inner world. The focus is on the patterns of perception of a moving face as part of communication and joint activity (an alternative to the more commonly studied perception of static images of a person outside a behavioral context). The review covers four interrelated topics: facial statics and dynamics in the recognition of emotional expressions; the specificity of perception of moving facial expressions; multimodal integration of emotional cues; and the generation and perception of facial expressions in communication. The analysis identifies the most promising areas of research on the face in motion. We show that the static and dynamic modes of facial perception complement each other, and describe the role of qualitative features of facial expression dynamics in assessing a person's emotional state. Facial expression is considered part of a holistic multimodal manifestation of emotions. The importance of facial movements as an instrument of social interaction is emphasized.


2019, Vol 8 (2), pp. 2728-2740

Facial expressions are facial changes that reflect a person's internal emotional states, intentions, or social communications; they are investigated by computer systems that attempt to automatically analyze and recognize facial movements and facial feature changes from visual data. Facial expression recognition is sometimes confused with emotion analysis in the computer vision domain, which leads to inadequate support for the stages of the recognition process, namely face detection, feature extraction, and expression recognition, and in turn to problems of handling occlusions, illumination changes, pose variations, recognition, dimensionality reduction, and so on. In addition, appropriate computation and accurate prediction of outcomes also improve the performance of facial expression recognition. Hence, a detailed study was required of the strategies and systems used to solve these problems during face detection, feature extraction, and expression recognition. This paper therefore presents various current strategies and then critically reviews the efforts of different researchers in the area of facial expression recognition.


2009, Vol 105 (1), pp. 232-234
Author(s): Ayumu Goukon, Toru Suzuki, Kazuhito Noguchi

This study describes the case of HY, a man, now 25 years old, who lived in a persistent vegetative state for 6 years after encephalitis at the age of 10. He was reportedly impaired at recognizing fear and, in everyday life, apparently had impaired recognition of anger as well. In testing with facial expressions, no obvious differences were found between HY and normal controls in the perception of anger. In this study, Japanese and Caucasian models of facial expression were used; on these tests, HY was impaired at recognizing facial expressions of anger only in the Japanese models.


2021
Author(s): Xunbing Shen, Gaojie Fan, Caoyuan Niu, Zhencai Chen

The leakage theory in the field of deception detection predicts that liars cannot repress leaked felt emotions (e.g., fear or delight), and that people who are lying will feel fear (of being discovered), especially in high-stakes situations. We therefore assumed that deceit could be revealed by analyzing facial expressions of fear. Detecting and analyzing the subtle leaked fear expressions is a challenging task for laypeople; it is, however, a relatively easy job for computer vision and machine learning. To test the hypothesis, we analyzed video clips from the game show “The Moment of Truth” using OpenFace (to output the Action Units of fear and face landmarks) and WEKA (to classify the video clips in which the players were lying or telling the truth). The results showed that some algorithms could achieve an accuracy greater than 80% using AUs of fear alone. In addition, the total duration of AU20 was found to be shorter under the lying condition than under the truth-telling condition. Further analysis found that this was because the duration from peak to offset of AU20 was shorter under the lying condition than under the truth-telling condition. The results also showed that facial movements around the eyes were more asymmetrical while people were telling lies. All the results suggest that facial clues to deception do exist, and that fear could be a cue for distinguishing liars from truth-tellers.
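
The eye-asymmetry observation suggests a simple landmark-based measure. The sketch below assumes 68-point landmark tracks (OpenFace's convention, in which points 36-47 outline the two eyes) and uses a displacement-based asymmetry index that is an assumption for illustration, not necessarily the authors' exact metric:

```python
import numpy as np

# 68-point landmark convention used by OpenFace: points 36-41 outline
# one eye, 42-47 the other.
LEFT_EYE, RIGHT_EYE = range(36, 42), range(42, 48)

def eye_movement_asymmetry(landmarks):
    """Asymmetry of movement around the eyes across a clip.

    landmarks: array of shape (frames, 68, 2) with (x, y) per point.
    Returns |left - right| / (left + right), where each side is the
    mean frame-to-frame displacement of its eye landmarks.
    """
    disp = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)  # (frames-1, 68)
    left = disp[:, LEFT_EYE].mean()
    right = disp[:, RIGHT_EYE].mean()
    return abs(left - right) / (left + right)

rng = np.random.default_rng(2)
clip = rng.standard_normal((100, 68, 2)).cumsum(axis=0)  # synthetic tracks
print(f"asymmetry index: {eye_movement_asymmetry(clip):.3f}")
```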


2021, Vol 16 (1), pp. 95-101
Author(s): Dibakar Raj Pant, Rolisha Sthapit

Facial expressions are produced by the actions of facial muscles located in different facial regions. These expressions are of two types, macro- and micro-expressions; the second is the more important in computer vision. Analysis of micro-expressions, categorized as disgust, happiness, anger, sadness, surprise, contempt, and fear, is challenging because of the very fast and subtle facial movements involved. This article combines one machine learning method, the Haar cascade, with two deep learning methods, a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN), to perform micro-facial expression recognition. First, a Haar Cascade Classifier is used to detect the face as a pre-processing step. Secondly, the detected faces are passed through a series of Convolutional Neural Network (CNN) layers for feature extraction. Thirdly, a Recurrent Neural Network (RNN) classifies the micro-facial expressions. Two datasets are used for training and testing of the proposed method: the Chinese Academy of Sciences Micro-Expression II (CASME II) and the Spontaneous Actions and Micro-Movements (SAMM) databases. The test accuracies on SAMM and CASME II are 84.76% and 87%, respectively. In addition, the distinction between micro-facial expressions and non-micro-facial expressions is analyzed via the ROC curve.
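
The three-stage pipeline (Haar detection, CNN features, RNN classification) can be sketched as follows. OpenCV's bundled frontal-face cascade is real, but the network's layer sizes, sequence length, and class count are illustrative assumptions, not the authors' architecture:

```python
import cv2
import torch
import torch.nn as nn

# Stage 1: Haar cascade face detection (pre-processing).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray_frame):
    boxes = cascade.detectMultiScale(gray_frame, 1.1, 5)
    return boxes[0] if len(boxes) else None  # (x, y, w, h) of first face

# Stages 2-3: CNN features per frame, RNN over the frame sequence.
class MicroExprNet(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())       # 16*4*4 = 256
        self.rnn = nn.LSTM(256, 64, batch_first=True)
        self.out = nn.Linear(64, n_classes)

    def forward(self, clips):                # (batch, time, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)          # last hidden state per clip
        return self.out(h[-1])

print(MicroExprNet()(torch.randn(2, 20, 1, 64, 64)).shape)  # [2, 7]
```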


2020
Author(s): Jonathan Yi, Philip Pärnamets, Andreas Olsson

Responding appropriately to others' facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants' face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry expressions. Fifty-eight participants learned by trial and error to avoid aversive stimulation by either reciprocating (congruent) or responding opposite to (incongruent) the expression of the target face. Our results validated the method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials when confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation of our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method for studying learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
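
The trial-and-error learning component can be illustrated with a minimal Q-learning simulation of the task structure; the learning rate, inverse temperature, and reward coding below are assumptions for illustration, not the parameters of the authors' reinforcement learning models:

```python
import numpy as np

# On each trial the participant sees a happy (0) or angry (1) face and
# selects a facial response (smile=0 or frown=1); only one mapping,
# fixed per block, avoids the aversive outcome.
rng = np.random.default_rng(3)
alpha, beta = 0.3, 5.0              # learning rate, inverse temperature
Q = np.zeros((2, 2))                # Q[face, response]
correct_response = {0: 0, 1: 1}     # congruent block: reciprocate

for trial in range(100):
    face = rng.integers(2)
    p_smile = 1 / (1 + np.exp(-beta * (Q[face, 0] - Q[face, 1])))
    response = 0 if rng.random() < p_smile else 1
    reward = 1.0 if response == correct_response[face] else -1.0
    Q[face, response] += alpha * (reward - Q[face, response])  # RW update

print(np.round(Q, 2))  # values for the correct mappings rise with learning
```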


2020
Author(s): Joshua W Maxwell, Eric Ruthruff, Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic: the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free: identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.
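
The BCE itself is a simple contrast on Task-1 response times: a positive BCE means faster Task-1 responses when the upcoming Task-2 face is congruent, implying the face was processed before the bottleneck. A minimal sketch, using fabricated illustrative RTs (not the study's data) and assuming congruence is coded per trial:

```python
import pandas as pd

# Hypothetical trial data: Task-1 (sound) response times in ms, with
# Task-2 congruence coded per trial. Values are for illustration only.
trials = pd.DataFrame({
    "task1_rt": [612, 650, 598, 640, 605, 661, 590, 648],
    "task2_congruent": [True, False, True, False, True, False, True, False],
})
means = trials.groupby("task2_congruent")["task1_rt"].mean()
bce = means[False] - means[True]  # incongruent minus congruent Task-1 RT
print(f"backward correspondence effect: {bce:.0f} ms")
```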

