Detection and Recognition of Fearful Facial Expressions During the Coronavirus Disease (COVID-19) Pandemic in an Italian Sample: An Online Experiment

2020 ◽  
Vol 11 ◽  
Author(s):  
Federica Scarpina


Author(s):  
Ramadan TH. Hasan ◽  
Amira Bibo Sallow

Intel's OpenCV is a free and open-source image- and video-processing library. It is used in computer vision tasks such as feature and object recognition and machine learning. This paper presents the main OpenCV modules and features, and the use of OpenCV with Python. The paper also presents common OpenCV applications and the classifiers used in them, covering image processing, face detection, face recognition, and object detection. Finally, we review the literature on OpenCV applications in computer vision, such as face detection and recognition, recognition of facial expressions such as sadness, anger, and happiness, and recognition of a person's gender and age.


PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0254438
Author(s):  
Federica Scarpina ◽  
Marco Godi ◽  
Stefano Corna ◽  
Ionathan Seitanidis ◽  
Paolo Capodaglio ◽  
...  

Evidence about psychological functioning in individuals who survived COVID-19 infection is still rare in the literature. In this paper, we investigated the recognition of fearful facial expressions as a behavioural means to assess psychological functioning. From May 15th, 2020 to January 30th, 2021, we enrolled sixty Italian individuals admitted to multiple Italian COVID-19 post-intensive care units. The detection and recognition of fearful facial expressions were assessed through an experimental task grounded on an attentional mechanism (i.e., the redundant target effect). According to the results, our participants showed altered behaviour in detecting and recognizing fearful expressions. Specifically, their performance was in disagreement with the expected behavioural effect. Our study suggests altered processing of fearful expressions in individuals who survived COVID-19 infection. Such a difficulty might represent a crucial sign of psychological distress, and it should be addressed in tailored psychological interventions in rehabilitative settings and after discharge.


2022 ◽  
Vol 12 ◽  
Author(s):  
Zizhao Dong ◽  
Gang Wang ◽  
Shaoyuan Lu ◽  
Jingting Li ◽  
Wenjing Yan ◽  
...  

Facial expressions are a vital way for humans to show their perceived emotions. In deep learning, it is convenient to detect and recognize expressions or micro-expressions by annotating large amounts of data. However, the study of video-based expressions or micro-expressions requires that coders have professional knowledge and be familiar with action unit (AU) coding, leading to considerable difficulty. This paper aims to alleviate this situation. We deconstruct facial muscle movements from the motor cortex and systematically sort out the relationships among facial muscles, AUs, and emotion so that more people can understand coding from basic principles: (1) we derived the relationship between AUs and emotion from a data-driven analysis of 5,000 images in the RAF-AU database, combined with the experience of professional coders; (2) we discussed the complex facial motor cortical network system that generates facial movement properties, detailing the facial nucleus and the motor system associated with facial expressions; (3) we obtained the physiological theory supporting the AU labeling of emotions by adding facial muscle movement patterns; (4) we present the detailed process of emotion labeling and of AU detection and recognition. Based on this research, the coding of spontaneous expressions and micro-expressions in video is summarized and future directions are outlined.
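The AU-to-emotion relationship discussed above can be illustrated with a small lookup over the AU combinations commonly cited in the FACS literature (e.g., happiness as AU6+AU12). The prototypes below are assumptions for illustration; the paper derives its mapping empirically from the RAF-AU data.

```python
# Illustrative AU-to-emotion lookup using commonly cited FACS prototypes.
# These combinations are assumptions, not the paper's data-driven mapping.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},
    "sadness": {1, 4, 15},
    "surprise": {1, 2, 5, 26},
    "fear": {1, 2, 4, 5, 7, 20, 26},
    "anger": {4, 5, 7, 23},
    "disgust": {9, 15, 16},
}

def label_emotion(active_aus):
    """Pick the emotion whose AU prototype best overlaps the active AUs."""
    def score(item):
        _, prototype = item
        # Fraction of the prototype's AUs that are currently active.
        return len(prototype & active_aus) / len(prototype)
    emotion, prototype = max(EMOTION_PROTOTYPES.items(), key=score)
    return emotion if prototype & active_aus else None

print(label_emotion({6, 12}))     # happiness
print(label_emotion({1, 4, 15}))  # sadness
```

A real coder (or classifier) would also weigh AU intensities and asymmetries, which this sketch ignores.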


2019 ◽  
Vol 11 (5) ◽  
pp. 105 ◽  
Author(s):  
Yongrui Huang ◽  
Jianhao Yang ◽  
Siyu Liu ◽  
Jiahui Pan

Emotion recognition plays an essential role in human–computer interaction. Previous studies have investigated the use of facial expression and electroencephalogram (EEG) signals from a single modality for emotion recognition separately, but few have paid attention to a fusion between them. In this paper, we adopted a multimodal emotion recognition framework combining facial expression and EEG, based on a valence-arousal emotional model. For facial expression detection, we followed a transfer learning approach with multi-task convolutional neural network (CNN) architectures to detect the state of valence and arousal. For EEG detection, the two learning targets (valence and arousal) were detected by separate support vector machine (SVM) classifiers. Finally, two decision-level fusion methods, based on an enumerated-weight rule or an adaptive boosting technique, were used to combine facial expression and EEG. In the experiment, the subjects were instructed to watch clips designed to elicit an emotional response and then reported their emotional state. We used two emotion datasets—a Database for Emotion Analysis using Physiological Signals (DEAP) and MAHNOB-human computer interface (MAHNOB-HCI)—to evaluate our method. In addition, we also performed an online experiment to make our method more robust. We experimentally demonstrated that our method produces state-of-the-art results in terms of binary valence/arousal classification on the DEAP and MAHNOB-HCI datasets. Moreover, in the online experiment, we achieved 69.75% accuracy for the valence space and 70.00% accuracy for the arousal space after fusion, each of which surpassed the highest-performing single modality (69.28% for the valence space and 64.00% for the arousal space). The results suggest that combining facial expression and EEG information for emotion recognition compensates for their defects as single information sources. The novelty of this work is as follows. 
To begin with, we combined facial expression and EEG to improve the performance of emotion recognition. Furthermore, we used transfer learning techniques to tackle the problem of lacking data and achieve higher accuracy for facial expression. Finally, in addition to implementing the widely used fusion method based on enumerating different weights between two models, we also explored a novel fusion method applying a boosting technique.
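The enumerated-weight fusion described above can be sketched as a weighted sum of the two modalities' class probabilities, with the weight chosen by grid search on a validation set. The function names and toy data below are illustrative, not from the paper.

```python
# Minimal sketch of enumerated-weight decision-level fusion:
# fused probability = w * p_face + (1 - w) * p_eeg, with w picked by
# enumerating values in [0, 1] on validation data.
import numpy as np

def fuse(p_face, p_eeg, w):
    """Weighted sum of the two modalities' positive-class probabilities."""
    return w * p_face + (1 - w) * p_eeg

def best_weight(p_face, p_eeg, labels, steps=101):
    """Enumerate w in [0, 1] and keep the value with the highest accuracy."""
    best_w, best_acc = 0.0, -1.0
    for w in np.linspace(0.0, 1.0, steps):
        preds = (fuse(p_face, p_eeg, w) >= 0.5).astype(int)
        acc = (preds == labels).mean()
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy validation set: the face model is right on all four trials, the
# EEG model on only two, so a face-leaning weight reaches full accuracy.
p_face = np.array([0.9, 0.8, 0.2, 0.1])
p_eeg = np.array([0.6, 0.4, 0.7, 0.3])
labels = np.array([1, 1, 0, 0])
w, acc = best_weight(p_face, p_eeg, labels)
print(acc)  # 1.0
```

The adaptive-boosting variant instead learns per-trial weights, but the decision-level idea, combining posterior probabilities rather than raw features, is the same.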


2020 ◽  
Vol 8 (5) ◽  
pp. 1720-1723

Research on facial expression detection, often called emotion detection, has been growing day by day. With reliable judgement of feelings, we could obtain instant feedback from clients, gain a better understanding of human conduct while using information technology, and thereby make systems and user interfaces more empathic and intelligent. Human-computer interaction systems for automatic face recognition or facial expression recognition have attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. People have always had the intrinsic capacity to recognize faces and read expressions. Our challenge is to make a computer do the same; in other words, to make a computer behave and understand like a human. This opens up a wide range of applications. Facial expression detection and recognition can be used to improve access control and security, as the most recent Apple iPhone does; enable payments to be processed without physical cards (the iPhone does this as well); support criminal identification; and allow personalized healthcare and other services. Facial expression detection and recognition is a heavily researched topic, and there are many resources on the web. We have tried various open-source projects to find the ones that are simplest to implement while remaining accurate.


2003 ◽  
Vol 17 (3) ◽  
pp. 113-123 ◽  
Author(s):  
Jukka M. Leppänen ◽  
Mirja Tenhunen ◽  
Jari K. Hietanen

Abstract Several studies have shown faster choice-reaction times to positive than to negative facial expressions. The present study examined whether this effect is exclusively due to faster cognitive processing of positive stimuli (i.e., processes leading up to, and including, response selection), or whether it also involves faster motor execution of the selected response. In two experiments, response selection (onset of the lateralized readiness potential, LRP) and response execution (LRP onset-response onset) times for positive (happy) and negative (disgusted/angry) faces were examined. Shorter response selection times for positive than for negative faces were found in both experiments but there was no difference in response execution times. Together, these results suggest that the happy-face advantage occurs primarily at premotoric processing stages. Implications that the happy-face advantage may reflect an interaction between emotional and cognitive factors are discussed.


2010 ◽  
Vol 24 (3) ◽  
pp. 186-197 ◽  
Author(s):  
Sandra J. E. Langeslag ◽  
Jan W. Van Strien

It has been suggested that emotion regulation improves with aging. Here, we investigated age differences in emotion regulation by studying modulation of the late positive potential (LPP) by emotion regulation instructions. The electroencephalogram of younger (18–26 years) and older (60–77 years) adults was recorded while they viewed neutral, unpleasant, and pleasant pictures and while they were instructed to increase or decrease the feelings that the emotional pictures elicited. The LPP was enhanced when participants were instructed to increase their emotions. No age differences were observed in this emotion regulation effect, suggesting that emotion regulation abilities are unaffected by aging. This contradicts studies that measured emotion regulation by self-report, yet accords with studies that measured emotion regulation by means of facial expressions or psychophysiological responses. More research is needed to resolve the apparent discrepancy between subjective self-report and objective psychophysiological measures.


Crisis ◽  
2020 ◽  
pp. 1-8
Author(s):  
Chao S. Hu ◽  
Jiajia Ji ◽  
Jinhao Huang ◽  
Zhe Feng ◽  
Dong Xie ◽  
...  

Abstract. Background: High school and university teachers need to advise students against attempting suicide, the second leading cause of death among 15–29-year-olds. Aims: To investigate the role of reasoning and emotion in advising against suicide. Method: We conducted a study with 130 students at a university that specializes in teachers' education. Participants sat in front of a camera and were videotaped while giving advice against suicide. Three raters scored their transcribed advice on "wise reasoning" (i.e., expert forms of reasoning: considering a variety of conditions, awareness of the limitation of one's knowledge, taking others' perspectives). Four registered psychologists experienced in suicide prevention techniques rated the transcripts on the potential for suicide prevention. Finally, using the software FaceReader 7.1, we analyzed participants' micro-facial expressions during advice-giving. Results: Wiser reasoning and less disgust predicted higher potential for suicide prevention. Moreover, higher potential for suicide prevention was associated with more surprise. Limitations: The actual efficacy of suicide prevention was not assessed. Conclusion: Wise reasoning and counter-stereotypic ideas that trigger surprise probably contribute to the potential for suicide prevention. This advising paradigm may help train teachers in advising students against suicide, measuring wise reasoning, and monitoring a harmful emotional reaction, that is, disgust.


2016 ◽  
Vol 37 (1) ◽  
pp. 16-23 ◽  
Author(s):  
Chit Yuen Yi ◽  
Matthew W. E. Murry ◽  
Amy L. Gentzler

Abstract. Past research suggests that transient mood influences the perception of facial expressions of emotion, but relatively little is known about how trait-level emotionality (i.e., temperament) may influence emotion perception or interact with mood in this process. Consequently, we extended earlier work by examining how temperamental dimensions of negative emotionality and extraversion were associated with the perception accuracy and perceived intensity of three basic emotions and how the trait-level temperamental effect interacted with state-level self-reported mood in a sample of 88 adults (27 men, 18–51 years of age). The results indicated that higher levels of negative mood were associated with higher perception accuracy of angry and sad facial expressions, and higher levels of perceived intensity of anger. For perceived intensity of sadness, negative mood was associated with lower levels of perceived intensity, whereas negative emotionality was associated with higher levels of perceived intensity of sadness. Overall, our findings added to the limited literature on adult temperament and emotion perception.


2020 ◽  
Vol 51 (5) ◽  
pp. 354-359 ◽  
Author(s):  
Yavor Paunov ◽  
Michaela Wänke ◽  
Tobias Vogel

Abstract. Combining the strengths of defaults and transparency information is a potentially powerful way to induce policy compliance. Despite negative theoretical predictions, a recent line of research revealed that default nudges may become more effective if people are informed why they should exhibit the targeted behavior. Yet, it is an open empirical question whether the increase in compliance came from setting a default and subsequently disclosing it, or whether the provided information was sufficient to deliver the effect on its own. Results from an online experiment indicate that both defaulting and transparency information exert statistically independent effects on compliance, with the highest compliance rates observed in the combined condition. Practical and theoretical implications are discussed.

