Multimodal validation of facial expression detection software for real-time monitoring of affect in patients with suicidal intent

2016 ◽  
Vol 33 (S1) ◽  
pp. S596-S596 ◽  
Author(s):  
F. Amico ◽  
G. Healy ◽  
M. Arvaneh ◽  
D. Kearney ◽  
E. Mohedano ◽  
...  

Facial expression is an independent and objective marker of affect. Basic emotions (fear, sadness, joy, anger, disgust and surprise) have been shown to be universal across human cultures. Techniques such as the Facial Action Coding System (FACS) can capture emotion with good reliability by visually coding the changes in the assemblies of facial muscles that produce the facial expression of affect. Recent advances in computing and facial expression analysis software now allow real-time, objective measurement of emotional states. In particular, a recently developed software and equipment package, the iMotions Attention Tool™, captures information on discrete emotional states based on facial expressions while a subject performs a behavioural task.

Extending preliminary work by further experimentation and analysis, the present findings suggest a link between facial affect data and established peripheral arousal measures such as event-related potentials (ERP), heart rate variability (HRV) and galvanic skin response (GSR), using noninvasive and clinically applicable technology in patients reporting suicidal ideation and intent compared with controls. Our results hold promise for the establishment of a computerized diagnostic battery that clinicians can use to improve the evaluation of suicide risk.

Disclosure of interest: The authors have not supplied their declaration of competing interest.
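To make the kind of multimodal linkage described above concrete, here is a minimal sketch of relating a facial-affect time series to one peripheral arousal measure, HRV quantified as RMSSD. All inputs (per-trial valence scores, RR intervals, trial counts) are hypothetical placeholders, not data from the study.

```python
# Minimal sketch: correlating facial-affect scores with an HRV index (RMSSD).
# All inputs are simulated placeholders for illustration only.
import numpy as np
from scipy.stats import pearsonr

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (a common HRV index)."""
    diffs = np.diff(rr_intervals_ms)
    return np.sqrt(np.mean(diffs ** 2))

rng = np.random.default_rng(0)
# Hypothetical per-trial data: mean facial valence from an AFEA tool, RR intervals in ms.
valence_per_trial = np.array([0.12, -0.30, 0.05, -0.44, 0.21])
rr_per_trial = [rng.normal(800, 50, 60) for _ in valence_per_trial]

hrv_per_trial = np.array([rmssd(rr) for rr in rr_per_trial])
r, p = pearsonr(valence_per_trial, hrv_per_trial)
print(f"valence-HRV correlation: r={r:.2f}, p={p:.3f}")
```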

2021 ◽  
Vol 15 ◽  
Author(s):  
Minju Kim ◽  
Jongsu Kim ◽  
Dojin Heo ◽  
Yunjoo Choi ◽  
Taejun Lee ◽  
...  

Using P300-based brain–computer interfaces (BCIs) in daily life should take the user’s emotional state into account, because various emotional conditions are likely to influence event-related potentials (ERPs) and, consequently, the performance of P300-based BCIs. This study investigated whether external emotional stimuli affect the performance of a P300-based BCI, particularly one built for controlling home appliances. We presented a set of emotional auditory stimuli, selected for each subject in advance based on individual valence scores, while subjects controlled an electric light device using a P300-based BCI. There were four auditory conditions: high valence, low valence, noise, and no sound. Subjects controlled the electric light device using the BCI in real time with a mean accuracy of 88.14%. Neither the overall accuracy nor the P300 features over most EEG channels showed a significant difference between the four auditory conditions (p > 0.05). When we measured emotional states using frontal alpha asymmetry (FAA) and compared FAA across the auditory conditions, we also found no significant difference (p > 0.05). Our results suggest that there is no clear evidence that external emotional stimuli influence P300-based BCI performance or P300 features while people control devices using the BCI in real time. This study may provide useful information for those concerned with implementing a P300-based BCI in practice.
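FAA is typically computed as the difference of log alpha power between homologous frontal electrodes, commonly ln(alpha power at F4) minus ln(alpha power at F3). The following is a minimal sketch of that computation; the sampling rate, electrode pair, alpha band limits and simulated signals are assumptions for illustration, not the study's exact pipeline.

```python
# Minimal sketch of frontal alpha asymmetry: ln(alpha power at F4) - ln(alpha power at F3).
# Signals here are random placeholders; fs and band limits are assumed values.
import numpy as np
from scipy.signal import welch

fs = 250  # sampling rate in Hz (assumed)

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Integrate the Welch PSD over the alpha band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

rng = np.random.default_rng(0)
f3 = rng.standard_normal(10 * fs)  # stand-in for the F3 channel
f4 = rng.standard_normal(10 * fs)  # stand-in for the F4 channel

faa = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
# Higher alpha implies lower cortical activity, so positive FAA is often read
# as greater relative left-frontal activation (approach-related affect).
print(f"FAA = {faa:.3f}")
```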


Beverages ◽  
2020 ◽  
Vol 6 (2) ◽  
pp. 27 ◽  
Author(s):  
Samuel J. Kessler ◽  
Funan Jiang ◽  
R. Andrew Hurley

In the late 1970s, the analysis of facial expressions to unveil emotional states began to grow and flourish along with new technologies and software advances. Researchers have always been able to document what consumers do, but understanding how consumers feel at a specific moment in time is an important part of the product development puzzle. Because of this, biometric testing methods have been used in numerous studies as researchers have worked to develop a more comprehensive understanding of consumers. Despite the many articles on automated facial expression analysis (AFEA), the literature on food and beverage studies is limited. There are no standards to guide researchers in setting up materials, processing data, or conducting a study, and there are few, if any, compilations of the studies that have been performed to determine whether any methodologies work better than others or what trends have emerged. Through a systematic Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) review, 38 articles were found that were relevant to the research goals. The authors identified AFEA study methods that have worked, and those that have been less successful, and noted trends of particular importance. Key takeaways include a listing of commercial AFEA software, the experimental methods used within the PRISMA analysis, and a comprehensive explanation of the critical methods and practices of the studies analyzed. Key information was analyzed and compared to determine its effects on study outcomes. Based on this analysis, suggestions and guidance for conducting AFEA experiments and analyzing their data are discussed.


2017 ◽  
Vol 41 (S1) ◽  
pp. S635-S635
Author(s):  
B. Sutcubasi Kaya ◽  
B. Metin ◽  
F.Z. Krzan ◽  
N. Tarhan ◽  
C. Tas

Introduction: Alterations in reward processing are frequently reported in ADHD. One important factor that affects reward processing is the type of reward, as social and monetary rewards are processed by different neural networks. However, the effect of reward type on reward processing in ADHD has not been extensively studied.

Aims: We aimed to explore the effect of reward type (i.e., social or monetary) on different phases of reward processing, and to test the hypothesis that ADHD symptoms may be associated with a problem in the processing of social rewards.

Methods: We recorded event-related potentials (ERPs) during a spatial attention paradigm in which cues heralded the availability and type of the upcoming reward and feedback informed subjects about the reward earned. Thirty-nine healthy individuals (19 males and 20 females; age range: 19–27) participated in the study. ADHD symptoms were measured using the ADHD Self-Report Scale (ASRS).

Results: The feedback-related potentials, namely the feedback-related negativity (FRN), P200 and P300 amplitudes, were larger for social rewards than for monetary rewards (Fig. 1). There was a consistent negative correlation between the hyperactivity subscale of the ASRS and almost all feedback-related ERPs: ERP amplitudes after social rewards were smaller for individuals with more hyperactivity.

Conclusions: Our findings suggest that hyporesponsiveness to social rewards may be associated with hyperactivity. However, these results have to be confirmed in clinical populations.

Disclosure of interest: The authors have not supplied their declaration of competing interest.
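The core analysis pattern here, extracting a feedback-locked component as a mean amplitude in a time window and correlating it with a questionnaire score, can be sketched as follows. The sampling rate, epoch layout, FRN window (250–350 ms) and all data are assumptions and placeholders, not the study's actual parameters.

```python
# Minimal sketch: mean feedback-locked ERP amplitude in a component window
# (e.g., FRN-like, ~250-350 ms) correlated with a hyperactivity score.
# Epochs and scores are simulated placeholders.
import numpy as np
from scipy.stats import pearsonr

fs = 500           # sampling rate in Hz (assumed)
epoch_start = -0.2 # epoch begins 200 ms before feedback onset (assumed)

def mean_amplitude(epochs, fs, t0, win=(0.25, 0.35)):
    """Average amplitude across trials within a post-feedback time window."""
    i0 = int((win[0] - t0) * fs)
    i1 = int((win[1] - t0) * fs)
    return epochs[:, i0:i1].mean()

n_subjects = 39
rng = np.random.default_rng(0)
# One simulated epoch matrix per subject: 40 trials x 0.8 s of samples.
frn = np.array([
    mean_amplitude(rng.normal(size=(40, 400)), fs, epoch_start)
    for _ in range(n_subjects)
])
asrs_hyperactivity = rng.integers(0, 37, size=n_subjects)  # placeholder scores

r, p = pearsonr(asrs_hyperactivity, frn)
print(f"hyperactivity-FRN correlation: r={r:.2f}, p={p:.3f}")
```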


2017 ◽  
Vol 41 (S1) ◽  
pp. S171-S172 ◽  
Author(s):  
L. Gu

Introduction: Previous studies have provided inconsistent evidence for the effect of apolipoprotein E ɛ4 (APOE ɛ4) status on visuospatial working memory (VSWM). Ours is the first event-related potential (ERP) investigation of the effect of APOE ɛ4 on VSWM in healthy elders and patients with amnestic mild cognitive impairment (aMCI).

Objective: The aim was to investigate the effect of APOE ɛ4 on VSWM with an ERP study in healthy elders and aMCI patients.

Methods: Thirty-nine aMCI patients (27 APOE ɛ4 non-carriers and 12 APOE ɛ4 carriers) and 43 matched controls (25 APOE ɛ4 non-carriers and 18 APOE ɛ4 carriers) performed an N-back task, a VSWM paradigm that manipulates the number of items to be stored in memory.

Results: We detected reduced accuracy and delayed mean correct response times in aMCI patients compared with healthy elders. The P300 elicited by the VSWM task had lower amplitude in aMCI patients than in healthy controls at the central-parietal and parietal electrodes. In healthy elders, P300 amplitude declined in APOE ɛ4 carriers relative to non-carriers before any change in task performance. In aMCI patients, the P300 amplitude results revealed exacerbated VSWM deficits in APOE ɛ4 carriers compared with non-carriers. Additionally, standardized low-resolution brain electromagnetic tomography (sLORETA) analysis showed enhanced brain activation in the right parahippocampal gyrus during the P300 time range in APOE ɛ4 carriers compared with non-carriers among aMCI patients (Fig. 1, Tables 1 and 2).

Conclusions: These findings suggest that P300 amplitude might serve as a biomarker for recognizing aMCI patients and contribute to the early detection of worse VSWM in APOE ɛ4 carriers compared with non-carriers.

Disclosure of interest: The author has not supplied his/her declaration of competing interest.
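A carrier vs. non-carrier contrast on a P300 mean amplitude reduces to a two-sample comparison per group. Below is a minimal sketch; the group sizes echo the healthy-elder subgroups, and the amplitude values, window and electrode choice are simulated assumptions, not the study's data.

```python
# Minimal sketch: comparing parietal P300 mean amplitude (e.g., 300-500 ms at Pz)
# between APOE e4 carriers and non-carriers. Values are simulated placeholders.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
p300_carriers = rng.normal(3.5, 1.2, size=18)     # mean window amplitude in microvolts
p300_noncarriers = rng.normal(4.8, 1.2, size=25)  # hypothetical group difference

t, p = ttest_ind(p300_carriers, p300_noncarriers)
print(f"carriers vs. non-carriers: t={t:.2f}, p={p:.3f}")
```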


Human emotions are mental states that arise spontaneously rather than through cognitive effort. Among the basic states are happiness, anger, sadness, surprise and a neutral state, and these internal feelings are reflected on the face as facial expressions. This paper presents a novel methodology for facial expression analysis that aids in developing a facial expression recognition system. The system can be used in real time to classify five basic emotions. The recognition of facial expressions is important because of its applications in many domains such as artificial intelligence, security and robotics. Many different approaches can be used to address the problems of Facial Expression Recognition (FER), but the technique best suited to automated FER is the Convolutional Neural Network (CNN). Thus, a novel CNN architecture is proposed, and a combination of multiple datasets, such as FER2013, FER+, JAFFE and CK+, is used for training and testing. This helps to improve accuracy and to develop a robust real-time system. The proposed methodology yields good results, and the obtained accuracy may encourage researchers to build better models for automated facial expression recognition systems.
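As a point of reference for the approach described above, here is a minimal Keras sketch of a CNN for five-class facial expression recognition on 48x48 grayscale crops (the FER2013 format). The layer sizes and hyperparameters are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a five-class FER CNN on 48x48 grayscale inputs.
# Architecture details are illustrative, not the proposed model.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fer_cnn(num_classes=5, input_shape=(48, 48, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),  # regularization; FER datasets are small and noisy
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fer_cnn()
model.summary()
```

Training on a merged FER2013/FER+/JAFFE/CK+ pool, as the paper describes, would additionally require mapping each dataset's labels onto the shared five-class scheme before fitting.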


2021 ◽  
Author(s):  
Sandra Naumann ◽  
Mareike Bayer ◽  
Simone Kirst ◽  
Elke van der Meer ◽  
Isabel Dziobek

The development of socio-emotional competencies (SEC) has proven key for school and life success as well as for preventing mental illness. Digital SEC trainings create new ways to strengthen children’s mental health, especially in times of disrupted childcare and the resulting increase in mental health problems during the COVID-19 pandemic. Despite the potential benefits, few studies have examined the effectiveness of digital SEC trainings in young children. In a six-week study, we tested the digital SEC training Zirkus Empathico with four- to six-year-old typically developing children (N = 60) using parent and child SEC ratings as well as EEG. The registered primary outcome was empathy (GEM, EMK 3-6); secondary outcomes included emotion knowledge (EMK 3-6), prosocial behavior (SDQ), reduction of problematic behaviors (SDQ), and children’s neural sensitivity to facial expressions, quantified with early (P1, N170) and late (P3) event-related potentials. Compared to age- and gender-matched controls (N = 30), the Zirkus Empathico group (N = 30) showed increases in empathy, emotion recognition and prosocial behavior and reduced behavioral problems post-training, and increased empathy at a three-month follow-up. Zirkus Empathico participants had larger P3 amplitudes for happy vs. neutral facial expressions, whereas controls showed larger P3 amplitudes for angry vs. neutral facial expressions. Given the training group’s improvements across behavioral measures, Zirkus Empathico may be a promising digital SEC training. The EEG results seem to corroborate the behavioral findings: the training group allocated more neural resources toward happy faces, potentially indicative of training-induced, accelerated maturation in the regulation of positive emotional states.


2021 ◽  
Author(s):  
Arianna Schiano Lomoriello ◽  
Antonio Maffei ◽  
Sabrina Brigadoi ◽  
Paola Sessa

Simulation models of facial expressions suggest that posterior visual areas and the brain areas underpinning sensorimotor simulation might interact to improve facial expression processing. According to these models, facial mimicry, a manifestation of sensorimotor simulation, may contribute to the visual processing of facial expressions by influencing its early stages. The aim of this study was to assess whether and how sensorimotor simulation influences early stages of face processing, and to investigate its relationship with alexithymic traits, given that previous studies have suggested that individuals with high levels of alexithymic traits tend to use sensorimotor simulation to a lesser extent than individuals with low levels. We monitored the P1 and N170 components of the event-related potential (ERP) in participants performing a fine-grained discrimination task on facial expressions and, as a control condition, animals. In half of the experiment, participants could freely use their facial mimicry, whereas in the other half their facial mimicry was blocked by a gel. Our results revealed that only individuals with low (compared to high) alexithymic traits showed a larger modulation of P1 amplitude as a function of the mimicry manipulation, selectively for facial expressions (but not for animals), while we did not observe any modulation of the N170. Given the null results at the behavioural level, we interpret the P1 modulation as compensatory visual processing in individuals with low levels of alexithymia under conditions of interference with sensorimotor processing, providing preliminary evidence in favor of sensorimotor simulation models.


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2578
Author(s):  
Yu-Jin Hong ◽  
Sung Eun Choi ◽  
Gi Pyo Nam ◽  
Heeseung Choi ◽  
Junghyun Cho ◽  
...  

Facial expressions are one of the important non-verbal channels used to understand human emotions during communication, so acquiring and reproducing facial expressions is helpful in analyzing human emotional states. However, owing to complex and subtle facial muscle movements, modeling facial expressions from images with non-frontal face poses is difficult. To handle this issue, we present a method for acquiring facial expressions from a non-frontal single photograph using a 3D-aided approach. In addition, we propose a contour-fitting method that improves modeling accuracy by automatically rearranging 3D contour landmarks to correspond with fixed 2D image landmarks. The acquired facial expression input can be parametrically manipulated to create various facial expressions through blendshapes or expression transfer based on the FACS (Facial Action Coding System). To achieve realistic facial expression synthesis, we propose an exemplar-texture wrinkle synthesis method that extracts and synthesizes appropriate expression wrinkles according to the target expression; to this end, we constructed a wrinkle table of various facial expressions from 400 people. As one application, we showed through quantitative evaluation that the expression-pose synthesis method is suitable for expression-invariant face recognition, and demonstrated its effectiveness through qualitative evaluation. We expect our system to benefit various fields such as face recognition, HCI, and data augmentation for deep learning.
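The blendshape manipulation mentioned above boils down to a linear model: a synthesized mesh is the neutral mesh plus a weighted sum of per-expression displacement vectors. Here is a minimal sketch of that combination; the mesh sizes, number of shapes and weights are placeholder assumptions, unrelated to the paper's actual model.

```python
# Minimal sketch of blendshape expression synthesis:
#   expression = neutral + sum_i(w_i * delta_i)
# Meshes below are random placeholders for illustration.
import numpy as np

n_vertices = 5000
rng = np.random.default_rng(2)

neutral = rng.normal(size=(n_vertices, 3))            # neutral 3D mesh (x, y, z per vertex)
deltas = rng.normal(size=(4, n_vertices, 3)) * 0.01   # e.g., smile, frown, brow raise, jaw open
weights = np.array([0.8, 0.0, 0.3, 0.1])              # expression coefficients in [0, 1]

# Contract the shape axis: weighted sum of displacement fields added to the neutral mesh.
expression = neutral + np.tensordot(weights, deltas, axes=1)
print(expression.shape)  # (5000, 3)
```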

