Emotion Identification
Recently Published Documents


TOTAL DOCUMENTS: 205 (last five years: 85)
H-INDEX: 17 (last five years: 4)

2022 · Vol 185 · pp. 111290
Author(s): Christen M. Deveney, Goretty Chavez, Lynandrea Mejia

Author(s): S. R. Ashokkumar, S. Anupallavi, G. MohanBabu, M. Premkumar, V. Jeevanantham

2021 · Vol 10 (1) · pp. 32
Author(s): Akhilesh Kumar, Awadhesh Kumar

Emotion identification and categorization have emerged as central tasks in brain–machine interface research. Audio, visual, and electroencephalography (EEG) data have all been shown to be useful for automated emotion identification in a number of studies. EEG-based emotion detection is a critical component of psychiatric health assessment for individuals. When EEG sensor data are collected across multiple experimental sessions or participants, the underlying signals are invariably non-stationary. Because EEG signals are noisy, non-stationary, and non-linear, building an intelligent system that identifies emotions with good accuracy is challenging. Many researchers have shown evidence that EEG brain waves can be used to determine emotional states. This study introduces a novel automated emotion identification system that employs deep learning principles to recognize emotions from EEG signals recorded during computer games. EEG data were obtained from 28 distinct participants using 14-channel Emotiv EPOC+ portable, wearable EEG equipment. Participants played four distinct emotional computer games for five minutes each, yielding a total of 20 minutes of EEG data per participant. The proposed framework categorizes four classes of emotions during gameplay. The results demonstrate that the suggested model-based emotion detection framework is a viable method for recognizing emotions from EEG data. The network achieves 99.99% accuracy with reduced computational time.
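The abstract does not spell out the authors' architecture, but the general recipe it describes can be sketched as a small 1D-CNN over pre-segmented 14-channel EEG windows with four emotion labels. The sampling rate, window length, layer sizes, and synthetic data below are illustrative assumptions, not the published model.

```python
# Minimal sketch (assumed architecture, not the authors' published model):
# a 1D-CNN emotion classifier over pre-segmented 14-channel EEG windows.
import torch
import torch.nn as nn

N_CHANNELS, N_SAMPLES, N_CLASSES = 14, 640, 4  # e.g. 5-s windows at 128 Hz (assumed)

class EEGEmotionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64), nn.ReLU(), nn.MaxPool1d(4),
        )
        # Two 4x poolings shrink the time axis by 16 before the linear head.
        self.classifier = nn.Linear(64 * (N_SAMPLES // 16), N_CLASSES)

    def forward(self, x):  # x: (batch, channels, samples)
        return self.classifier(self.features(x).flatten(1))

model = EEGEmotionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on synthetic stand-in data (a real run would use the
# segmented game-play EEG trials and their emotion labels).
x = torch.randn(8, N_CHANNELS, N_SAMPLES)
y = torch.randint(0, N_CLASSES, (8,))
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice, per-subject normalization and cross-session validation would matter here, since, as the abstract notes, EEG collected across sessions or participants is non-stationary.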


F1000Research · 2021 · Vol 9 · pp. 173
Author(s): Marco Bilucaglia, Gian Marco Duma, Giovanni Mento, Luca Semenzato, Patrizio E. Tressoldi

Machine learning approaches have been fruitfully applied to several neurophysiological signal classification problems. Given the relevance of emotion in human cognition and behaviour, an important application of machine learning has been emotion identification based on neurophysiological activity. Nonetheless, results in the literature vary widely depending on the neuronal activity measurement, the signal features, and the classifier type. The present work aims to provide new methodological insight into machine learning applied to emotion identification based on electrophysiological brain activity. To this end, we analysed previously recorded EEG activity measured while emotional stimuli of high and low arousal (auditory and visual) were presented to a group of healthy participants. Our target signal to classify was the pre-stimulus onset brain activity. Classification performance of three different classifiers (LDA, SVM and kNN) was compared using both spectral and temporal features. Furthermore, we also contrasted the performance of static and dynamic (time-evolving) approaches. The best static feature-classifier combination was the SVM with spectral features (51.8%), followed by LDA with spectral features (51.4%) and kNN with temporal features (51%). The best dynamic feature-classifier combination was the SVM with temporal features (63.8%), followed by kNN with temporal features (63.70%) and LDA with temporal features (63.68%). The results show a clear increase in classification accuracy with temporal dynamic features.
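A minimal sketch of the static feature-classifier comparison described above, assuming band-power ("spectral") features and the three named classifiers; the frequency bands, sampling rate, and synthetic trials are stand-ins, not the authors' pipeline.

```python
# Hedged sketch: compare LDA, SVM, and kNN on band-power features extracted
# from EEG trials. Bands, sampling rate, and data are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(trial):
    """Mean PSD per frequency band, concatenated over channels."""
    freqs, psd = welch(trial, fs=FS, nperseg=FS)  # trial: (channels, samples)
    return np.concatenate([
        psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
        for lo, hi in BANDS.values()
    ])

rng = np.random.default_rng(0)
trials = rng.standard_normal((120, 14, 2 * FS))  # synthetic EEG trials
labels = rng.integers(0, 2, size=120)            # e.g. high vs low arousal
X = np.array([band_powers(t) for t in trials])

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf")),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```

A "dynamic" variant would extract the same features in sliding windows and classify each time step, which is one plausible reading of the time-evolving approach the abstract mentions.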


2021 · pp. 025371762110464
Author(s): Anjali Thomas Mathai, Shweta Rai, Rishikesh V. Behere

Background: The negative appraisal of emotional stimuli is a feature of social anxiety disorder (SAD). People with SAD demonstrate deficits in neurocognitive performance while performing tasks of attention. However, the relationship between attentional control, working memory, and threat perception in SAD has not been well studied. The present study aimed to identify patterns of threat perception in relation to performance on attention and visuospatial working memory tasks in individuals with SAD. Methods: Subjects with SAD (n = 27) and a healthy comparison (HC) group (n = 26) completed tasks of sustained and focused attention, visuospatial working memory, computerized emotion identification, and pictorial emotional Stroop. Results: The SAD group showed decreased performance in the domains of sustained (P = 0.001) and focused attention (P = 0.04). They also showed enhanced threat perception, as demonstrated by greater reaction time to anger (P = 0.03), lower emotion recognition accuracy (P = 0.05), and greater over-identification of threat in neutral and nonthreatening faces. However, the Stroop effect was not demonstrated in either group. No group difference was seen in performance on the visuospatial working memory tasks. Lower focused attention was significantly correlated with higher emotional threat perception (ETP; P = 0.001) in the SAD group. Conclusion: People with SAD have greater deficits in attention processing and ETP. The attention deficits were associated with enhanced ETP in social anxiety. The link between threat perception and cognitive functions would aid a better understanding of SAD and the planning of appropriate interventions.


2021 · Vol 8 (1)
Author(s): Pragati Patel, Raghunandan R, Ramesh Naidu Annavarapu

Many studies on brain–computer interfaces (BCI) have sought to understand the emotional state of the user in order to provide a reliable link between humans and machines. Advanced neuroimaging methods like electroencephalography (EEG) have enabled us to replicate and understand a wide range of human emotions more precisely. This physiological, EEG-based approach stands in contrast to traditional methods based on non-physiological signals and has been shown to perform better. EEG closely measures the electrical activity of the brain (a nonlinear system), and entropy therefore proves to be an efficient feature for extracting meaningful information from raw brain waves. This review gives a brief summary of the various entropy-based methods used for emotion classification, providing insight into EEG-based emotion recognition. It also reviews current and future trends and discusses how using entropy as a feature-extraction measure can achieve enhanced emotion identification from EEG signals.
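To make the entropy-as-feature idea concrete, here is a small sketch of two entropy measures commonly used in this literature, Shannon spectral entropy and sample entropy, computed on a single EEG channel; the parameter choices (m = 2, r = 0.2·SD) are common defaults assumed for illustration.

```python
# Illustrative entropy features for EEG emotion classification. Parameter
# choices are common defaults, assumed rather than taken from the review.
import numpy as np
from scipy.signal import welch

def spectral_entropy(signal, fs):
    """Shannon entropy of the normalized power spectral density."""
    _, psd = welch(signal, fs=fs)
    p = psd / psd.sum()
    return -np.sum(p * np.log2(p + 1e-12))

def sample_entropy(signal, m=2, r=None):
    """SampEn: -log of the conditional probability that sequences matching
    for m points also match for m + 1 points (within tolerance r)."""
    x = np.asarray(signal)
    r = 0.2 * x.std() if r is None else r
    def count(m):
        templates = np.array([x[i:i + m] for i in range(len(x) - m)])
        # Pairwise Chebyshev distances between all templates.
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return np.sum(d <= r) - len(templates)  # exclude self-matches
    return -np.log(count(m + 1) / count(m))

rng = np.random.default_rng(1)
eeg = rng.standard_normal(1024)  # synthetic single-channel segment
print(spectral_entropy(eeg, fs=256), sample_entropy(eeg))
```

In an emotion pipeline, such values would be computed per channel (and often per frequency band) and stacked into the feature vector fed to a classifier.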


Author(s): Ravindra Kumar

Emotions play a powerful role in people's thinking and behavior: they compel action and influence everyday decisions. Facial expressions suggest that humans share a common set of emotions, which motivates the concept of emotion-sensing facial recognition. Researchers have been actively developing computer vision algorithms that determine an individual's emotions and infer the intentions that accompany them. Emotion-sensing facial expression systems are built with data-centric machine-learning techniques and accomplish their task by identifying emotions and the intentions associated with them.
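As a loose illustration of the pipeline this abstract gestures at (the article names no specific method): detect a face, describe it with appearance features, and map the features to an emotion label. Haar cascades and HOG + SVM are standard building blocks chosen here as assumptions, and the synthetic training data stands in for a real labeled expression dataset.

```python
# Hedged sketch of a classical face-emotion pipeline: Haar-cascade face
# detection, HOG appearance features, and an SVM label predictor.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

EMOTIONS = ["angry", "happy", "neutral", "sad"]  # assumed label set

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_features(gray_img):
    """HOG descriptor of the first detected face, resized to 48x48."""
    faces = detector.detectMultiScale(gray_img, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    crop = cv2.resize(gray_img[y:y + h, x:x + w], (48, 48))
    # 48x48 with these settings yields a 900-dimensional descriptor.
    return hog(crop, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Train on synthetic descriptors (a real system would use labeled face crops).
rng = np.random.default_rng(2)
X_train = rng.standard_normal((200, 900))
y_train = rng.integers(0, len(EMOTIONS), 200)
clf = SVC().fit(X_train, y_train)

# At inference: feats = face_features(frame)
# label = EMOTIONS[clf.predict([feats])[0]] if feats is not None else None
```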


2021 · Vol 13 (1)
Author(s): Genevieve Patterson, Kaitlin K. Cummings, Jiwon Jung, Nana J. Okada, Nim Tottenham, ...

Background: Social interaction often occurs in noisy environments with many extraneous sensory stimuli. This is especially relevant for youth with autism spectrum disorders (ASD), who commonly experience sensory over-responsivity (SOR) in addition to social challenges. However, the relationship between SOR and social difficulties is still poorly understood and thus rarely addressed in interventions. This study investigated the effect of auditory sensory distracters on neural processing during emotion identification in youth with ASD, and the effect of increasing attention to social cues by priming participants with their own emotional faces. Methods: While undergoing functional magnetic resonance imaging (fMRI), 30 youth with ASD and 24 typically developing (TD) age-matched controls (ages 8–17 years) identified faces as happy or angry, with and without simultaneously hearing aversive environmental noises. Halfway through the task, participants also viewed videos of their own emotional faces. The relationship between parent-rated auditory SOR and brain responses during the task was also examined. Results: Despite comparable behavioral performance on the task, ASD and TD youth demonstrated distinct patterns of neural activity. Compared to TD youth, ASD youth showed greater increases in amygdala, insula, and primary sensory regions when identifying emotions with noises compared to no sounds. After viewing videos of their own emotional faces, ASD youth showed greater increases in medial prefrontal cortex activation than TD youth. Within the ASD group, lower SOR was associated with smaller post-prime increases in subcortical activity and greater post-prime increases in ventromedial prefrontal cortex activity, particularly in trials with noises. Conclusions: The results suggest that the sensory environment plays an important role in how ASD youth process social information. Additionally, we demonstrated that increasing attention to relevant social cues helps ASD youth engage frontal regions involved in higher-order social cognition, a mechanism that could be targeted in interventions. Importantly, the effect of the intervention may depend on individual differences in SOR, supporting the importance of pre-screening youth for sensory challenges prior to social interventions.

