Analyzing Emotional Oscillatory Brain Network for Valence and Arousal-Based Emotion Recognition Using EEG Data

2019 · Vol 18 (04) · pp. 1359-1378
Author(s): Jianzhuo Yan, Hongzhi Kuai, Jianhui Chen, Ning Zhong

Emotion recognition is a noteworthy and challenging task in both cognitive science and affective computing. Neurobiological studies have revealed partially synchronous oscillations within the brain, a phenomenon that calls for analysis in terms of oscillatory synchronization. This combination of oscillation and synchrony is worth exploring further to build better emotion recognition models. In this paper, we propose a novel approach to valence- and arousal-based emotion recognition using EEG data. First, we construct an emotional oscillatory brain network (EOBN), inspired by the partially synchronous oscillating phenomenon, for emotional valence and arousal. Then, a feature selection method based on the coefficient of variation and Welch's t-test is used to identify the core pattern (cEOBN) within the EOBN for each emotional dimension. Finally, an emotion recognition model (ERM) is built by combining the cEOBN-derived information with different classifiers. The proposed approach combines the oscillation and synchronization characteristics of multi-channel EEG signals to recognize different emotional states along the valence and arousal dimensions, and the cEOBN-based information effectively reduces the dimensionality of the data. The experimental results show that the proposed method can detect affective state at a reasonable level of accuracy.
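As a rough illustration of the screening step described above, the sketch below filters connectivity features first by coefficient of variation and then by Welch's t-test between two emotion classes; the thresholds, array shapes, and function name are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np
from scipy.stats import ttest_ind

def select_core_features(X_pos, X_neg, alpha=0.05, cv_max=1.0):
    """Screen connectivity features with a coefficient-of-variation filter
    followed by Welch's t-test between two emotion classes.

    X_pos, X_neg: (trials, features) arrays, e.g. high vs. low valence.
    alpha and cv_max are illustrative thresholds, not the paper's values.
    """
    X_all = np.vstack([X_pos, X_neg])
    # Coefficient of variation (std/mean) per feature; discard unstable ones.
    cv = X_all.std(axis=0) / (np.abs(X_all.mean(axis=0)) + 1e-12)
    stable = cv < cv_max
    # Welch's t-test (unequal variances) between the two classes.
    _, p = ttest_ind(X_pos, X_neg, axis=0, equal_var=False)
    return np.where(stable & (p < alpha))[0]
```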

2021 · Vol 335 · pp. 04001
Author(s): Didar Dadebayev, Goh Wei Wei, Tan Ee Xion

Emotion recognition, as a branch of affective computing, has attracted great attention in recent decades because it can enable more natural brain-computer interface systems. Electroencephalography (EEG) has proven to be an effective modality for emotion recognition, with which user affective states can be tracked and recorded, especially for primitive emotional events such as arousal and valence. Although brain signals have been shown to correlate with emotional states, the effectiveness of existing models is somewhat limited. The challenge is improving accuracy, and appropriate extraction of valuable features may be the key to success. This study proposes a framework that incorporates fractal dimension features and a recursive feature elimination approach to enhance the accuracy of EEG-based emotion recognition. Fractal dimension and spectrum-based features are extracted and used for more accurate emotional state recognition. Recursive Feature Elimination is used as the feature selection method, and classification of emotions is performed by the Support Vector Machine (SVM) algorithm. The proposed framework will be tested on a widely used public database, and the results are expected to demonstrate higher accuracy and robustness compared to other studies. The primary contribution of this study is the improvement of EEG-based emotion classification accuracy. One potential restriction on how general the results can be is that different EEG datasets might yield different results for the same framework; experimenting with different EEG datasets and testing alternative feature selection schemes is therefore an interesting direction for future work.
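Since the framework is built from standard components, a minimal sketch of the proposed pipeline is straightforward; the Katz estimator, feature counts, kernels, and placeholder data below are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def katz_fd(x):
    """Katz fractal dimension, one common FD estimator for 1-D signals."""
    L = np.abs(np.diff(x)).sum()          # total curve length
    d = np.abs(x - x[0]).max()            # max distance from the first sample
    n = len(x) - 1
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

# Placeholder feature matrix; in practice X would be built by applying
# katz_fd (and spectral estimators) per channel and band.
X, y = np.random.randn(200, 64), np.random.randint(0, 2, 200)

# RFE ranks features with a linear SVC (which exposes coef_), then the
# selected subset is classified; feature counts and step are illustrative.
selector = RFE(SVC(kernel="linear"), n_features_to_select=16, step=4)
clf = make_pipeline(StandardScaler(), selector, SVC())
print(cross_val_score(clf, X, y, cv=5).mean())
```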


2019 · Vol 2019 · pp. 1-14
Author(s): Abhishek Tiwari, Tiago H. Falk

Emotion recognition is a burgeoning field allowing for more natural human-machine interactions and interfaces. Electroencephalography (EEG) has been shown to be a useful modality with which user emotional states can be measured and monitored, particularly primitives such as valence and arousal. In this paper, we propose the use of ordinal pattern analysis, also called motif analysis, for improved EEG-based emotion recognition. Motifs capture recurring structures in time series and are inherently robust to noise, and are thus well suited for the task at hand. Several connectivity, asymmetry, and graph-theoretic features are proposed and extracted from the motifs for affective state recognition. Experiments are conducted with a widely used public database, and the results show the proposed features outperforming benchmark spectrum-based features, as well as more recent non-motif-based graph-theoretic features and amplitude modulation-based connectivity/asymmetry measures. Feature- and score-level fusion suggests complementarity between the proposed and benchmark spectrum-based measures. When combined, the fused models provide up to 9% improvement relative to benchmark features alone and up to 16% relative to non-motif-based graph-theoretic features.
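A minimal sketch of ordinal pattern (motif) extraction is shown below; the order and delay parameters, and the use of the motif histogram as a feature vector, are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from itertools import permutations

def ordinal_patterns(x, order=3, delay=1):
    """Map a time series to its sequence of ordinal patterns (motifs).

    Each window of `order` samples is replaced by the permutation that
    sorts it, so amplitude information is discarded and only the rank
    structure survives, which is what makes motifs robust to noise.
    """
    patterns = {p: i for i, p in enumerate(permutations(range(order)))}
    n = len(x) - (order - 1) * delay
    idx = np.arange(order) * delay + np.arange(n)[:, None]
    return np.array([patterns[tuple(np.argsort(w))] for w in x[idx]])

# Example: motif distribution of a noisy sine, usable as a feature vector.
x = np.sin(np.linspace(0, 8 * np.pi, 500)) + 0.1 * np.random.randn(500)
counts = np.bincount(ordinal_patterns(x), minlength=6)
print(counts / counts.sum())
```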


Sensors · 2020 · Vol 20 (23) · pp. 6719
Author(s): Longbin Jin, Eun Yi Kim

Electroencephalogram (EEG)-based emotion recognition is receiving significant attention in research on brain-computer interfaces (BCI) and health care. To recognize cross-subject emotion from EEG data accurately, a technique capable of finding an effective representation that is robust to the subject-specific variability of EEG data collection is necessary. In this paper, a new method to predict cross-subject emotion using time-series analysis and spatial correlation is proposed. To represent the spatial connectivity between brain regions, a channel-wise feature is proposed, which can effectively handle the correlation between all channels. The channel-wise feature is defined by a symmetric matrix whose elements are the Pearson correlation coefficients between channel pairs, which complementarily handles subject-specific variability. The channel-wise features are then fed to a two-layer stacked long short-term memory (LSTM) network, which extracts temporal features and learns an emotional model. Extensive experiments on two publicly available datasets, the Dataset for Emotion Analysis using Physiological Signals (DEAP) and the SJTU (Shanghai Jiao Tong University) Emotion EEG Dataset (SEED), demonstrate the effectiveness of combining channel-wise features and LSTM. The method achieves state-of-the-art classification rates of 98.93% and 99.10% for two-class classification of valence and arousal on DEAP, respectively, and an accuracy of 99.63% for three-class classification on SEED.
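The sketch below illustrates the general idea, assuming placeholder shapes and layer sizes: per-window Pearson correlation matrices form the channel-wise features, and a two-layer stacked LSTM classifies the resulting sequence. It is not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn

def channelwise_features(eeg, win=128):
    """Split (channels, samples) EEG into windows and compute the Pearson
    correlation matrix between all channel pairs per window; the upper
    triangle is flattened into one feature vector per time step."""
    C, T = eeg.shape
    iu = np.triu_indices(C, k=1)
    feats = [np.corrcoef(eeg[:, t:t + win])[iu]
             for t in range(0, T - win + 1, win)]
    return np.array(feats, dtype=np.float32)   # (steps, C*(C-1)/2)

class EmotionLSTM(nn.Module):
    """Two-layer stacked LSTM over channel-wise features (layer sizes are
    illustrative, not the paper's configuration)."""
    def __init__(self, n_feat, n_class, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, n_class)
    def forward(self, x):                  # x: (batch, steps, n_feat)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])         # classify from the last time step

eeg = np.random.randn(32, 1280)            # 32 channels, 10 s at 128 Hz (DEAP-like)
seq = torch.from_numpy(channelwise_features(eeg))[None]   # add batch dim
logits = EmotionLSTM(seq.shape[-1], n_class=2)(seq)
```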


2021 · Vol 11 (6) · pp. 696
Author(s): Naveen Masood, Humera Farooq

Most electroencephalography (EEG)-based emotion recognition systems rely on a single stimulus to evoke emotions. These systems make use of videos, sounds, and images as stimuli. Few studies have addressed self-induced emotions, and the question of whether different stimulus-presentation paradigms for the same emotion produce any subject- and stimulus-independent neural correlates remains unanswered. Although publicly available datasets are used in a large number of studies targeting EEG-based recognition of human emotional states, one of the major concerns and contributions of this work is classifying emotions while subjects experience different stimulus-presentation paradigms, which required new experiments. This paper presents a novel experimental study that recorded EEG data for three human emotional states (fear, neutral, and joy) evoked with four different stimulus-presentation paradigms: emotional imagery, pictures, sounds, and audio-video movie clips. Features were extracted from the recorded EEG data with the common spatial pattern (CSP) method and classified with linear discriminant analysis (LDA). Experiments were conducted with twenty-five participants, and classification performance in the different paradigms was evaluated across spectral bands. With a few exceptions, all paradigms showed the best emotion recognition in the higher frequency ranges. Interestingly, joy was classified more reliably than fear. The average neural patterns for fear versus joy are presented as topographical maps based on the CSP spatial filters of averaged band-power changes for all four paradigms. Across spectral bands, beta and alpha oscillation responses produced the highest number of significant results for the paradigms under consideration; across brain regions, the frontal lobe produced the most significant results irrespective of paradigm and spectral band, with the temporal site also playing an effective role in generating statistically significant findings. To the best of our knowledge, no previous study has addressed EEG-based emotion recognition with four different stimulus paradigms. This work therefore contributes to the design of EEG-based systems for human emotion recognition that could work effectively in different real-time scenarios.
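A minimal sketch of a CSP + LDA pipeline of the kind described, using MNE's CSP implementation with placeholder epochs and illustrative parameters (not the study's montage, band, or filter settings):

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# epochs: (n_trials, n_channels, n_samples) band-pass-filtered EEG epochs;
# y: binary emotion labels (e.g., fear vs. joy). Shapes are placeholders.
epochs = np.random.randn(100, 32, 512)
y = np.random.randint(0, 2, 100)

# CSP learns spatial filters that maximize variance differences between
# classes; the log-variance of the filtered signals feeds the LDA.
clf = Pipeline([("csp", CSP(n_components=4, log=True)),
                ("lda", LinearDiscriminantAnalysis())])
print(cross_val_score(clf, epochs, y, cv=5).mean())
```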


2021 · pp. 1-17
Author(s): Naveen Masood, Humera Farooq

Most electroencephalography (EEG)-based emotion recognition systems rely on a single stimulus to evoke emotions. EEG data are also mostly recorded with a large number of electrodes, which can lead to data redundancy and longer experimental setup time. The question of whether a configuration with fewer electrodes is common across different stimulus-presentation paradigms remains unanswered. Although there are publicly available datasets for EEG-based recognition of human emotional states, this work focuses on classifying emotions while subjects experience different stimuli, so new experiments were needed. With these issues in mind, this work presents a novel experimental study that recorded EEG data for three human emotional states evoked with four different stimulus-presentation paradigms. A methodology based on an iterative genetic algorithm combined with majority voting is used to find configurations with a reduced number of EEG electrodes while minimizing the loss of classification accuracy. The results obtained are comparable with recent studies. Stimulus-independent configurations with fewer electrodes lead to lower computational complexity as well as reduced setup time for future EEG-based smart systems for emotion recognition.
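A toy version of a genetic algorithm over binary electrode masks is sketched below; the fitness penalty, population size, operators, and the LDA stand-in classifier are illustrative assumptions, and the majority-voting step across repeated GA runs is omitted.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated accuracy using only the channels switched on in mask,
    minus a small penalty per channel to favour smaller configurations.
    The penalty weight is an illustrative assumption."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(LinearDiscriminantAnalysis(),
                          X[:, mask], y, cv=3).mean()
    return acc - 0.002 * mask.sum()

def ga_select(X, y, n_ch, pop=20, gens=30, p_mut=0.05):
    """Minimal GA over binary channel masks: truncation selection,
    uniform crossover, and bit-flip mutation."""
    popu = rng.random((pop, n_ch)) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in popu])
        parents = popu[np.argsort(scores)[-pop // 2:]]        # keep best half
        kids = []
        while len(kids) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(n_ch) < 0.5, a, b)    # uniform crossover
            child ^= rng.random(n_ch) < p_mut                 # bit-flip mutation
            kids.append(child)
        popu = np.vstack([parents, kids])
    best = popu[np.argmax([fitness(m, X, y) for m in popu])]
    return np.where(best)[0]    # indices of the retained electrodes
```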


2020 · Vol 13 (4) · pp. 4-24
Author(s): V.A. Barabanschikov, E.V. Suvorova

The article is devoted to the results of validating the Geneva Emotion Recognition Test (GERT), a Swiss method for assessing dynamic emotional states, on a Russian sample. Identification accuracy and the structure of the categorical fields of emotional expressions of a "living" face are analysed. Similarities and differences in the perception of affective groups of dynamic emotions in the Russian and Swiss samples are considered. A number of patterns in the recognition of multi-modal expressions with changes in the valence and arousal of emotions are described, and differences in the perception of dynamic versus static emotional expressions are revealed. The GERT method confirmed its high potential for solving a wide range of academic and applied problems.


Author(s): Benjamin Aribisala, Obaro Olori, Patrick Owate

Introduction: Emotion plays a key role in our daily life and work, especially in decision making, as people's moods can influence their mode of communication, behaviour, and productivity. Emotion recognition has attracted considerable research, and medical imaging technology offers tools for emotion classification. Aims: The aim of this work is to develop a machine learning technique for recognizing emotion based on electroencephalogram (EEG) data. Materials and Methods: Experimentation was based on the publicly available Dataset for Emotion Analysis using Physiological Signals (DEAP). The data comprise EEG signals acquired from thirty-two adults while each watched forty different one-minute music video clips. Participants rated each video in terms of four emotional states, namely arousal, valence, like/dislike, and dominance. We used the Discrete Wavelet Transform to extract wavelet energy, wavelet entropy, and standard deviation features from the dataset, and then classified the extracted features into four emotional states, namely High Valence/High Arousal, High Valence/Low Arousal, Low Valence/High Arousal, and Low Valence/Low Arousal, using Ensemble Bagged Trees. Results: Ensemble Bagged Trees gave sensitivity, specificity, and accuracy of 97.54%, 99.21%, and 97.80%, respectively. Support Vector Machine and Ensemble Boosted Trees gave similar results. Conclusion: Our results show that machine learning classification of emotion using EEG data is very promising. It can help in the treatment of patients, especially those with expression problems such as Amyotrophic Lateral Sclerosis, a muscle disease; knowing the real emotional state of such patients will help doctors provide appropriate medical care. Keywords: Electroencephalogram, Emotion Recognition, Ensemble Classification, Ensemble Bagged Trees, Machine Learning
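A minimal sketch of the described pipeline, with placeholder data and an assumed wavelet ("db4") and decomposition level, since the abstract does not specify them:

```python
import numpy as np
import pywt
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def dwt_features(signal, wavelet="db4", level=4):
    """Wavelet energy, wavelet entropy, and standard deviation per DWT
    sub-band of one EEG channel (wavelet and level are assumptions)."""
    feats = []
    for c in pywt.wavedec(signal, wavelet, level=level):
        energy = np.sum(c ** 2)
        p = c ** 2 / (energy + 1e-12)           # normalized sub-band power
        entropy = -np.sum(p * np.log2(p + 1e-12))
        feats += [energy, entropy, np.std(c)]
    return feats

# One feature row per trial (all channels concatenated); y is the quadrant
# label 0..3 for HVHA/HVLA/LVHA/LVLA. Data here are random placeholders.
trials = np.random.randn(120, 32, 512)          # (trials, channels, samples)
X = np.array([[f for ch in tr for f in dwt_features(ch)] for tr in trials])
y = np.random.randint(0, 4, 120)
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100).fit(X, y)
```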


2021 · Vol 11 (11) · pp. 1392
Author(s): Yue Hua, Xiaolong Zhong, Bingxue Zhang, Zhong Yin, Jianhua Zhang

Affective computing systems can decode cortical activities to facilitate emotional human–computer interaction. However, individual differences in neurophysiological responses among users of a brain–computer interface make it difficult to design a generic emotion recognizer that adapts to a novel individual, and thus pose an obstacle to cross-subject emotion recognition (ER). To tackle this issue, in this study we propose a novel feature selection method, manifold feature fusion and dynamical feature selection (MF-DFS), based on the transfer-learning principle, to determine generalizable features that are stably sensitive to emotional variations. The MF-DFS framework combines local geometrical information feature selection, domain-adaptation-based manifold learning, and dynamical feature selection to enhance the accuracy of the ER system. On three public databases, DEAP, MAHNOB-HCI, and SEED, the performance of MF-DFS is validated under the leave-one-subject-out paradigm with two types of electroencephalography features. Defining three emotional classes per affective dimension, the MF-DFS-based ER classifier achieves accuracies of 0.50 and 0.48 (DEAP) and 0.46 and 0.50 (MAHNOB-HCI) for the arousal and valence dimensions, respectively, and 0.40 for the valence dimension on SEED. These accuracies are significantly superior to several classical feature selection methods on multiple machine learning models.
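The leave-one-subject-out paradigm mentioned above can be sketched with standard tooling; the snippet below uses placeholder features and an SVM stand-in classifier, not MF-DFS itself.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

# X: (trials, features) EEG features pooled over all subjects;
# y: emotion class per trial; groups: subject ID per trial (placeholders).
X = np.random.randn(320, 50)
y = np.random.randint(0, 3, 320)
groups = np.repeat(np.arange(32), 10)          # 32 subjects, 10 trials each

# Leave-one-subject-out: each fold trains on 31 subjects and tests on the
# held-out one, so the score reflects cross-subject generalization.
scores = cross_val_score(SVC(), X, y, cv=LeaveOneGroupOut(), groups=groups)
print(scores.mean())
```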


2021 · Vol 25 (3) · pp. 1717-1730
Author(s): Esma Mansouri-Benssassi, Juan Ye

Emotion recognition through facial expression and non-verbal speech represents an important area in affective computing. Both have been extensively studied, from classical feature extraction techniques to more recent deep learning approaches. However, most of these approaches face two major challenges: (1) robustness: in the face of degradation such as noise, can a model still make correct predictions? and (2) cross-dataset generalisation: when a model is trained on one dataset, can it be used to make inferences on another? To directly address these challenges, we first propose the application of a spiking neural network (SNN) to predicting emotional states from facial expression and speech data, and then investigate and compare its accuracy when facing data degradation or unseen new input. We evaluate our approach on third-party, publicly available datasets and compare it with state-of-the-art techniques. Our approach demonstrates robustness to noise: it achieves an accuracy of 56.2% for facial expression recognition (FER), compared to 22.64% for CNN and 14.10% for SVM, when input images are degraded with a noise intensity of 0.5, and the highest accuracy of 74.3% for speech emotion recognition (SER), compared to 21.95% for CNN and 14.75% for SVM, when audio white noise is applied. For generalisation, our approach achieves a consistently high accuracy of 89% for FER and 70% for SER in cross-dataset evaluation, suggesting that it learns more effective feature representations, which lead to good generalisation of facial features and vocal characteristics across subjects.
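As a rough illustration of this kind of robustness protocol (accuracy under increasing input degradation), the sketch below assumes a generic trained classifier exposing a predict method and a salt-and-pepper noise model; the paper's actual noise model and models may differ.

```python
import numpy as np

def degrade(images, intensity=0.5, seed=0):
    """Add salt-and-pepper noise at the given intensity (fraction of pixels
    flipped to the minimum or maximum value), one common way to stress-test
    robustness; `intensity` mirrors the noise level quoted above."""
    rng = np.random.default_rng(seed)
    noisy = images.copy()
    flip = rng.random(images.shape) < intensity
    noisy[flip] = rng.integers(0, 2, flip.sum()) * images.max()
    return noisy

def robustness_curve(model, images, labels, levels=(0.0, 0.25, 0.5)):
    """Accuracy of an already-trained model as noise intensity increases.
    `model` is any classifier with a predict(inputs) method (assumption)."""
    return {lvl: (model.predict(degrade(images, lvl)) == labels).mean()
            for lvl in levels}
```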


Author(s): Vicente Ávila-Gandía, Francisco Alarcón, José C. Perales, F. Javier López-Román, Antonio J. Luque-Rubia, ...

Endurance physical exercise is accompanied by subjective perceptions of exertion (reported perceived exertion, RPE), emotional valence, and arousal. These constructs have been hypothesized to serve as the basis for the exerciser's decisions about when to stop, how to regulate pace, and whether or not to exercise again. In dual physical-cognitive tasks, the mental (executive) workload generated by the cognitive task has been shown to influence these perceptions in ways that could also influence exercise-related decisions. In the present work, we intended to replicate and extend previous findings that manipulating the executive load imposed by a mental task, performed concomitantly with a submaximal cycling session, influences emotional states but not perceived exertion. Participants (experienced triathletes) performed a submaximal cycling task in two conditions with different executive demands (a two-back version of the n-back task vs. an oddball task) but equated in external physical load. Results showed that the higher executive load condition elicited more arousal and less positive valence than the lower load condition, whereas the two conditions did not differ in RPE. This experimental dissociation suggests that perceived exertion and its emotional correlates are not interchangeable, which opens the possibility that they play different roles in exercise-related decision-making.
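For concreteness, the sketch below generates a stimulus stream for a two-back task, the higher-executive-load condition mentioned above; the symbol set, target probability, and stream length are illustrative assumptions, not the study's parameters.

```python
import random

def two_back_stream(symbols="ABCD", n=30, p_target=0.3, seed=1):
    """Generate a stimulus stream for a two-back task: on target trials the
    current item repeats the one shown two positions earlier, so the
    participant must continuously update working memory."""
    rng = random.Random(seed)
    stream = [rng.choice(symbols), rng.choice(symbols)]
    targets = [False, False]
    for _ in range(n - 2):
        if rng.random() < p_target:
            stream.append(stream[-2])      # repeat the 2-back item (target)
            targets.append(True)
        else:
            stream.append(rng.choice([s for s in symbols if s != stream[-2]]))
            targets.append(False)
    return stream, targets
```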

