EEG-Based Emotion Recognition Using Entropy Features and Bayesian Optimized Random Forest

2021 ◽  
Vol 7 (2) ◽  
pp. 767-770
Author(s):  
Himanshu Kumar ◽  
Nagarajan Ganapathy ◽  
Subha D. Puthankattil ◽  
Ramakrishnan Swaminathan

Abstract Electroencephalography (EEG) based emotion recognition is a widely preferred technique due to its noninvasiveness. Frontal region-specific EEG signals have also been associated with emotional processing. Feature reduction-based optimized machine learning methods can improve the automated analysis of frontal EEG signals. In this work, an attempt is made to classify emotional states using entropy-based features and a Bayesian optimized random forest. For this, the EEG signals of the prefrontal and frontal regions (Fp1, Fp2, Fz, F3, and F4) are obtained from an online public database. The signals are decomposed into five frequency bands, namely delta (1-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (14-30 Hz), and gamma (30-45 Hz). Three entropy features, namely Dispersion Entropy (DE), Sample Entropy (SE), and Permutation Entropy (PE), are extracted and dimensionally reduced using Principal Component Analysis (PCA). The reduced features are then applied to the Bayesian optimized random forest for classification. The results show that DE in the gamma band and SE in the alpha band exhibit a statistically significant (p < 0.05) difference for classifying arousal and valence emotional states. The features selected by PCA yield an F-measure of 73.24% for arousal and 46.98% for valence emotional states. Further, the combination of all features yields a higher F-measure of 48.13% for valence emotional states. The proposed method is capable of handling multicomponent variations of frontal region-specific EEG signals. In particular, the combination of selected features could be useful for characterizing arousal and valence emotional states.
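One of the three features above, Permutation Entropy, is short enough to sketch directly. The NumPy version below is a minimal illustration; the `order` and `delay` defaults are common choices, not values reported by the authors.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal (Bandt-Pompe ordinal patterns)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # ordinal pattern (ranking of samples) for each embedded vector
    patterns = np.array([np.argsort(x[i:i + (order - 1) * delay + 1:delay])
                         for i in range(n)])
    # encode each pattern as a unique base-`order` integer and count occurrences
    codes = (patterns * order ** np.arange(order)).sum(axis=1)
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    # Shannon entropy of the pattern distribution, normalized to [0, 1]
    return float(-(p * np.log2(p)).sum() / np.log2(factorial(order)))
```

A strictly monotone signal contains a single ordinal pattern and gives 0, while white noise approaches the maximum of 1, which is what makes the feature useful for separating signal regimes.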

Author(s):  
Himanshu Kumar ◽  
Nagarajan Ganapathy ◽  
Subha D. Puthankattil ◽  
Ramakrishnan Swaminathan

Emotions are essential to human intellectual ability, shaping perception, concentration, and action. Electroencephalogram (EEG) responses from different lobes of the brain have been studied for emotion recognition. In this work, an attempt has been made to identify emotional states using time-domain features and probabilistic random forest based decision fusion. The EEG signals are collected from an online public database. The prefrontal and frontal electrodes, namely Fp1, Fp2, F3, F4, and Fz, are considered. Eleven features are extracted from each electrode and subjected to a probabilistic random forest. The resulting probabilities are fed into Dempster-Shafer (D-S) evidence theory for electrode selection using decision fusion. Results demonstrate that the suggested method is capable of classifying emotional states. The decision fusion based electrode selection appears to be most accurate (arousal F-measure = 77.9%) in classifying the emotional states. The combination of the Fp2, F3, and F4 electrodes yields higher accuracy for characterizing the arousal (65.1%) and valence (57.9%) dimensions. Thus, the proposed method can be used to select the critical electrodes for the classification of emotions.
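The fusion step rests on Dempster's rule of combination. A minimal sketch for two electrodes' class-probability outputs, treated as mass functions over singleton hypotheses, is below; the two-class frame and all numbers in the usage example are illustrative, not taken from the paper.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same singleton hypotheses with
    Dempster's rule: multiply agreeing masses, then renormalize by 1 - K,
    where K is the total mass assigned to conflicting pairs."""
    hyps = list(m1)
    conflict = sum(m1[a] * m2[b] for a in hyps for b in hyps if a != b)
    return {h: m1[h] * m2[h] / (1.0 - conflict) for h in hyps}

# two electrodes that both lean toward "high" arousal reinforce each other
fused = dempster_combine({"high": 0.8, "low": 0.2}, {"high": 0.6, "low": 0.4})
```

The combined belief in "high" (0.48/0.56, about 0.857) exceeds either electrode's individual estimate, which is the behavior that makes evidence fusion attractive for electrode selection.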


Entropy ◽  
2019 ◽  
Vol 21 (6) ◽  
pp. 609 ◽  
Author(s):  
Gao ◽  
Cui ◽  
Wan ◽  
Gu

Exploring the manifestation of emotion in electroencephalogram (EEG) signals is helpful for improving the accuracy of emotion recognition. This paper introduces novel features based on multiscale information analysis (MIA) of EEG signals for distinguishing emotional states in four dimensions based on Russell's circumplex model. The algorithms were applied to extract features from the DEAP database: a multiscale EEG complexity index in the time domain, and ensemble empirical mode decomposition enhanced energy and fuzzy entropy in the frequency domain. A support vector machine and cross validation were applied to assess classification accuracy. The classification performance of the MIA methods (accuracy = 62.01%, precision = 62.03%, recall/sensitivity = 60.51%, and specificity = 82.80%) was much higher than that of classical methods (accuracy = 43.98%, precision = 43.81%, recall/sensitivity = 41.86%, and specificity = 70.50%), which extracted comparable energy features based on a discrete wavelet transform, fractal dimension, and sample entropy. In this study, we found that emotion recognition is more associated with high frequency oscillations (51-100 Hz) of EEG signals than with low frequency oscillations (0.3-49 Hz), and that the significance of the frontal and temporal regions is higher than that of other regions. Such information has predictive power and may provide more insights into analyzing the multiscale information of high frequency oscillations in EEG signals.
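Multiscale analysis of the kind described rests on two steps: coarse-graining the signal at successive scales, and computing an entropy (e.g., sample entropy) on each coarse-grained series. A minimal NumPy sketch with illustrative parameter values (m = 2, tolerance r = 0.2 times the standard deviation) follows; it is not the authors' exact complexity index.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (the multiscale step)."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy: -log of the ratio of (m+1)-length to m-length template
    matches under a Chebyshev-distance tolerance, excluding self-matches."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def matches(length):
        emb = np.array([x[i:i + length] for i in range(n - m)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return ((d <= r).sum() - len(emb)) / 2  # subtract the diagonal (self-matches)

    return float(-np.log(matches(m + 1) / matches(m)))
```

A perfectly periodic signal yields sample entropy 0 (every m-length match extends to an m+1-length match), while irregular signals score higher; running `sample_entropy(coarse_grain(x, s))` over scales s = 1, 2, 3, ... gives the multiscale profile.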


2020 ◽  
Vol 2020 ◽  
pp. 1-19
Author(s):  
Nazmi Sofian Suhaimi ◽  
James Mountstephens ◽  
Jason Teo

Emotions are fundamental for human beings and play an important role in human cognition. Emotion is commonly associated with logical decision making, perception, human interaction, and, to a certain extent, human intelligence itself. With the growing interest of the research community in establishing meaningful “emotional” interactions between humans and computers, reliable and deployable solutions for identifying human emotional states are required. Recent developments in using electroencephalography (EEG) for emotion recognition have garnered strong interest from the research community, as the latest consumer-grade wearable EEG solutions can provide a cheap, portable, and simple means of identifying emotions. Since the last comprehensive review covered the years 2009 to 2016, this paper updates the progress of emotion recognition using EEG signals from 2016 to 2019. This state-of-the-art review focuses on the elements of emotion stimuli type and presentation approach, study size, EEG hardware, machine learning classifiers, and classification approach. From this review, we suggest several future research opportunities, including a different approach to presenting the stimuli in the form of virtual reality (VR). To this end, an additional section devoted specifically to reviewing only VR studies within this research domain is presented as the motivation for this proposed new approach using VR as the stimuli presentation device. This review paper is intended to be useful for the research community working on emotion recognition using EEG signals as well as for those who are venturing into this field of research.


2020 ◽  
Vol 49 (3) ◽  
pp. 285-298
Author(s):  
Jian Zhang ◽  
Yihou Min

Human emotion recognition is of vital importance to realizing human-computer interaction (HCI), and with the development of brain-computer interfaces (BCI), multichannel electroencephalogram (EEG) signals have gradually replaced other physiological signals as the main basis of emotion recognition research. However, the accuracy of emotional classification based on EEG signals under video stimulation is not stable, which may be related to the characteristics of the EEG signals before stimulation. In this study, we extract the change of Differential Entropy (DE) before and after stimulation, based on the wavelet packet transform (WPT), to identify individual emotional states. Using the EEG emotion database DEAP, we divide the experimental EEG data in the database equally into 15 sets and extract their differential entropy on the basis of the WPT. We then calculate the DE change of each separated EEG signal set. Finally, we divide emotion into four categories in the two-dimensional valence-arousal emotional space by combining these features with an ensemble algorithm, Random Forest (RF). The simulation results show that the WPT-RF model established by this method greatly improves the recognition rate of EEG signals, with an average classification accuracy of 87.3%. In addition, when we use the WPT-RF model to train individual subjects, the classification accuracy reaches 97.7%.
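For a band-limited EEG segment modeled as Gaussian, differential entropy has a closed form in the segment variance, which is what makes the pre/post-stimulation change cheap to compute. The sketch below uses that Gaussian assumption and plain NumPy; the wavelet-packet decomposition into sub-bands is assumed done upstream and is not reproduced here.

```python
import numpy as np

def differential_entropy(x):
    """DE of a signal under a Gaussian assumption: 0.5 * ln(2 * pi * e * variance)."""
    x = np.asarray(x, dtype=float)
    return float(0.5 * np.log(2.0 * np.pi * np.e * x.var()))

def de_change(pre, post):
    """Change in DE from a pre-stimulation to a post-stimulation segment."""
    return differential_entropy(post) - differential_entropy(pre)
```

Since DE depends only on the log of the variance under this model, the DE change is effectively a log-ratio of band powers before and after stimulation.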


Computers ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 95
Author(s):  
Rania Alhalaseh ◽  
Suzan Alasasfeh

Many scientific studies have been concerned with building automatic systems to recognize emotions, and such systems usually rely on brain signals. These studies have shown that brain signals can be used to classify many emotional states. This process is considered difficult, especially since the brain’s signals are not stable. Human emotions are generated as a result of reactions to different emotional states, which affect brain signals. Thus, the performance of emotion recognition systems based on brain signals depends on the efficiency of the algorithms used to extract features, the feature selection algorithm, and the classification process. Recently, the study of electroencephalography (EEG) signals has received much attention due to the availability of several standard databases, especially since brain signal recording devices, including wireless ones, have become available in the market at reasonable prices. This work aims to present an automated model for identifying emotions based on EEG signals. The proposed model focuses on creating an effective method that combines the basic stages of EEG signal handling and feature extraction. Different from previous studies, the main contribution of this work lies in using empirical mode decomposition/intrinsic mode functions (EMD/IMF) and variational mode decomposition (VMD) for signal processing. Although EMD/IMF and VMD methods are widely used in biomedical and disease-related studies, they are not commonly utilized in emotion recognition; in other words, the methods used in the signal processing stage of this work differ from those used in the literature. After the signal processing stage, two well-known techniques were used in the feature extraction stage: entropy and Higuchi’s fractal dimension (HFD). Finally, in the classification stage, four classification methods were used: naïve Bayes, k-nearest neighbor (k-NN), convolutional neural network (CNN), and decision tree (DT). To evaluate the performance of the proposed model, experiments were conducted on the common DEAP database using several evaluation measures, including accuracy, specificity, and sensitivity. The experiments showed the efficiency of the proposed method; a 95.20% accuracy was achieved using the CNN-based method.
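The evaluation measures named above follow directly from the binary confusion matrix. A minimal sketch (the counts in the usage example are made up, not results from the paper):

```python
def binary_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall), and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of actual positives caught
    specificity = tn / (tn + fp)   # fraction of actual negatives rejected
    return accuracy, sensitivity, specificity

# hypothetical counts for one binary emotion split
acc, sens, spec = binary_metrics(tp=40, tn=45, fp=5, fn=10)
```

Reporting sensitivity and specificity alongside accuracy matters when the two emotional classes are imbalanced, since accuracy alone can mask a classifier that ignores the minority class.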


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Mehmet Akif Ozdemir ◽  
Murside Degirmenci ◽  
Elif Izci ◽  
Aydin Akan

Abstract The emotional state of people plays a key role in physiological and behavioral human interaction. Emotional state analysis involves many fields such as neuroscience, cognitive science, and biomedical engineering, because the parameters of interest contain the complex neuronal activities of the brain. Electroencephalogram (EEG) signals are processed to communicate brain signals to external systems and make predictions about emotional states. This paper proposes a novel method for emotion recognition based on deep convolutional neural networks (CNNs), used to classify the Valence, Arousal, Dominance, and Liking emotional states. A novel approach is proposed for emotion recognition with time series of multi-channel EEG signals from the Database for Emotion Analysis using Physiological Signals (DEAP). We propose a new approach to emotional state estimation utilizing CNN-based classification of multi-spectral topology images obtained from EEG signals. In contrast to most EEG-based approaches, which discard the spatial information of EEG signals, converting EEG signals into a sequence of multi-spectral topology images preserves their temporal, spectral, and spatial information. The deep recurrent convolutional network is trained to learn important representations from a sequence of three-channel topographical images. We achieved test accuracies of 90.62% for negative versus positive Valence, 86.13% for high versus low Arousal, 88.48% for high versus low Dominance, and 86.23% for like versus unlike. Evaluations of this method on the emotion recognition problem revealed significant improvements in classification accuracy compared with other studies using deep neural networks (DNNs) and one-dimensional CNNs.
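The core data-preparation idea, placing per-band EEG features at 2D scalp locations to form a multi-channel image for the CNN, can be sketched crudely with nearest-pixel placement. The grid coordinates, grid size, and three-band choice below are illustrative assumptions; the actual method interpolates values over the scalp rather than dropping them at single pixels.

```python
import numpy as np

# hypothetical coarse grid positions (row, col) for a few frontal electrodes
ELECTRODE_POS = {"Fp1": (0, 1), "Fp2": (0, 3), "F3": (1, 1), "Fz": (1, 2), "F4": (1, 3)}

def band_power_image(band_powers, shape=(4, 5)):
    """Build an (H, W, 3) image whose three channels hold per-band power
    (e.g. theta/alpha/beta) dropped at each electrode's grid position."""
    img = np.zeros(shape + (3,))
    for name, powers in band_powers.items():
        row, col = ELECTRODE_POS[name]
        img[row, col, :] = powers
    return img

# one frame of the sequence fed to the recurrent CNN (values are made up)
frame = band_power_image({"Fz": (1.0, 2.0, 3.0), "F3": (0.5, 0.5, 0.5)})
```

Stacking such frames over consecutive time windows yields the image sequence the recurrent convolutional network consumes, which is how spatial, spectral, and temporal information are all preserved.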


2021 ◽  
Vol 19 (6) ◽  
pp. 584-602
Author(s):  
Lucian Jose Gonçales ◽  
Kleinner Farias ◽  
Lucas Kupssinskü ◽  
Matheus Segalotto

EEG signals are a relevant indicator for measuring aspects related to human factors in Software Engineering. EEG is used in software engineering to train machine learning techniques for a wide range of applications, including classifying task difficulty and developers’ level of experience. The EEG signal contains noise such as abnormal readings, electrical interference, and eye movements, which are usually not of interest to the analysis and therefore reduce the precision of machine learning techniques. However, research in software engineering has not demonstrated the effectiveness of applying these filters to EEG signals. The objective of this work is to analyze the effectiveness of filters on EEG signals in the software engineering context. As the literature has not focused on classifying developers’ code comprehension, this study analyzes the effectiveness of applying EEG filters when training a machine learning technique to classify developers’ code comprehension. A Random Forest (RF) classifier was trained with filtered EEG signals to classify developers’ code comprehension, and another random forest classifier was trained with unfiltered EEG data. Both models were trained using 10-fold cross-validation. This work measures the classifiers’ effectiveness using the F-measure metric, and uses the t-test, Wilcoxon, and Mann-Whitney U tests to analyze the difference in effectiveness (F-measure) between the classifier trained with filtered EEG and the classifier trained with unfiltered EEG. The tests showed a significant difference after applying EEG filters when classifying developers’ code comprehension with the random forest classifier. The conclusion is that the use of EEG filters significantly improves the effectiveness of classifying code comprehension using the random forest technique.
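The effectiveness measure used throughout, the F-measure, is the harmonic mean of precision and recall computed per fold. A minimal sketch from confusion-matrix counts (the counts in the example are made up):

```python
def f_measure(tp, fp, fn):
    """F-measure (F1): harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# one hypothetical fold: 8 true positives, 2 false positives, 2 false negatives
fold_score = f_measure(tp=8, fp=2, fn=2)
```

Collecting one such score per cross-validation fold for each classifier gives the two paired samples that the t-test, Wilcoxon, and Mann-Whitney U tests then compare.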


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Ning Zhuang ◽  
Ying Zeng ◽  
Li Tong ◽  
Chi Zhang ◽  
Hanming Zhang ◽  
...  

This paper introduces a method for feature extraction and emotion recognition based on empirical mode decomposition (EMD). Using EMD, EEG signals are decomposed into Intrinsic Mode Functions (IMFs) automatically. Multidimensional information of the IMFs is utilized as features: the first difference of the time series, the first difference of phase, and the normalized energy. The performance of the proposed method is verified on a publicly available emotional database. The results show that the three features are effective for emotion recognition. The role of each IMF is investigated, and we find that the high frequency component IMF1 has a significant effect on detecting different emotional states. The informative electrodes based on the EMD strategy are analyzed. In addition, the classification accuracy of the proposed method is compared with several classical techniques, including fractal dimension (FD), sample entropy, differential entropy, and the discrete wavelet transform (DWT). Experimental results on the DEAP dataset demonstrate that our method can improve emotion recognition performance.
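Two of the three IMF features named above, the first difference of the time series and the normalized energy, are one-liners in NumPy; the sketch below assumes the IMFs have already been produced by EMD, and omits the phase feature (which needs a Hilbert transform).

```python
import numpy as np

def first_difference(imf):
    """Mean absolute first difference of one IMF time series."""
    imf = np.asarray(imf, dtype=float)
    return float(np.mean(np.abs(np.diff(imf))))

def normalized_energies(imfs):
    """Energy of each IMF divided by the total energy across all IMFs."""
    energies = np.array([np.sum(np.asarray(imf, dtype=float) ** 2) for imf in imfs])
    return energies / energies.sum()
```

The normalized energy expresses how much of the signal's total power each IMF carries, which is why a dominant high-frequency IMF1 shows up clearly in this feature.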


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5218 ◽  
Author(s):  
Muhammad Adeel Asghar ◽  
Muhammad Jamil Khan ◽  
Fawad ◽  
Yasar Amin ◽  
Muhammad Rizwan ◽  
...  

Much attention has been paid to the recognition of human emotions with the help of electroencephalogram (EEG) signals based on machine learning technology. Recognizing emotions is a challenging task due to the non-linear properties of the EEG signal. This paper presents an advanced signal processing method using a deep neural network (DNN) for emotion recognition based on EEG signals. The spectral and temporal components of the raw EEG signal are first retained in a 2D spectrogram before feature extraction. The pre-trained AlexNet model is used to extract the raw features from the 2D spectrogram for each channel. To reduce the feature dimensionality, a spatial and temporal bag of deep features (BoDF) model is proposed. A series of vocabularies consisting of 10 cluster centers per class is calculated using the k-means clustering algorithm. Lastly, the emotion of each subject is represented using a histogram of the vocabulary set collected from the raw features of a single channel. Features extracted from the proposed BoDF model have considerably smaller dimensions. The proposed model achieves better classification accuracy than recently reported work when validated on the SJTU SEED and DEAP data sets. For optimal classification performance, we use a support vector machine (SVM) and k-nearest neighbor (k-NN) to classify the extracted features for the different emotional states of the two data sets. The BoDF model achieves 93.8% accuracy on the SEED data set and 77.4% accuracy on the DEAP data set, which is more accurate than other state-of-the-art methods of human emotion recognition.
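The histogram step of the bag-of-deep-features idea, assigning each deep-feature vector to its nearest vocabulary center and counting the assignments, can be sketched as follows. The k-means clustering that produces the centers is assumed done upstream, and the toy 2-D vectors in the example are illustrative (real AlexNet features are much higher-dimensional).

```python
import numpy as np

def bodf_histogram(features, centers):
    """Normalized histogram of nearest-vocabulary-center assignments
    for a set of deep-feature vectors (one row per vector)."""
    features = np.asarray(features, dtype=float)
    centers = np.asarray(centers, dtype=float)
    # Euclidean distance from every feature vector to every center
    dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# three toy feature vectors against a two-word vocabulary
h = bodf_histogram([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0]], [[0.0, 0.0], [1.0, 1.0]])
```

The resulting fixed-length histogram replaces the variable-size set of raw deep features, which is what shrinks the dimensionality before the SVM or k-NN classifier.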


PeerJ ◽  
2018 ◽  
Vol 6 ◽  
pp. e4817 ◽  
Author(s):  
Quan Liu ◽  
Li Ma ◽  
Shou-Zen Fan ◽  
Maysam F. Abbod ◽  
Jiann-Shing Shieh

Estimating the depth of anaesthesia (DoA) during operations has always been a challenging issue due to the underlying complexity of the brain's mechanisms. Electroencephalogram (EEG) signals are undoubtedly the most widely used signals for measuring DoA. In this paper, a novel EEG-based index is proposed to evaluate DoA for 24 patients receiving general anaesthesia with different levels of unconsciousness. The Sample Entropy (SampEn) algorithm was utilised to capture the chaotic features of the signals. After calculating SampEn from the EEG signals, Random Forest was utilised to develop learning regression models with the Bispectral index (BIS) as the target. Correlation coefficient, mean absolute error, and area under the curve (AUC) were used to verify the perioperative performance of the proposed method. Validation comparisons with typical nonstationary signal analysis methods (i.e., recurrence analysis and permutation entropy) and regression methods (i.e., neural networks and support vector machines) were conducted. To further verify the accuracy and validity of the proposed methodology, the data were divided into four unconsciousness-level groups on the basis of BIS levels, and analysis of variance (ANOVA) was applied to the corresponding index (i.e., the regression output). Results indicate that the correlation coefficient improved to 0.72 ± 0.09 after filtering and to 0.90 ± 0.05 after regression, from an initial value of 0.51 ± 0.17. Similarly, the final mean absolute error declined dramatically to 5.22 ± 2.12. In addition, the ultimate AUC increased to 0.98 ± 0.02, and the ANOVA indicates that each of the four groups of different anaesthetic levels demonstrated a significant difference from the nearest levels. Furthermore, the Random Forest output was extensively linear in relation to BIS, giving better DoA prediction accuracy. In conclusion, the proposed method provides a concrete basis for monitoring patients' anaesthetic level during surgeries.
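Two of the reported evaluation quantities, the Pearson correlation against BIS and the mean absolute error, can be sketched directly in NumPy; the index and BIS values in the example are made up for illustration.

```python
import numpy as np

def evaluate_doa_index(index, bis):
    """Pearson correlation and mean absolute error between a predicted
    anaesthesia index and the BIS reference series."""
    index = np.asarray(index, dtype=float)
    bis = np.asarray(bis, dtype=float)
    r = float(np.corrcoef(index, bis)[0, 1])
    mae = float(np.mean(np.abs(index - bis)))
    return r, mae

# hypothetical index that tracks BIS with a constant offset of 2
r, mae = evaluate_doa_index([40.0, 50.0, 60.0], [42.0, 52.0, 62.0])
```

Reporting both quantities together is informative: correlation captures whether the index tracks the shape of the BIS trajectory, while MAE captures the absolute calibration error in BIS units.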

