A NOVEL METHOD OF EEG-BASED EMOTION RECOGNITION USING NONLINEAR FEATURES VARIABILITY AND DEMPSTER–SHAFER THEORY

2018, Vol 30 (04), pp. 1850026
Author(s): Morteza Zangeneh Soroush, Keivan Maghooli, Seyed Kamaledin Setarehdan, Ali Motie Nasrabadi

Emotion recognition has been receiving increasing attention owing to the growth of brain–computer interface (BCI) systems. Moreover, estimating emotions is widely useful in fields such as psychology, neuroscience, entertainment, and e-learning. This paper aims to classify emotions from EEG signals. In emotion recognition, participants' responses to induced emotions are highly case-dependent, so the corresponding labels may be imprecise and uncertain. Furthermore, it is generally accepted that combining classifiers leads to higher accuracy and lower uncertainty. This paper introduces new methods, including setting time intervals to process EEG signals, extracting relative values of nonlinear features, and classifying them through the Dempster–Shafer theory (DST) of evidence. In this work, EEG signals taken from a reliable database are used, and the extracted features are classified by DST in order to reduce uncertainty and consequently achieve better results. First, time windows are determined based on signal complexity. Then, nonlinear features are extracted. This paper uses feature variability across time intervals instead of absolute feature values, and discriminant features are selected using a genetic algorithm (GA). Finally, the data are fed into the classification process and different classifiers are combined through DST. Ten-fold cross-validation is applied, and the results are compared with several baseline classifiers. We achieved high classification performance in terms of emotion recognition [Formula: see text]. The results show that EEG signals reflect the emotional responses of the brain and that the proposed method yields a considerably precise estimate of emotions.
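To make the fusion step concrete, the following is a minimal sketch of Dempster's rule of combination for fusing two classifiers' basic probability assignments (BPAs); the two-emotion frame, the classifier names, and the mass values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of Dempster's rule of combination for two BPAs defined over
# frozensets of hypotheses. The two-emotion frame and the example masses are
# illustrative assumptions, not the authors' implementation.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments via Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("Total conflict: BPAs cannot be combined")
    # Normalize by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Example: two classifiers assign mass to {positive}, {negative}, and the full frame
frame = frozenset({"positive", "negative"})
m_clf1 = {frozenset({"positive"}): 0.6, frozenset({"negative"}): 0.1, frame: 0.3}
m_clf2 = {frozenset({"positive"}): 0.5, frozenset({"negative"}): 0.2, frame: 0.3}
print(dempster_combine(m_clf1, m_clf2))
```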

2019, Vol 127, pp. 34-45
Author(s): Morteza Zangeneh Soroush, Keivan Maghooli, Seyed Kamaledin Setarehdan, Ali Motie Nasrabadi

2021, Vol 38 (6), pp. 1689-1698
Author(s): Suat Toraman, Ömer Osman Dursun

Human emotion recognition from electroencephalographic (EEG) signals with machine learning methods has become a subject of great interest to researchers. Although it is simple to define emotions that are expressed physically, such as through speech, facial expressions, and gestures, it is more difficult to define psychological emotions that are expressed internally. The most important stimuli for revealing inner emotions are aural and visual. In this study, EEG signals recorded under both aural and visual stimuli were examined, and emotions were evaluated with both binary and multi-class emotion recognition models. A general emotion recognition model was proposed for non-subject-based classification, and, unlike previous studies, subject-based testing was also performed for the first time in the literature. Capsule Networks, a recent neural network model, were used for binary and multi-class emotion recognition. In the proposed method, a novel fusion strategy was introduced for binary-class emotion recognition, and the model was tested on the GAMEEMO dataset. Binary-class emotion recognition achieved a classification accuracy about 10% higher than that reported in other studies in the literature. Based on these findings, we suggest that the proposed method will bring a different perspective to emotion recognition.
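For readers unfamiliar with Capsule Networks, the sketch below shows only the "squash" nonlinearity that capsule layers use to turn pose vectors into length-encoded probabilities; it is a small illustrative piece under that assumption, not the authors' full binary/multi-class model or fusion strategy.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule 'squash' nonlinearity: shrinks short vectors toward zero and
    scales long vectors toward unit length while preserving direction."""
    sq_norm = np.sum(np.square(s), axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

# Example: 10 primary capsules with 8-dimensional pose vectors
caps = np.random.default_rng(0).standard_normal((10, 8))
print(np.linalg.norm(squash(caps), axis=-1))  # all resulting lengths lie in (0, 1)
```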


2012, Vol 22 (03), pp. 1250011
Author(s): U. Rajendra Acharya, S. Vinitha Sree, Subhagata Chattopadhyay, Jasjit S. Suri

Electroencephalogram (EEG) signals, which record the electrical activity of the brain, are useful for assessing the mental state of a person. Since these signals are nonlinear and non-stationary in nature, it is very difficult to extract useful information from them using conventional statistical and frequency-domain methods. Hence, applying nonlinear time series analysis to EEG signals can help in studying the dynamical nature and variability of brain signals. In this paper, we propose a Computer Aided Diagnostic (CAD) technique for the automated identification of normal and alcoholic EEG signals using nonlinear features. We first extract nonlinear features such as Approximate Entropy (ApEn), Largest Lyapunov Exponent (LLE), Sample Entropy (SampEn), and four other Higher Order Spectra (HOS) features, and then use them to train Support Vector Machine (SVM) classifiers with different kernel functions: 1st-, 2nd-, and 3rd-order polynomials and a Radial Basis Function (RBF) kernel. Our results indicate that these nonlinear measures are good discriminators of normal and alcoholic EEG signals. The SVM classifier with a first-order polynomial kernel could distinguish the two classes with an accuracy of 91.7%, sensitivity of 90%, and specificity of 93.3%. As a pre-analysis step, the EEG signals were tested for nonlinearity using surrogate data analysis, and we found a significant difference in the LLE measure between the actual data and the surrogate data.
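As an illustration of one of the listed nonlinear measures, here is a compact NumPy sketch of Approximate Entropy (ApEn) with the common choices m = 2 and r = 0.2 × standard deviation; these parameter values and the synthetic test signal are assumptions for illustration, not the settings reported in the paper.

```python
import numpy as np

def approximate_entropy(x, m=2, r=0.2):
    """Approximate Entropy (ApEn) of a 1-D signal.
    m: embedding dimension, r: tolerance as a fraction of the signal's std."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)

    def phi(m):
        # Embed the signal into overlapping m-length templates
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Fraction of templates within tolerance (self-matches included, as ApEn defines)
        c = np.mean(dist <= tol, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# Example on a short synthetic signal
rng = np.random.default_rng(0)
print(approximate_entropy(rng.standard_normal(500)))
```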


2021, Vol 21 (1)
Author(s): Ali Torabi, Mohammad Reza Daliri

Abstract
Background: Epilepsy is a neurological disorder that affects almost 50 million people, which underlines the importance of its diagnosis. Electroencephalogram (EEG) signal analysis is one of the most common methods for characterizing epilepsy; hence, various strategies have been applied to classify epileptic EEGs.
Methods: In this paper, four different nonlinear features, namely fractal dimensions computed with the Higuchi method (HFD) and the Katz method (KFD), the Hurst exponent, and the Lempel–Ziv (L-Z) complexity measure, were extracted from EEGs and their frequency sub-bands. The features were then ranked with the ReliefF algorithm and applied sequentially to three different classifiers (MLPNN, linear SVM, and RBF SVM).
Results: For the dataset used in this study, there are five classification problems, named ABCD/E, AB/CD/E, A/D/E, A/E, and D/E. In all cases, MLPNN was the most accurate classifier, with performances of 99.91%, 98.19%, 98.5%, 100%, and 99.84%, respectively.
Conclusion: The results demonstrate that KFD is the highest-ranking feature. In addition, the beta and theta sub-bands are the most important frequency bands because, in all cases, the top features were KFDs extracted from these sub-bands. Moreover, high accuracy was obtained using only these two features, which reduces the complexity of the classification.
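Since KFD turned out to be the highest-ranking feature, a short sketch of the Katz fractal dimension may help: it applies the standard formula KFD = log10(n) / (log10(n) + log10(d/L)) to a raw 1-D signal, whereas the paper also computes it per frequency sub-band; the example signals below are illustrative, not the study's data.

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D signal.
    L: total curve length, d: maximum distance from the first sample,
    n: number of steps. KFD = log10(n) / (log10(n) + log10(d / L))."""
    x = np.asarray(x, dtype=float)
    steps = np.abs(np.diff(x))
    L = steps.sum()                      # total length of the curve
    d = np.max(np.abs(x - x[0]))         # maximum distance from the first point
    n = len(steps)
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

# Example: white noise yields a noticeably higher KFD than a smooth sine
t = np.linspace(0, 1, 1000)
print(katz_fd(np.sin(2 * np.pi * 5 * t)))
print(katz_fd(np.random.default_rng(0).standard_normal(1000)))
```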


2018, Vol 14 (1)
Author(s): Morteza Zangeneh Soroush, Keivan Maghooli, Seyed Kamaledin Setarehdan, Ali Motie Nasrabadi

2020, Vol 2020, pp. 1-14
Author(s): Ayan Seal, Puthi Prem Nivesh Reddy, Pingali Chaithanya, Arramada Meghana, Kamireddy Jahnavi, ...

Human emotion recognition has been a major field of research in recent decades owing to its noteworthy academic and industrial applications. However, most state-of-the-art methods identify emotions by analyzing facial images, and emotion recognition from electroencephalogram (EEG) signals has received less attention, even though EEG signals have the advantage of capturing real emotion. Moreover, very few EEG databases are publicly available for affective computing. In this work, we present a database consisting of EEG signals of 44 volunteers, 23 of whom are female. A 32-channel CLARITY EEG traveler sensor is used to record four emotional states, namely happy, fear, sad, and neutral, while subjects watch 12 videos, with 3 videos devoted to each emotion. Each participant is mapped to the emotion they felt after watching each video. The recorded EEG signals are then used to classify the four emotions based on the discrete wavelet transform and an extreme learning machine (ELM), providing an initial benchmark of classification performance. The ELM algorithm is used for channel selection followed by sub-band selection. The proposed method performs best when features are extracted from the gamma sub-band of the FP1-F7 channel, with 94.72% accuracy. The presented database will be made available to researchers for affective-computing applications.
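The extreme learning machine used for the benchmark is conceptually simple: fixed random hidden weights plus a least-squares solve for the output weights. The sketch below is a generic ELM on synthetic features, not the authors' channel/sub-band selection pipeline; the hidden-layer size and toy data are assumptions.

```python
import numpy as np

class ELM:
    """Minimal single-hidden-layer Extreme Learning Machine: random input
    weights stay fixed, output weights are solved by least squares."""
    def __init__(self, n_hidden=128, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid activations

    def fit(self, X, Y):
        # Y is one-hot encoded: shape (n_samples, n_classes)
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ Y   # Moore-Penrose solution for output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Toy usage: random features stand in for DWT sub-band features, four emotion classes
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 32))
labels = rng.integers(0, 4, 200)
model = ELM(n_hidden=128).fit(X, np.eye(4)[labels])
print((model.predict(X) == labels).mean())   # training accuracy on toy data
```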


2022, Vol 12
Author(s): Mingxing Liu

This paper presents an in-depth study and analysis of the emotional classification of EEG neurofeedback interactive electronic music compositions using a multi-brain collaborative brain-computer interface (BCI). Building on previous research, it explores the design and performance of sound visualization in an interactive format from the perspectives of visual design and the psychology of participating users, drawing on psychology, acoustics, aesthetics, neurophysiology, and computer science. Based on the phenomenon of audiovisual association, the paper proposes a mapping model for converting sound into visual expression, grounded in how people perceive and aesthetically respond to sound, which provides a theoretical basis for the subsequent research. Building on this audio-to-visual mapping, the paper investigates how interactive sound visualization can be realized, its visual forms and formal composition, and its aesthetic style, and derives a design method for interactive sound visualization to support practice. To address the neglect of the brain's real-time, dynamic nature in traditional brain network research, dynamic brain networks are proposed for analyzing the EEG signals induced by prolonged music appreciation, during which brain connectivity changes continuously. We used mutual information on different frequency bands of EEG signals to construct dynamic brain networks, observed how the networks change over time, and used them for emotion recognition. Using the brain networks for emotion classification, we achieved a recognition rate of 67.3% over four classes, exceeding the highest rate previously reported.
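A minimal sketch of the dynamic-brain-network idea follows: mutual information between channel pairs is estimated inside sliding windows, giving one adjacency matrix per window. The histogram-based MI estimator, window length, and toy EEG array are illustrative assumptions, not the authors' exact settings or band decomposition.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information estimate between two 1-D signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def dynamic_brain_network(eeg, win=512, step=256, bins=16):
    """Slide a window over multi-channel EEG (channels x samples) and return
    one mutual-information adjacency matrix per window."""
    n_ch, n_samp = eeg.shape
    nets = []
    for start in range(0, n_samp - win + 1, step):
        seg = eeg[:, start:start + win]
        adj = np.zeros((n_ch, n_ch))
        for i in range(n_ch):
            for j in range(i + 1, n_ch):
                adj[i, j] = adj[j, i] = mutual_information(seg[i], seg[j], bins)
        nets.append(adj)
    return np.stack(nets)

# Toy usage: 8 channels, 5 seconds at 256 Hz
eeg = np.random.default_rng(0).standard_normal((8, 5 * 256))
print(dynamic_brain_network(eeg).shape)  # (n_windows, 8, 8)
```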


2021
Author(s): Gang Liu, Jing Wang

Objective. Modeling the brain as a white box is vital for investigating the brain. However, the physical properties of the human brain are unclear, so BCI algorithms using EEG signals are generally data-driven and produce black- or gray-box models. This paper presents the first EEG-based BCI algorithm (EEGBCI using Gang neurons, EEGG) that decomposes the brain into simple components with physical meaning and integrates recognition and analysis of brain activity.
Approach. Independent and interactive components of neurons or brain regions can fully describe the brain. This paper constructed a relationship frame based on independent and interactive compositions for intention recognition and analysis, using a novel dendrite module of Gang neurons. A total of 4,906 EEG recordings of left- and right-hand motor imagery (MI) from 26 subjects were obtained from GigaDB. First, EEGG's classification performance was explored through cross-subject accuracy. Second, the trained EEGG model was transformed into a relation spectrum expressing the independent and interactive components of brain regions. The relation spectrum was then verified against the known ERD/ERS phenomenon. Finally, the paper explored further BCI-based analysis of the brain that was previously unreachable.
Main results. (1) EEGG was more robust than typical "CSP+" algorithms on poor-quality data. (2) The relation spectrum reproduced the known ERD/ERS phenomenon. (3) Interestingly, EEGG showed that interactive components between brain regions suppress ERD/ERS effects on classification, which implies that generating fine hand intention requires more centralized activation in the brain.
Significance. EEGG decomposes the biological EEG-intention system into a relation spectrum inheriting the Taylor series (in analogy with the data-driven but human-readable Fourier transform and frequency spectrum), which offers a novel frame for analyzing the brain.


2021, Vol 2021, pp. 1-12
Author(s): Yu Chen, Rui Chang, Jifeng Guo

In recent years, with the continuous development of artificial intelligence and brain-computer interface technology, emotion recognition based on physiological signals, especially electroencephalogram (EEG) signals, has become a popular research topic and attracted wide attention. However, extracting effective features from EEG signals and recognizing them accurately with classifiers has become an increasingly important task. Therefore, in this paper, we propose an emotion recognition method for EEG signals based on the ensemble learning method AdaBoost. First, we consider the time-domain, time-frequency-domain, and nonlinear features related to emotion, extract them from the preprocessed EEG signals, and fuse the features into an eigenvector matrix. Then, linear discriminant analysis is used to reduce the dimensionality of the features. Next, we use the optimized feature sets to train a binary classifier based on AdaBoost. Finally, the proposed method is tested on the DEAP data set over four emotional dimensions: valence, arousal, dominance, and liking. The proposed method proves effective for emotion recognition, with a best average accuracy of 88.70% on the dominance dimension. Compared with other existing methods, the performance of the proposed method is significantly improved.
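A minimal scikit-learn sketch of the described pipeline (fused feature matrix, LDA dimensionality reduction, AdaBoost binary classification with 10-fold cross-validation) is given below; the random feature matrix, label vector, and estimator count are placeholders, not the DEAP features or the authors' exact configuration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Random features stand in for the fused time, time-frequency, and nonlinear
# feature matrix; binary labels stand in for one emotional dimension (e.g., dominance).
rng = np.random.default_rng(0)
X = rng.standard_normal((320, 60))   # 320 trials x 60 fused features (placeholder)
y = rng.integers(0, 2, 320)          # high vs. low on one dimension (placeholder)

# LDA reduces the fused features to at most (n_classes - 1) discriminant components,
# then AdaBoost performs the binary classification; evaluated with 10-fold CV.
pipe = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                     AdaBoostClassifier(n_estimators=100))
print(cross_val_score(pipe, X, y, cv=10).mean())
```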

