Emotion Classification Based On EEG Signals In a Stable Environment

Author(s):  
Prahallad Kumar Sahu ◽  
Ramesh kumar Sahoo ◽  
Nilambar Sethi ◽  
Srinivas Sethi
2019 ◽  
Vol 9 (11) ◽  
pp. 326 ◽  
Author(s):  
Hong Zeng ◽  
Zhenhua Wu ◽  
Jiaming Zhang ◽  
Chen Yang ◽  
Hua Zhang ◽  
...  

Deep learning (DL) methods have been used increasingly widely, for example in speech and image recognition. However, designing a DL model that classifies electroencephalogram (EEG) signals accurately and efficiently remains a challenge, mainly because EEG signals differ significantly between subjects and vary over time within a single subject, and are non-stationary, highly random, and have a low signal-to-noise ratio. SincNet is an efficient classifier for speaker recognition, but it has some drawbacks when applied to EEG signal classification. In this paper, we propose an improved SincNet-based classifier, SincNet-R, which consists of three convolutional layers and three deep neural network (DNN) layers. We then use SincNet-R to evaluate classification accuracy and robustness on emotional EEG signals. Comparisons with the original SincNet model and with traditional classifiers such as CNN, LSTM, and SVM show that our proposed SincNet-R model achieves higher classification accuracy and better robustness.
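The key idea SincNet-R inherits from SincNet is that each first-layer filter is a band-pass FIR kernel parametrized only by its two cutoff frequencies, built as the difference of two windowed sinc low-pass filters. A minimal sketch of such a kernel (not the authors' code; the filter length, sampling rate, and alpha-band edges below are assumed illustrative values):

```python
import numpy as np

def sinc_bandpass_kernel(f_low, f_high, kernel_len, fs=128.0):
    """SincNet-style band-pass FIR kernel parametrized only by its
    cutoff frequencies: difference of two low-pass sinc filters,
    Hamming-windowed and normalized."""
    # symmetric time axis in seconds, centered on zero
    t = (np.arange(kernel_len) - (kernel_len - 1) / 2) / fs
    # np.sinc is the normalized sinc sin(pi x) / (pi x), so
    # 2 f c sinc(2 f c t) is the ideal low-pass impulse response
    low = 2 * f_low * np.sinc(2 * f_low * t)
    high = 2 * f_high * np.sinc(2 * f_high * t)
    kernel = (high - low) * np.hamming(kernel_len)
    return kernel / np.abs(kernel).sum()

# example: an alpha-band (8-13 Hz) filter at 128 Hz sampling rate
kernel = sinc_bandpass_kernel(8.0, 13.0, 129)
```

In the learnable layer only `f_low` and `f_high` are trained per filter, which is why SincNet-style front ends have far fewer first-layer parameters than a free convolution.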


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5135
Author(s):  
Ngoc-Dau Mai ◽  
Boon-Giin Lee ◽  
Wan-Young Chung

In this research, we develop an affective computing method based on machine learning for emotion recognition, using a wireless protocol and a custom-designed wearable electroencephalography (EEG) device. The system collects EEG signals through an eight-electrode placement on the scalp: two electrodes on the frontal lobe and the other six on the temporal lobe. We performed experiments on eight subjects while they watched emotive videos. Six entropy measures were employed to extract suitable features from the EEG signals. Next, we evaluated our proposed models using three popular classifiers for emotion classification: a support vector machine (SVM), a multi-layer perceptron (MLP), and a one-dimensional convolutional neural network (1D-CNN), under both subject-dependent and subject-independent strategies. Our experimental results showed that the highest average accuracies in the subject-dependent and subject-independent cases were 85.81% and 78.52%, respectively, both achieved with the combination of the sample entropy measure and the 1D-CNN. Moreover, through electrode selection, our study identifies the T8 position (above the right ear) in the temporal lobe as the most critical channel among the proposed measurement positions for emotion classification. These results demonstrate the feasibility and efficiency of our proposed EEG-based affective computing method for emotion recognition in real-world applications.
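Sample entropy, the measure that paired best with the 1D-CNN here, quantifies signal irregularity as the negative log of the conditional probability that subsequences matching for m points also match for m+1 points. A minimal sketch of the standard SampEn definition (not the authors' implementation; `m=2` and a tolerance of 0.2 times the signal's standard deviation are conventional defaults):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r): -log of the ratio of the number
    of template pairs matching for m+1 points to those matching for
    m points, with Chebyshev tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(mm):
        # all templates of length mm that leave room for length m+1,
        # so the m and m+1 counts are over the same template set
        tmpl = np.array([x[i:i + mm] for i in range(n - m)])
        count = 0
        for i in range(len(tmpl) - 1):
            # Chebyshev distance to every later template (no self-match)
            d = np.max(np.abs(tmpl[i + 1:] - tmpl[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b = count_matches(m)      # matches of length m
    a = count_matches(m + 1)  # matches of length m + 1
    return -np.log(a / b) if a and b else float("inf")
```

A regular signal such as a sinusoid yields a low SampEn, while white noise yields a high one, which is what makes it a usable complexity feature for EEG windows.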


2019 ◽  
Vol 13 (3) ◽  
pp. 375-380 ◽  
Author(s):  
Anala Hari Krishna ◽  
Aravapalli Bhavya Sri ◽  
Kurakula Yuva Venkata Sai Priyanka ◽  
Sachin Taran ◽  
Varun Bajaj

2020 ◽  
Vol 10 (10) ◽  
pp. 672 ◽  
Author(s):  
Choong Wen Yean ◽  
Wan Khairunizam Wan Ahmad ◽  
Wan Azani Mustafa ◽  
Murugappan Murugappan ◽  
Yuvaraj Rajamanickam ◽  
...  

Emotion assessment in stroke patients gives physiotherapists meaningful information for identifying the appropriate treatment method. This study aimed to classify the emotions of stroke patients by applying bispectrum features of electroencephalogram (EEG) signals. EEG signals from three groups of subjects, namely stroke patients with left brain damage (LBD), stroke patients with right brain damage (RBD), and normal controls (NC), were analyzed for six different emotional states. The estimated bispectra, mapped as contour plots, show different patterns of nonlinearity in the EEG signals for the different emotional states. Bispectrum features were extracted from the alpha (8–13 Hz), beta (13–30 Hz), and gamma (30–49 Hz) bands, respectively. The k-nearest neighbor (KNN) and probabilistic neural network (PNN) classifiers were used to classify the six emotions in the LBD, RBD, and NC groups. The bispectrum features showed statistical significance for all three groups. The beta band was the best-performing EEG frequency sub-band for emotion classification, and the combination of the alpha to gamma bands provided the highest classification accuracy with both the KNN and PNN classifiers. Sadness recorded the highest classification accuracy: 65.37% in the LBD, 71.48% in the RBD, and 75.56% in the NC groups.
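The bispectrum underlying these features is the third-order spectrum B(f1, f2) = E[X(f1) X(f2) X*(f1+f2)], which vanishes for linear Gaussian signals and therefore exposes exactly the nonlinearity (quadratic phase coupling) the contour plots visualize. A minimal direct (FFT-based) estimator follows; the segment length and Hann windowing are assumed choices for the sketch, not the authors' exact estimator:

```python
import numpy as np

def bispectrum(x, seg_len=64):
    """Direct bispectrum estimate, averaged over non-overlapping
    segments: B(f1, f2) = E[X(f1) X(f2) conj(X(f1 + f2))]."""
    x = np.asarray(x, dtype=float)
    n_seg = len(x) // seg_len
    half = seg_len // 2
    acc = np.zeros((half, half), dtype=complex)
    for s in range(n_seg):
        seg = x[s * seg_len:(s + 1) * seg_len]
        seg = seg - seg.mean()                 # remove DC per segment
        X = np.fft.fft(seg * np.hanning(seg_len))
        for f1 in range(half):
            for f2 in range(half):
                if f1 + f2 < seg_len:
                    acc[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return acc / n_seg
```

For a signal containing tones at f1, f2, and f1+f2 with coupled phases, |B| peaks at (f1, f2); band features like those used here are then summary statistics (e.g., magnitudes, entropies) over the relevant frequency region.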


Author(s):  
Jingxia Chen ◽  
Dongmei Jiang ◽  
Yanning Zhang

To effectively reduce day-to-day fluctuations and inter-subject differences in brain electroencephalogram (EEG) signals and to improve the accuracy and stability of EEG emotion classification, a new EEG feature extraction method based on the common spatial pattern (CSP) and wavelet packet decomposition (WPD) is proposed. For five days of emotion-related EEG data from 12 subjects, the CSP algorithm is first used to project the raw EEG data into an optimal subspace, extracting discriminative features by maximizing the Kullback-Leibler (KL) divergence between the two categories of EEG data. The WPD algorithm is then used to decompose the EEG signals into related features in the time-frequency domain. Finally, four state-of-the-art classifiers, namely Bagging tree, SVM, linear discriminant analysis, and Bayesian linear discriminant analysis, are used to perform binary emotion classification. The experimental results show that with CSP spatial filtering, emotion classification on the WPD features extracted with the bior3.3 wavelet basis achieves the best accuracy of 0.862, which is 29.3% higher than that of the power spectral density (PSD) feature without CSP preprocessing, 23% higher than that of the PSD feature with CSP preprocessing, 1.9% higher than that of the WPD feature extracted with the bior3.3 wavelet basis without CSP preprocessing, and 3.2% higher than that of the WPD feature extracted with the rbio6.8 wavelet basis without CSP preprocessing. Our proposed method can effectively reduce the variance and non-stationarity of cross-day EEG signals, extract emotion-related features, and improve the accuracy and stability of cross-day EEG emotion classification. It is valuable for the development of robust emotional brain-computer interface applications.
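For context, a standard CSP implementation solves a generalized eigenproblem on the two class covariance matrices, and the eigenvectors at both ends of the spectrum give the most discriminative spatial filters. The sketch below shows this common variance-ratio formulation (not necessarily the KL-divergence-based variant used in the paper, although the two coincide under Gaussian assumptions):

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial pattern filters for two classes of EEG trials.
    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns 2 * n_pairs spatial filters (rows) whose projections have
    maximal variance for one class and minimal for the other."""
    def mean_cov(trials):
        covs = []
        for t in trials:
            c = t @ t.T
            covs.append(c / np.trace(c))   # trace-normalize each trial
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # whiten the composite covariance, then diagonalize class A in the
    # whitened space; sorting the eigenvalues puts the most
    # discriminative directions at both ends of the spectrum
    evals, evecs = np.linalg.eigh(ca + cb)
    white = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, v = np.linalg.eigh(white @ ca @ white.T)
    order = np.argsort(d)[::-1]
    w = v[:, order].T @ white
    return np.vstack([w[:n_pairs], w[-n_pairs:]])
```

The log-variances of the filtered trials are the usual CSP features fed to a downstream classifier.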


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Kit Hwa Cheah ◽  
Humaira Nisar ◽  
Vooi Voon Yap ◽  
Chen-Yi Lee ◽  
G. R. Sinha

Emotion is a crucial aspect of human health, and emotion recognition systems serve important roles in the development of neurofeedback applications. Most emotion recognition methods proposed in previous research take predefined EEG features as input to the classification algorithms. This paper investigates the less studied approach of feeding plain EEG signals to the classifier, with residual networks (ResNet) as the classifier of interest. Having excelled at automated hierarchical feature extraction in raw-data domains with vast numbers of samples (e.g., image processing), ResNet is potentially promising for EEG as the amount of publicly available EEG data keeps increasing. The architecture of the original ResNet, designed for image processing, is restructured for optimal performance on EEG signals, and the arrangement of the convolutional kernel dimensions is shown to largely affect the model's performance on EEG signal processing. The study is conducted on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED), with our proposed ResNet18 architecture achieving 93.42% accuracy on the 3-class emotion classification task, compared to 87.06% for the original ResNet18, while reducing the model parameter count by 52.22%. We also compared the importance of different subsets of EEG channels, from a total of 62 channels, for emotion recognition. The channels placed near the anterior poles of the temporal lobes appeared to be the most emotionally relevant, which agrees with the location of emotion-processing brain structures such as the insular cortex and the amygdala.
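The core of any ResNet variant is the identity-shortcut residual block, out = ReLU(x + F(x)); restructuring it for EEG mainly means choosing 1-D kernels that run along the time axis of a (channels × samples) window. A dependency-free numpy sketch of such a block follows (the kernel size and channel count are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def conv1d(x, kernels, pad):
    """'Same'-padded 1-D convolution in the deep-learning convention
    (cross-correlation): x is (c_in, t), kernels is (c_out, c_in, k)."""
    c_out, c_in, k = kernels.shape
    xp = np.pad(x, ((0, 0), (pad, pad)))
    t = x.shape[1]
    out = np.zeros((c_out, t))
    for o in range(c_out):
        for i in range(c_in):
            for j in range(k):
                out[o] += kernels[o, i, j] * xp[i, j:j + t]
    return out

def residual_block(x, k1, k2):
    """Identity-shortcut residual block on a multichannel EEG window:
    out = ReLU(x + conv(ReLU(conv(x)))), time dimension preserved."""
    h = np.maximum(conv1d(x, k1, k1.shape[2] // 2), 0.0)
    h = conv1d(h, k2, k2.shape[2] // 2)
    return np.maximum(x + h, 0.0)
```

Because the shortcut carries the input through unchanged, stacks of such blocks stay trainable at depth; swapping the 2-D image kernels for these 1-D temporal kernels is the kind of kernel-dimension rearrangement the abstract reports as decisive for EEG.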


2021 ◽  
Vol 15 ◽  
Author(s):  
Jing Chen ◽  
Haifeng Li ◽  
Lin Ma ◽  
Hongjian Bo ◽  
Frank Soong ◽  
...  

Recently, emotion classification from electroencephalogram (EEG) data has attracted much attention. As EEG is an unsteady and rapidly changing voltage signal, the features extracted from EEG usually change dramatically, whereas emotion states change gradually; most existing feature extraction approaches do not account for this difference. Microstate analysis can capture important spatio-temporal properties of EEG signals while reducing the fast-changing signal to a sequence of prototypical topographical maps. Although microstate analysis has been widely used to study brain function, few studies have used it to analyze how the brain responds to emotional auditory stimuli. In this study, we propose a novel feature extraction method based on EEG microstates for emotion recognition. Determining the optimal number of microstates automatically is a challenge when applying microstate analysis to emotion; we therefore propose dual-threshold-based atomize and agglomerate hierarchical clustering (DTAAHC) to determine the optimal number of microstate classes automatically. By using the proposed method to model the temporal dynamics of the auditory emotion process, we extracted microstate characteristics as novel temporospatial features to improve emotion recognition from EEG signals. We evaluated the method on two datasets. For the public music-evoked Dataset for Emotion Analysis using Physiological signals, microstate analysis identified 10 microstates that together explained around 86% of the data at global field power peaks, and emotion recognition accuracy reached 75.8% for valence and 77.1% for arousal using microstate sequence characteristics as features, outperforming the feature sets of previous studies. For the speech-evoked EEG dataset, microstate analysis identified nine microstates that together explained around 85% of the data, and accuracy reached 74.2% for valence and 72.3% for arousal. These experimental results indicate that microstate characteristics can effectively improve the performance of emotion recognition from EEG signals.
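For context, the conventional microstate pipeline that DTAAHC builds on clusters the channel topographies at global field power (GFP) peaks with a polarity-invariant "modified k-means" procedure and scores the result by global explained variance (the "explained around 86% of the data" figure above). The sketch below shows that baseline pipeline with a simple deterministic initialization, not DTAAHC itself:

```python
import numpy as np

def gfp(eeg):
    """Global field power: the spatial standard deviation across
    channels at each time sample. eeg: (n_channels, n_samples)."""
    return eeg.std(axis=0)

def microstate_maps(eeg, n_states=4, n_iter=30):
    """Polarity-invariant clustering (modified k-means) of the EEG
    topographies at GFP peaks into n_states prototype maps.
    Returns (maps, global explained variance of the peak maps)."""
    g = gfp(eeg)
    # local maxima of the GFP curve mark moments of stable topography
    peaks = np.where((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:]))[0] + 1
    v = eeg[:, peaks] / np.linalg.norm(eeg[:, peaks], axis=0)
    # deterministic farthest-point init: start from the first peak map,
    # then repeatedly add the map least correlated with those chosen
    maps = v[:, [0]]
    while maps.shape[1] < n_states:
        worst = np.abs(maps.T @ v).max(axis=0).argmin()
        maps = np.column_stack([maps, v[:, worst]])
    for _ in range(n_iter):
        labels = np.abs(maps.T @ v).argmax(axis=0)  # polarity-invariant
        for k in range(n_states):
            sel = v[:, labels == k]
            if sel.size:
                # principal axis of the assigned topographies
                maps[:, k] = np.linalg.svd(sel @ sel.T)[2][0]
    best = np.abs(maps.T @ v).max(axis=0)
    gev = np.sum((best * g[peaks]) ** 2) / np.sum(g[peaks] ** 2)
    return maps, gev
```

The label sequence over time (microstate durations, occurrences, transition probabilities) is the kind of temporospatial feature vector the abstract feeds to the emotion classifier.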


2017 ◽  
Vol 05 (03) ◽  
pp. 75-79 ◽  
Author(s):  
Adrian Qi-Xiang Ang ◽  
Yi Qi Yeong ◽  
Wee Wee
