EMOTION ASSESSMENT TOOL FOR HUMAN-MACHINE INTERFACES - Using EEG Data and Multimedia Stimuli Towards Emotion Classification

2020, Vol. 10 (10), pp. 672
Author(s): Choong Wen Yean, Wan Khairunizam Wan Ahmad, Wan Azani Mustafa, Murugappan Murugappan, Yuvaraj Rajamanickam, et al.

Emotion assessment in stroke patients provides physiotherapists with meaningful information for identifying the appropriate treatment method. This study aimed to classify the emotions of stroke patients by applying bispectrum features to electroencephalogram (EEG) signals. EEG signals from three groups of subjects, namely stroke patients with left brain damage (LBD), stroke patients with right brain damage (RBD), and normal controls (NC), were analyzed for six different emotional states. The estimated bispectra, mapped as contour plots, show different manifestations of nonlinearity in the EEG signals across emotional states. Bispectrum features were extracted from the alpha (8–13 Hz), beta (13–30 Hz), and gamma (30–49 Hz) bands. The k-nearest neighbor (KNN) and probabilistic neural network (PNN) classifiers were used to classify the six emotions in the LBD, RBD, and NC groups. The bispectrum features showed statistical significance for all three groups. The beta band was the best-performing EEG frequency sub-band for emotion classification, and the combination of the alpha to gamma bands provided the highest classification accuracy with both KNN and PNN classifiers. Sadness recorded the highest classification accuracy: 65.37% in the LBD, 71.48% in the RBD, and 75.56% in the NC group.
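The pipeline this abstract describes, bispectrum features fed to a KNN classifier, can be sketched as follows. This is a minimal illustration and not the authors' implementation: the direct FFT-based bispectrum estimate, the synthetic two-state signals, and the single mean-magnitude feature are all assumptions made for the demo.

```python
import numpy as np

def bispectrum(x, nfft=64):
    """Direct (FFT-based) bispectrum estimate of a 1-D signal.

    Averages B(f1, f2) = X(f1) * X(f2) * conj(X(f1 + f2)) over
    non-overlapping segments of the signal.
    """
    segs = len(x) // nfft
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for s in range(segs):
        X = np.fft.fft(x[s * nfft:(s + 1) * nfft])
        for f1 in range(nfft // 2):
            for f2 in range(nfft // 2):
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[(f1 + f2) % nfft])
    return np.abs(B) / max(segs, 1)

def knn_predict(train_X, train_y, test_X, k=3):
    """Plain k-nearest-neighbour vote on Euclidean distance."""
    preds = []
    for t in test_X:
        d = np.linalg.norm(train_X - t, axis=1)
        nearest = train_y[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Toy demo: two "emotional states" simulated as signals with and without
# an extra coupled component, summarised by mean bispectral magnitude.
rng = np.random.default_rng(0)
def make_signal(coupled):
    t = np.arange(512)
    s = np.sin(0.2 * t) + np.sin(0.31 * t)
    if coupled:  # extra component at the sum frequency -> stronger bispectrum
        s += 0.8 * np.sin(0.51 * t)
    return s + 0.1 * rng.standard_normal(512)

X = np.array([[bispectrum(make_signal(c)).mean()] for c in [0, 1] * 10])
y = np.array([0, 1] * 10)
pred = knn_predict(X[:16], y[:16], X[16:], k=3)
acc = (pred == y[16:]).mean()
```

In the real study the features would come from band-filtered EEG (alpha, beta, gamma) rather than a toy sinusoid mixture, and PNN would be evaluated alongside KNN.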


Author(s): Jingxia Chen, Dongmei Jiang, Yanning Zhang

To effectively reduce day-to-day fluctuations and inter-subject differences in electroencephalogram (EEG) signals and to improve the accuracy and stability of EEG emotion classification, a new EEG feature extraction method based on common spatial patterns (CSP) and wavelet packet decomposition (WPD) is proposed. For five days of emotion-related EEG data from 12 subjects, the CSP algorithm first projects the raw EEG data into an optimal subspace to extract discriminative features by maximizing the Kullback-Leibler (KL) divergence between the two categories of EEG data. The WPD algorithm then decomposes the EEG signals into time-frequency features. Finally, four state-of-the-art classifiers, including bagging trees, SVM, linear discriminant analysis, and Bayesian linear discriminant analysis, perform binary emotion classification. The experimental results show that with CSP spatial filtering, classification on the WPD features extracted with the bior3.3 wavelet base achieves the best accuracy, 0.862, which is 29.3% higher than the power spectral density (PSD) feature without CSP preprocessing, 23% higher than the PSD feature with CSP preprocessing, 1.9% higher than the WPD feature extracted with the bior3.3 wavelet base without CSP preprocessing, and 3.2% higher than the WPD feature extracted with the rbio6.8 wavelet base without CSP preprocessing. The proposed method effectively reduces the variance and non-stationarity of cross-day EEG signals, extracts emotion-related features, and improves the accuracy and stability of cross-day EEG emotion classification, which is valuable for developing robust emotional brain-computer interface applications.
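The core CSP step, whitening the composite covariance and eigendecomposing the whitened class covariance, can be sketched as below. This is a generic variance-ratio CSP on synthetic 4-channel trials, not the paper's code: the paper selects projections by KL divergence and follows CSP with wavelet packet decomposition, both omitted here.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common spatial patterns via whitening + eigendecomposition.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns spatial filters W of shape (n_filters, n_channels) that
    maximise the variance ratio between the two classes.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whitening transform for the composite covariance Ca + Cb.
    evals, evecs = np.linalg.eigh(Ca + Cb)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # Eigenvectors of the whitened class-A covariance; the extreme
    # eigenvalues give the most discriminative spatial directions.
    d, V = np.linalg.eigh(P @ Ca @ P.T)
    order = np.argsort(d)
    pick = np.concatenate([order[:n_filters // 2],
                           order[-(n_filters - n_filters // 2):]])
    return V[:, pick].T @ P

rng = np.random.default_rng(1)
# Toy 4-channel trials: class A has high variance on channel 0,
# class B on channel 3.
def make_trials(ch, n=20):
    X = rng.standard_normal((n, 4, 256))
    X[:, ch, :] *= 5.0
    return X

A, B = make_trials(0), make_trials(3)
W = csp_filters(A, B, n_filters=2)
# Log-variance features after spatial filtering, one value per filter.
feat = lambda t: np.log(np.var(W @ t, axis=1))
fa, fb = feat(A[0]), feat(B[0])
```

The log-variance features of the two filters swap roles between classes, which is what makes CSP output linearly separable for the downstream classifiers.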


2021, Vol. 15
Author(s): Dongwei Chen, Rui Miao, Zhaoyong Deng, Na Han, Chunjian Deng

In recent years, affective computing based on electroencephalogram (EEG) data has attracted increasing attention. As a classic EEG feature extraction model, Granger causality analysis has been widely used in emotion classification models, which construct a brain network by calculating the causal relationships between EEG sensors and selecting the key EEG features. Traditional EEG Granger causality analysis uses the L2 norm to extract features from the data, so the results are susceptible to EEG artifacts. Recently, several researchers have proposed Granger causality analysis models based on the least absolute shrinkage and selection operator (LASSO) and the L1/2 norm to solve this problem. However, conventional sparse Granger causality analysis models assume that the connections between sensors all have the same prior probability. This paper shows that if the correlation between the EEG data from each sensor is added to the Granger causality network as prior knowledge, the EEG feature selection and emotion classification abilities of the sparse Granger causality model can be enhanced. Based on this idea, we propose a new affective computing model, the sparse Granger causality analysis model based on sensor correlation (SC-SGA). SC-SGA integrates the correlation between sensors as prior knowledge into Granger causality analysis within the L1/2-norm framework for feature extraction, and uses L2-norm logistic regression as the emotion classification algorithm. We report results from experiments using two real EEG emotion datasets, which demonstrate that the emotion classification accuracy of the SC-SGA model is 2.46–21.81% higher than that of existing models.
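The underlying Granger causality idea, that sensor x "causes" sensor y when x's past improves the prediction of y beyond y's own past, can be shown with a plain pairwise least-squares sketch. This is not the paper's SC-SGA model: the L1/2 sparsity penalty and the sensor-correlation prior are omitted, and the two channels here are simulated.

```python
import numpy as np

def granger_gain(x, y, order=2):
    """Variance-reduction measure of Granger causality x -> y.

    Fits two least-squares autoregressive models for y: one on y's own
    past, one on the past of both y and x. Returns the log of the
    residual-variance ratio, positive when x's past helps predict y.
    """
    n = len(y)
    rows_own, rows_joint, target = [], [], []
    for t in range(order, n):
        rows_own.append(y[t - order:t])
        rows_joint.append(np.concatenate([y[t - order:t], x[t - order:t]]))
        target.append(y[t])
    target = np.array(target)

    def resid_var(rows):
        A = np.array(rows)
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        return np.var(target - A @ coef)

    return np.log(resid_var(rows_own) / resid_var(rows_joint))

rng = np.random.default_rng(2)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):              # y is driven by x's past
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

gain_xy = granger_gain(x, y)         # clearly positive: x -> y exists
gain_yx = granger_gain(y, x)         # near zero: no y -> x coupling
```

In a full brain-network model, such gains would be computed between every sensor pair and the resulting matrix thresholded or sparsified to select key connections.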


Sensors, 2021, Vol. 21 (5), pp. 1589
Author(s): Arijit Nandi, Fatos Xhafa, Laia Subirats, Santi Fort

In face-to-face and online learning, emotions and emotional intelligence play an essential role. Learners' emotions are crucial for e-learning systems because they can promote or restrain learning. Many researchers have investigated the impact of emotions on enhancing and maximizing e-learning outcomes, and several machine learning and deep learning approaches have been proposed to this end. All such approaches suit an offline mode, where the data for emotion classification are stored and can be accessed an unlimited number of times. However, offline approaches are inappropriate for real-time emotion classification, where the data arrive as a continuous stream, each sample can be seen by the model only once, and a real-time response to the emotional state is required. For this, we propose a real-time emotion classification system (RECS) based on Logistic Regression (LR) trained online using the Stochastic Gradient Descent (SGD) algorithm. The proposed RECS can classify emotions in real time by training the model in an online fashion on an EEG signal stream. To validate the performance of RECS, we used the DEAP dataset, the most widely used benchmark dataset for emotion classification. The results show that the proposed approach can effectively classify emotions in real time from an EEG data stream, achieving better accuracy and F1-score than other offline and online approaches. The developed real-time emotion classification system is also analyzed in an e-learning scenario.
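The online training loop the abstract describes, logistic regression updated per sample by SGD with each sample seen exactly once, can be sketched as follows. The stream here is synthetic rather than EEG, and the learning rate, feature count, and label rule are assumptions for the demo; the prequential predict-then-train evaluation is a standard way to score streaming models.

```python
import numpy as np

class OnlineLogisticRegression:
    """Binary logistic regression trained one sample at a time with SGD,
    matching the streaming setting: each sample is seen only once."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))

    def partial_fit(self, x, y):
        # One SGD step on the log-loss gradient for a single sample.
        err = self.predict_proba(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

rng = np.random.default_rng(3)
model = OnlineLogisticRegression(n_features=4, lr=0.5)
correct = 0
n_stream = 2000
for _ in range(n_stream):
    x = rng.standard_normal(4)
    y = int(x[0] + 0.5 * x[1] > 0)        # simulated ground-truth label rule
    # Predict first (prequential evaluation), then learn from the sample.
    correct += int((model.predict_proba(x) > 0.5) == y)
    model.partial_fit(x, y)
acc = correct / n_stream
```

Because each sample is discarded after one update, memory stays constant no matter how long the stream runs, which is what makes this feasible for real-time EEG classification.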


2021, Vol. 15
Author(s): Jin Zhang, Ziming Xu, Yueying Zhou, Pengpai Wang, Ping Fu, et al.

Emotional singing can affect vocal performance and audience engagement. Chinese universities use traditional training techniques for teaching theoretical and applied knowledge, and self-imagination is the predominant training method for emotional singing. Recently, virtual reality (VR) technologies have been applied in several fields for training purposes. In this empirical comparative study, a VR training task was implemented to elicit emotions from singers and to help them improve their emotional singing performance. The VR training method was compared against the traditional self-imagination method in a two-stage experiment assessing both emotion elicitation and emotional singing performance. In the first stage, electroencephalographic (EEG) data were collected from the subjects; in the second stage, self-rating reports and evaluations from third-party teachers were collected. The EEG data were analyzed using the max-relevance and min-redundancy (mRMR) algorithm for feature selection and a support vector machine (SVM) for emotion recognition. Based on the EEG emotion classification results and the subjective scales, VR elicits positive, neutral, and negative emotional states from singers better than self-imagination does, and this improved emotional activation in turn improves singing performance. VR hence appears to be an effective approach that may improve and complement available vocal music teaching methods.
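The feature-selection step can be illustrated with a greedy max-relevance min-redundancy sketch. Note the hedge: this stand-in scores features with absolute Pearson correlation rather than the mutual information used in the actual mRMR algorithm, and the dataset is synthetic, built so that one informative feature has a near-duplicate that the redundancy term should reject.

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy max-relevance min-redundancy feature selection.

    Relevance  = |corr(feature, label)|.
    Redundancy = mean |corr| with already-selected features.
    At each step, pick the unselected feature maximising
    relevance - redundancy.
    """
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(4)
n = 300
informative = rng.standard_normal(n)
X = np.column_stack([
    informative,                                   # 0: relevant
    informative + 0.01 * rng.standard_normal(n),   # 1: near-duplicate of 0
    rng.standard_normal(n),                        # 2: noise
    rng.standard_normal(n),                        # 3: noise
])
y = (informative > 0).astype(float)
picked = mrmr_select(X, y, k=2)
```

The redundancy penalty makes the second pick avoid the near-duplicate of the first even though that duplicate is individually highly relevant; the selected features would then be fed to the SVM.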


2021
Author(s): Nikhil Garg, Rohit Garg, Parrivesh NS, Apoorv Anand, V.A.S. Abhinav, et al.

This paper focuses on classifying emotions on the valence-arousal plane using various feature extraction, feature selection, and machine learning techniques. Emotion classification from EEG data with machine learning has been on the rise in recent years. We evaluate different feature extraction and feature selection techniques and propose an optimal set of features and electrodes for emotion recognition. Images from the OASIS image dataset were used to elicit valence and arousal, and the EEG data were recorded using the Emotiv Epoc X mobile EEG headset. The analysis is additionally carried out on the publicly available DEAP and DREAMER datasets. We propose a novel feature ranking technique and an incremental learning approach to analyze how performance depends on the number of participants. Leave-one-out cross-validation was carried out to identify subject bias in emotion elicitation patterns. The importance of different electrode locations was calculated, which could inform the design of a headset for emotion recognition. Our study achieved root mean square errors of less than 0.75 on DREAMER, 1.76 on DEAP, and 2.39 on our dataset.
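The subject-bias check described above, leave-one-out cross-validation over participants, can be sketched generically. This is not the paper's pipeline: the data are synthetic, and a simple nearest-centroid classifier stands in for the actual regression models; the point is the fold structure, where every trial of one subject is held out per fold so per-subject accuracies expose elicitation bias.

```python
import numpy as np

def leave_one_subject_out(X, y, subjects, fit_predict):
    """Leave-one-subject-out cross-validation.

    Holds out all trials of one subject per fold, trains on the rest,
    and returns per-subject test accuracy; large spread across folds
    indicates subject bias.
    """
    accs = {}
    for s in np.unique(subjects):
        test = subjects == s
        pred = fit_predict(X[~test], y[~test], X[test])
        accs[int(s)] = float((pred == y[test]).mean())
    return accs

def nearest_centroid(train_X, train_y, test_X):
    """Stand-in classifier: assign each test point to the nearest class mean."""
    cents = {c: train_X[train_y == c].mean(axis=0) for c in np.unique(train_y)}
    labels = np.array(sorted(cents))
    D = np.stack([np.linalg.norm(test_X - cents[c], axis=1) for c in labels])
    return labels[np.argmin(D, axis=0)]

rng = np.random.default_rng(5)
n_sub, trials = 5, 40
subjects = np.repeat(np.arange(n_sub), trials)
y = np.tile(np.array([0, 1]).repeat(trials // 2), n_sub)      # 2 classes per subject
X = rng.standard_normal((n_sub * trials, 3)) + y[:, None] * 1.5

accs = leave_one_subject_out(X, y, subjects, nearest_centroid)
mean_acc = float(np.mean(list(accs.values())))
```

With real EEG, a subject whose fold accuracy falls far below the others is a candidate for biased or idiosyncratic emotion elicitation patterns.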

