Recognition of Emotional States using Multiscale Information Analysis of High Frequency EEG Oscillations

Entropy ◽  
2019 ◽  
Vol 21 (6) ◽  
pp. 609 ◽  
Author(s):  
Gao ◽  
Cui ◽  
Wan ◽  
Gu

Exploring the manifestation of emotion in electroencephalogram (EEG) signals is helpful for improving the accuracy of emotion recognition. This paper introduces novel features based on multiscale information analysis (MIA) of EEG signals for distinguishing emotional states in four dimensions based on Russell's circumplex model. The algorithms were applied to extract features on the DEAP database; these included a multiscale EEG complexity index in the time domain, and ensemble empirical mode decomposition enhanced energy and fuzzy entropy in the frequency domain. A support vector machine and cross-validation were applied to assess classification accuracy. The classification performance of the MIA methods (accuracy = 62.01%, precision = 62.03%, recall/sensitivity = 60.51%, and specificity = 82.80%) was much higher than that of classical methods (accuracy = 43.98%, precision = 43.81%, recall/sensitivity = 41.86%, and specificity = 70.50%), which extracted comparable features (energy based on a discrete wavelet transform, fractal dimension, and sample entropy). In this study, we found that emotion recognition is more associated with high-frequency oscillations (51–100 Hz) of EEG signals than with low-frequency oscillations (0.3–49 Hz), and that the significance of the frontal and temporal regions is higher than that of other regions. Such information has predictive power and may provide more insights into analyzing the multiscale information of high-frequency oscillations in EEG signals.
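The multiscale analysis referred to above starts by coarse-graining the signal at successive scales before an entropy or complexity measure is computed at each scale. As a rough illustration (not the authors' implementation), the coarse-graining step can be sketched in a few lines:

```python
def coarse_grain(signal, scale):
    """Coarse-grain a time series by averaging non-overlapping windows.

    This is the first step of multiscale entropy/complexity analysis:
    at scale s, each output point is the mean of s consecutive samples.
    """
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

# At scale 1 the series is unchanged; larger scales smooth and shorten it.
x = [1.0, 3.0, 2.0, 4.0, 6.0, 8.0]
print(coarse_grain(x, 2))  # [2.0, 3.0, 7.0]
print(coarse_grain(x, 3))  # [2.0, 6.0]
```

An entropy measure evaluated on each coarse-grained series then yields one feature per scale.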

Author(s):  
Muhammad Ibrahim Munir ◽  
Sajid Hussain ◽  
Ali Al-Alili ◽  
Reem Al Ameri ◽  
Ehab El-Sadaany

Abstract One of the core features of the smart grid deemed essential for smooth grid operation is the detection and diagnosis of system failures. For a utility transmission grid system, these failures can manifest in the form of short circuit faults and open circuit faults. With the advent of the digital age, the traditional grid has undergone a massive transition to digital equipment and modern sensors capable of generating large volumes of data. The challenge is to preprocess these data such that they can be utilized for the detection of transients and grid failures. This paper presents the incorporation of artificial intelligence techniques such as the Support Vector Machine (SVM) and K-Nearest Neighbors (KNN) to detect and comprehensively classify the most common fault transients within a reasonable range of accuracy. For gauging the effectiveness of the proposed scheme, a thorough evaluation study is conducted on a modified IEEE 39-bus system. Bus voltage and line current measurements are taken for a range of fault scenarios which result in high-frequency transient signals. These signals are analyzed using the continuous wavelet transform (CWT). The measured signals are then preprocessed using the Discrete Wavelet Transform (DWT) employing the Daubechies four (Db4) mother wavelet in order to decompose the high-frequency components of the faulty signals. The DWT yields a range of high- and low-frequency detail and approximate coefficients, from which a range of statistical features are extracted and used as inputs for training and testing the classification algorithms. The results demonstrate that the trained models can be successfully employed to detect and classify faults on the transmission system with acceptable accuracy.
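As a hedged illustration of the preprocessing described above (not the paper's code), one level of a Daubechies wavelet decomposition and the kind of statistical features fed to the classifiers can be sketched as follows. The four-tap Daubechies filter is used here (naming conventions vary; this is the classic D4 filter), and the circular boundary handling is an assumption of this sketch:

```python
import math

# Four-tap Daubechies analysis filters; the detail (high-pass) filter is
# the quadrature mirror of the approximation (low-pass) filter.
_S3 = math.sqrt(3)
_LO = [(1 + _S3), (3 + _S3), (3 - _S3), (1 - _S3)]
_LO = [c / (4 * math.sqrt(2)) for c in _LO]
_HI = [_LO[3], -_LO[2], _LO[1], -_LO[0]]

def dwt_d4(signal):
    """One DWT level with circular boundary handling.

    Returns (approximation, detail) coefficient lists, each half length.
    """
    n = len(signal)
    approx, detail = [], []
    for i in range(0, n, 2):
        approx.append(sum(_LO[k] * signal[(i + k) % n] for k in range(4)))
        detail.append(sum(_HI[k] * signal[(i + k) % n] for k in range(4)))
    return approx, detail

def stat_features(coeffs):
    """Mean, standard deviation, and energy of one coefficient band."""
    n = len(coeffs)
    mean = sum(coeffs) / n
    std = math.sqrt(sum((c - mean) ** 2 for c in coeffs) / n)
    energy = sum(c * c for c in coeffs)
    return mean, std, energy

a, d = dwt_d4([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
# A smooth ramp concentrates its energy in the approximation band.
print(stat_features(a)[2] > stat_features(d)[2])  # True
```

In the paper's pipeline, such per-band statistics would be computed for each decomposition level and stacked into the classifier's input vector.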


Author(s):  
Benjamin Aribisala ◽  
Obaro Olori ◽  
Patrick Owate

Introduction: Emotion plays a key role in our daily life and work, especially in decision making, as people's moods can influence their mode of communication, behaviour or productivity. Emotion recognition has attracted considerable research, and medical imaging technology offers tools for emotion classification. Aims: The aim of this work is to develop a machine learning technique for recognizing emotion based on electroencephalogram (EEG) data. Materials and Methods: Experimentation was based on the publicly available Database for Emotion Analysis using Physiological Signals (DEAP). The data comprise EEG signals acquired from thirty-two adults while watching forty different music video clips of one minute each. Participants rated each video in terms of four emotional states, namely arousal, valence, like/dislike and dominance. We extracted features from the dataset using Discrete Wavelet Transforms, namely wavelet energy, wavelet entropy, and standard deviation. We then classified the extracted features into four emotional states, namely High Valence/High Arousal, High Valence/Low Arousal, Low Valence/High Arousal, and Low Valence/Low Arousal, using Ensemble Bagged Trees. Results: Ensemble Bagged Trees gave sensitivity, specificity, and accuracy of 97.54%, 99.21%, and 97.80% respectively. Support Vector Machine and Ensemble Boosted Trees gave similar results. Conclusion: Our results showed that machine learning classification of emotion using EEG data is very promising. It can help in the treatment of patients, especially those with expression problems such as Amyotrophic Lateral Sclerosis, a neuromuscular disease; knowing the real emotional state of patients will help doctors to provide appropriate medical care. Keywords: Electroencephalogram, Emotion Recognition, Ensemble Classification, Ensemble Bagged Trees, Machine Learning
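The wavelet energy and wavelet entropy features named in the Methods can be computed from DWT subband coefficients as below; this is a generic sketch of the standard definitions, not the authors' implementation:

```python
import math

def wavelet_energy_entropy(subbands):
    """Given DWT subband coefficient lists, return the per-band energies
    and the wavelet entropy, i.e. the Shannon entropy (in bits) of the
    relative energy distribution across subbands."""
    energies = [sum(c * c for c in band) for band in subbands]
    total = sum(energies)
    probs = [e / total for e in energies]
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return energies, entropy

# Two equal-energy bands give the maximal entropy of 1 bit.
bands = [[1.0, 1.0], [1.0, -1.0]]
energies, ent = wavelet_energy_entropy(bands)
print(energies)  # [2.0, 2.0]
print(ent)       # 1.0
```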


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Ning Zhuang ◽  
Ying Zeng ◽  
Li Tong ◽  
Chi Zhang ◽  
Hanming Zhang ◽  
...  

This paper introduces a method for feature extraction and emotion recognition based on empirical mode decomposition (EMD). Using EMD, EEG signals are decomposed into Intrinsic Mode Functions (IMFs) automatically. Multidimensional information of the IMFs is utilized as features: the first difference of the time series, the first difference of the phase, and the normalized energy. The performance of the proposed method is verified on a publicly available emotional database. The results show that the three features are effective for emotion recognition. The role of each IMF is investigated, and we find that the high-frequency component IMF1 has a significant effect on the detection of different emotional states. The informative electrodes based on the EMD strategy are analyzed. In addition, the classification accuracy of the proposed method is compared with several classical techniques, including fractal dimension (FD), sample entropy, differential entropy, and the discrete wavelet transform (DWT). Experimental results on the DEAP dataset demonstrate that our method can improve emotion recognition performance.
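Two of the three per-IMF features named above can be computed directly from an IMF; this sketch is illustrative only, and the phase-based feature is omitted because it additionally requires the Hilbert transform:

```python
def imf_features(imf, total_energy):
    """Two of the per-IMF features described above:
    - the mean absolute first difference of the time series, and
    - the IMF's energy normalized by the whole signal's energy.
    (The first difference of the instantaneous phase would need the
    analytic signal via the Hilbert transform, omitted here.)"""
    n = len(imf)
    first_diff = sum(abs(imf[i + 1] - imf[i]) for i in range(n - 1)) / (n - 1)
    norm_energy = sum(x * x for x in imf) / total_energy
    return first_diff, norm_energy

imf = [0.0, 1.0, 0.0, -1.0, 0.0]           # a toy oscillatory component
fd, ne = imf_features(imf, total_energy=4.0)
print(fd)  # 1.0
print(ne)  # 0.5
```

Fast-oscillating components such as IMF1 yield a large first-difference feature, which is consistent with the paper's finding that IMF1 is the most discriminative.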


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5218 ◽  
Author(s):  
Muhammad Adeel Asghar ◽  
Muhammad Jamil Khan ◽  
Fawad ◽  
Yasar Amin ◽  
Muhammad Rizwan ◽  
...  

Much attention has been paid to the recognition of human emotions with the help of electroencephalogram (EEG) signals based on machine learning technology. Recognizing emotions is a challenging task due to the non-linear property of the EEG signal. This paper presents an advanced signal processing method using a deep neural network (DNN) for emotion recognition based on EEG signals. The spectral and temporal components of the raw EEG signal are first retained in a 2D spectrogram before the extraction of features. The pre-trained AlexNet model is used to extract raw features from the 2D spectrogram for each channel. To reduce the feature dimensionality, a spatial- and temporal-based bag of deep features (BoDF) model is proposed. A vocabulary consisting of 10 cluster centers for each class is calculated using the k-means clustering algorithm. Lastly, the emotion of each subject is represented using the histogram of the vocabulary set collected from the raw features of a single channel. Features extracted from the proposed BoDF model have considerably smaller dimensions. The proposed model achieves better classification accuracy compared to recently reported work when validated on the SJTU SEED and DEAP data sets. For optimal classification performance, we use a support vector machine (SVM) and k-nearest neighbors (k-NN) to classify the extracted features for the different emotional states of the two data sets. The BoDF model achieves 93.8% accuracy on the SEED data set and 77.4% accuracy on the DEAP data set, which is more accurate than other state-of-the-art methods of human emotion recognition.
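The vocabulary-and-histogram encoding at the heart of BoDF can be sketched in miniature with scalar features; the cluster count and data below are made up for illustration, and the paper's actual vocabulary is built over high-dimensional AlexNet features:

```python
def kmeans_1d(values, k, iters=20):
    """Plain Lloyd's k-means on scalar features (illustrative only)."""
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[j].append(v)
        new_centers = []
        for j, cl in enumerate(clusters):
            new_centers.append(sum(cl) / len(cl) if cl else centers[j])
        centers = new_centers
    return centers

def bodf_histogram(features, centers):
    """Encode a feature set as a normalized histogram of nearest
    vocabulary centers -- the bag-of-features representation."""
    hist = [0] * len(centers)
    for v in features:
        j = min(range(len(centers)), key=lambda c: abs(v - centers[c]))
        hist[j] += 1
    total = sum(hist)
    return [h / total for h in hist]

feats = [0.0, 0.1, 0.2, 5.0, 5.1]        # toy per-channel features
vocab = kmeans_1d(feats, k=2)            # 2-word vocabulary for the sketch
print([round(c, 2) for c in vocab])      # [0.1, 5.05]
print(bodf_histogram(feats, vocab))      # [0.6, 0.4]
```

The fixed-length histogram is what makes the representation "considerably smaller" than the raw deep features, regardless of how many features a channel produces.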


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Ayan Seal ◽  
Puthi Prem Nivesh Reddy ◽  
Pingali Chaithanya ◽  
Arramada Meghana ◽  
Kamireddy Jahnavi ◽  
...  

Human emotion recognition has been a major field of research in the last decades owing to its noteworthy academic and industrial applications. However, most of the state-of-the-art methods identify emotions by analyzing facial images; emotion recognition using electroencephalogram (EEG) signals has received less attention, even though EEG signals have the advantage of capturing real emotion. Moreover, very few EEG signal databases are publicly available for affective computing. In this work, we present a database consisting of EEG signals of 44 volunteers, twenty-three of whom are female. A 32-channel CLARITY EEG Traveler sensor is used to record four emotional states, namely happy, fear, sad, and neutral, by showing subjects 12 videos, with 3 videos devoted to each emotion. Participants are mapped to the emotion that they felt after watching each video. The recorded EEG signals are then used to classify the four types of emotions based on the discrete wavelet transform and an extreme learning machine (ELM), in order to report an initial benchmark classification performance. The ELM algorithm is used for channel selection followed by subband selection. The proposed method performs best when features are captured from the gamma subband of the FP1-F7 channel, with 94.72% accuracy. The presented database will be made available to researchers for affective recognition applications.
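An extreme learning machine fixes a random hidden layer and fits only the output weights by least squares. The following is a deliberately tiny sketch (two hidden neurons, the 2x2 normal equations solved in closed form), not the paper's model; all sizes and data are made up for illustration:

```python
import math, random

def elm_train(X, y, seed=0):
    """Minimal two-hidden-neuron extreme learning machine: the hidden
    layer is random and frozen; only the output weights beta are learned
    by least squares, here via the 2x2 normal equations (Cramer's rule)."""
    rng = random.Random(seed)
    W = [[rng.uniform(-1, 1) for _ in X[0]] for _ in range(2)]
    b = [rng.uniform(-1, 1) for _ in range(2)]

    def hidden(x):  # sigmoid activations of the two random neurons
        return [1.0 / (1.0 + math.exp(-(sum(w * v for w, v in zip(Wj, x)) + bj)))
                for Wj, bj in zip(W, b)]

    H = [hidden(x) for x in X]
    a11 = sum(h[0] ** 2 for h in H)
    a12 = sum(h[0] * h[1] for h in H)
    a22 = sum(h[1] ** 2 for h in H)
    r1 = sum(h[0] * t for h, t in zip(H, y))
    r2 = sum(h[1] * t for h, t in zip(H, y))
    det = a11 * a22 - a12 * a12
    beta = [(r1 * a22 - r2 * a12) / det, (a11 * r2 - a12 * r1) / det]
    return lambda x: sum(bk * hk for bk, hk in zip(beta, hidden(x)))

X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
y = [0.0, 1.0, 1.0, 1.0]                 # a toy OR-style target
predict = elm_train(X, y)
err = sum((predict(x) - t) ** 2 for x, t in zip(X, y))
# Least squares guarantees the fit is no worse than predicting all zeros.
print(err <= sum(t ** 2 for t in y))  # True
```

Because training reduces to one linear solve, the ELM can be refit cheaply per channel and per subband, which is what makes it practical as the selection criterion described above.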


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5135
Author(s):  
Ngoc-Dau Mai ◽  
Boon-Giin Lee ◽  
Wan-Young Chung

In this research, we develop an affective computing method based on machine learning for emotion recognition using a wireless protocol and a wearable electroencephalography (EEG) custom-designed device. The system collects EEG signals using an eight-electrode placement on the scalp: two of these electrodes were placed in the frontal lobe, and the other six electrodes were placed in the temporal lobe. We performed experiments on eight subjects while they watched emotive videos. Six entropy measures were employed for extracting suitable features from the EEG signals. Next, we evaluated our proposed models using three popular classifiers: a support vector machine (SVM), multi-layer perceptron (MLP), and one-dimensional convolutional neural network (1D-CNN) for emotion classification; both subject-dependent and subject-independent strategies were used. Our experiment results showed that the highest average accuracies achieved in the subject-dependent and subject-independent cases were 85.81% and 78.52%, respectively; these accuracies were achieved using a combination of the sample entropy measure and 1D-CNN. Moreover, through electrode selection, our study identifies the T8 position (above the right ear) in the temporal lobe as the most critical of the proposed measurement positions for emotion classification. Our results prove the feasibility and efficiency of our proposed EEG-based affective computing method for emotion recognition in real-world applications.
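Sample entropy, the measure that produced the best results above, can be implemented directly from its definition; this is a textbook sketch with illustrative parameters, not the authors' code:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts pairs of length-m
    templates matching within tolerance r (Chebyshev distance), and A
    counts the same for templates of length m + 1. Lower values mean a
    more regular, self-similar signal."""
    n = len(x)

    def count_matches(length):
        templates = [x[i:i + length] for i in range(n - length)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A perfectly regular alternating series is highly predictable:
print(round(sample_entropy([0, 1] * 10), 4))  # 0.1178
```

This brute-force version is O(n^2) and fine for short epochs; practical EEG pipelines typically use an optimized implementation.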


2014 ◽  
Vol 14 (2) ◽  
pp. 102-108 ◽  
Author(s):  
Yong Yang ◽  
Shuying Huang ◽  
Junfeng Gao ◽  
Zhongsheng Qian

Abstract In this paper, by considering the main objective of multi-focus image fusion and the physical meaning of wavelet coefficients, a discrete wavelet transform (DWT) based fusion technique with a novel coefficient selection algorithm is presented. After the source images are decomposed by DWT, two different window-based fusion rules are separately employed to combine the low frequency and high frequency coefficients. In the method, the coefficients in the low frequency domain with the maximum sharpness focus measure are selected as coefficients of the fused image, and a maximum neighboring energy based fusion scheme is proposed to select high frequency sub-band coefficients. In order to guarantee the homogeneity of the resultant fused image, a consistency verification procedure is applied to the combined coefficients. The performance assessment of the proposed method was conducted on both synthetic and real multi-focus images. Experimental results demonstrate that the proposed method can achieve better visual quality and objective evaluation indexes than several existing fusion methods, thus being an effective multi-focus image fusion method.
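The maximum-neighboring-energy rule for the high-frequency sub-bands can be illustrated on one-dimensional coefficient arrays; the paper operates on 2D windows of image coefficients, so this 1D version is a simplification:

```python
def fuse_high_freq(coeffs_a, coeffs_b, window=1):
    """Fuse two high-frequency coefficient arrays by picking, at each
    position, the coefficient from the source whose local neighborhood
    has the larger energy (a sharper, more in-focus region)."""
    n = len(coeffs_a)

    def energy(c, i):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        return sum(v * v for v in c[lo:hi])

    return [a if energy(coeffs_a, i) >= energy(coeffs_b, i) else b
            for i, (a, b) in enumerate(zip(coeffs_a, coeffs_b))]

a = [0.0, 3.0, 0.0, 0.0]   # source A: strong detail near position 1
b = [1.0, 0.0, 0.0, 2.0]   # source B: strong detail near position 3
print(fuse_high_freq(a, b))  # [0.0, 3.0, 0.0, 2.0]
```

The consistency verification step described in the abstract would then re-examine each selected coefficient against its neighbors' source labels to remove isolated selections.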


Author(s):  
Yunxuan Li ◽  
Jian Lu ◽  
Lin Zhang ◽  
Yi Zhao

The Didi Dache app is China's biggest taxi-booking mobile app and is popular in cities. Unsurprisingly, short-term traffic demand forecasting is critical to enabling Didi Dache to maximize use by drivers and ensure that riders can always find a car whenever and wherever they may need a ride. In this paper, a short-term traffic demand forecasting model, Wave SVM, is proposed. It combines the complementary advantages of Daubechies-5 wavelet analysis and least squares support vector machine (LS-SVM) models while overcoming their respective shortcomings. The method includes four stages: in the first stage, the original data are preprocessed; in the second stage, these data are decomposed into high-frequency and low-frequency series by wavelet; in the third stage, the prediction stage, the LS-SVM method is applied to train and predict the corresponding high-frequency and low-frequency series; in the last stage, the diverse predicted sequences are reconstructed by wavelet. Real taxi-hailing order data are applied to evaluate the model's performance and practicality, and the results are encouraging. Compared with state-of-the-art models, the Wave SVM model not only has the best prediction performance but also appears to be the most capable of capturing the nonstationary characteristics of short-term traffic dynamic systems.
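The decompose-predict-reconstruct skeleton of Wave SVM can be illustrated with a one-level Haar transform standing in for the Daubechies-5 analysis, and an identity map standing in for the per-band LS-SVM predictors; with identity predictors the round trip must recover the demand series exactly:

```python
import math

SQRT2 = math.sqrt(2)

def haar_decompose(x):
    """One-level Haar split into low- and high-frequency series
    (assumes an even-length input)."""
    lo = [(x[i] + x[i + 1]) / SQRT2 for i in range(0, len(x), 2)]
    hi = [(x[i] - x[i + 1]) / SQRT2 for i in range(0, len(x), 2)]
    return lo, hi

def haar_reconstruct(lo, hi):
    """Invert the split: interleave (l + h)/sqrt(2) and (l - h)/sqrt(2)."""
    out = []
    for l, h in zip(lo, hi):
        out.append((l + h) / SQRT2)
        out.append((l - h) / SQRT2)
    return out

demand = [10.0, 12.0, 9.0, 14.0]          # toy order-count series
lo, hi = haar_decompose(demand)
# In Wave SVM, an LS-SVM would forecast each band here; we pass them through.
recon = haar_reconstruct(lo, hi)
print([round(v, 6) for v in recon])  # [10.0, 12.0, 9.0, 14.0]
```

Forecasting the smooth and oscillatory bands separately is what lets the scheme cope with the nonstationarity noted in the abstract.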


2021 ◽  
Vol 335 ◽  
pp. 04001
Author(s):  
Didar Dadebayev ◽  
Goh Wei Wei ◽  
Tan Ee Xion

Emotion recognition, as a branch of affective computing, has attracted great attention in the last decades as it can enable more natural brain-computer interface systems. Electroencephalography (EEG) has proven to be an effective modality for emotion recognition, with which user affective states can be tracked and recorded, especially for primitive emotional events such as arousal and valence. Although brain signals have been shown to correlate with emotional states, the effectiveness of proposed models is somewhat limited; the challenge is improving accuracy, and appropriate extraction of valuable features might be a key to success. This study proposes a framework incorporating fractal dimension features and a recursive feature elimination approach to enhance the accuracy of EEG-based emotion recognition. Fractal dimension and spectrum-based features will be extracted and used for more accurate emotional state recognition. Recursive Feature Elimination will be used as the feature selection method, whereas the classification of emotions will be performed by the Support Vector Machine (SVM) algorithm. The proposed framework will be tested on a widely used public database, and the results are expected to demonstrate higher accuracy and robustness compared to other studies. The contributions of this study are primarily the improvement of EEG-based emotion classification accuracy. There is a potential restriction on how generic the results can be, as different EEG datasets might yield different results for the same framework. Therefore, experimenting with different EEG datasets and testing alternative feature selection schemes would be very interesting future work.
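One widely used fractal dimension feature for EEG waveforms is the Katz fractal dimension; the study does not specify which estimator it will use, so the following is only a plausible example of such a feature:

```python
import math

def katz_fd(x):
    """Katz fractal dimension of a waveform (one common formulation):
    FD = log10(n) / (log10(n) + log10(d / L)), where L is the total
    curve length (sum of successive amplitude differences), d the
    maximum distance from the first sample, and n the number of steps.
    A straight line gives FD = 1; more convoluted signals give more."""
    n = len(x) - 1
    L = sum(abs(x[i + 1] - x[i]) for i in range(n))
    d = max(abs(xi - x[0]) for xi in x[1:])
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

print(katz_fd([0.0, 1.0, 2.0, 3.0, 4.0]))       # 1.0 (straight ramp)
print(round(katz_fd([0.0, 2.0, 1.0, 3.0, 2.0]), 6))  # 2.0 (zigzag)
```

In the proposed framework, such per-channel fractal values would join the spectrum-based features before Recursive Feature Elimination prunes the least informative ones.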


2016 ◽  
Vol 7 (1) ◽  
pp. 58-68 ◽  
Author(s):  
Imen Trabelsi ◽  
Med Salim Bouhlel

Automatic Speech Emotion Recognition (SER) is a current research topic in the field of Human-Computer Interaction (HCI) with a wide range of applications. The purpose of a speech emotion recognition system is to automatically classify a speaker's utterances into different emotional states such as disgust, boredom, sadness, neutral, and happiness. The speech samples in this paper are from the Berlin emotional database. Mel-frequency cepstrum coefficients (MFCC), linear prediction coefficients (LPC), linear prediction cepstrum coefficients (LPCC), Perceptual Linear Prediction (PLP) and Relative Spectral Perceptual Linear Prediction (RASTA-PLP) features are used to characterize the emotional utterances using a combination of Gaussian mixture models (GMM) and Support Vector Machines (SVM) based on the Kullback-Leibler divergence kernel. In this study, the effect of feature type and its dimension are comparatively investigated. The best results are obtained with 12-coefficient MFCC; utilizing the proposed features, a recognition rate of 84% is achieved, which is close to the performance of humans on this database.
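MFCC extraction rests on the mel scale, which warps frequency to approximate human pitch perception; the standard mapping (and its inverse, used when placing triangular filterbank centers) is shown below. This is background illustration, not the paper's feature pipeline:

```python
import math

def hz_to_mel(f):
    """Standard mel-scale mapping used when building MFCC filterbanks;
    it is roughly linear below 1 kHz and logarithmic above."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse mapping, used to convert evenly spaced mel points back
    to Hz filter center frequencies."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# The scale is anchored so that 1000 Hz is close to 1000 mel, and the
# two mappings are exact inverses of each other.
print(round(hz_to_mel(1000.0), 1))
print(round(mel_to_hz(hz_to_mel(440.0)), 3))  # 440.0
```

Filterbank centers placed at equal mel intervals end up densely packed at low frequencies and sparse at high ones, which is why low-order MFCCs capture most of the perceptually relevant spectral envelope.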

