Recognition of emotional states using EEG signals based on time-frequency analysis and SVM classifier

Author(s):  
Fabian Parsia George ◽  
Istiaque Mannafee Shaikat ◽  
Prommy Sultana Ferdawoos Hossain ◽  
Mohammad Zavid Parvez ◽  
Jia Uddin

Emotion recognition is a highly significant and rapidly developing field of research. Its applications have left an exceptional mark in various fields, including education and research. Traditional approaches used facial expressions or voice intonation to detect emotions; however, facial gestures and spoken language can lead to biased and ambiguous results. For this reason, researchers have turned to the electroencephalogram (EEG), a well-defined technique for emotion recognition. Some approaches used standard, pre-defined signal processing methods, and others worked with either few channels or few subjects to record EEG signals. This paper proposes an emotion detection method based on time-frequency domain statistical features. Box-and-whisker plots are used to select the optimal features, which are then fed to an SVM classifier for training and testing on the DEAP dataset, in which 32 participants of different genders and age groups are considered. The experimental results show that the proposed method achieves 92.36% accuracy on the tested dataset and outperforms state-of-the-art methods.
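A minimal sketch of the kind of pipeline this abstract describes, not the authors' exact method: statistical features of a time-frequency representation of one EEG channel fed to an SVM. The use of the STFT, the particular statistics, and the scikit-learn setup are assumptions for illustration.

```python
# Hedged sketch: time-frequency statistical features + SVM (assumed pipeline).
import numpy as np
from scipy.signal import stft
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def tf_statistics(sig, fs=128):
    """Statistics of the STFT magnitude of a single-channel EEG segment."""
    _, _, Z = stft(sig, fs=fs, nperseg=fs)        # time-frequency representation
    mag = np.abs(Z).ravel()
    return [mag.mean(), mag.var(), skew(mag), kurtosis(mag)]

# Hypothetical data: trials x samples EEG segments and binary emotion labels.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 8064))           # 63 s at 128 Hz, DEAP-like trial length
y = rng.integers(0, 2, size=40)

X = np.array([tf_statistics(trial) for trial in X_raw])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))
```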

Fractals ◽  
2018 ◽  
Vol 26 (04) ◽  
pp. 1850051 ◽  
Author(s):  
HAMIDREZA NAMAZI ◽  
SAJAD JAFARI

It is known that aging affects neuroplasticity. On the other hand, neuroplasticity can be studied by analyzing the electroencephalogram (EEG) signal. An important challenge in brain research is to study the variations of neuroplasticity during aging for patients suffering from epilepsy. This study investigates the variations in the complexity of the EEG signal during aging for patients with epilepsy. For this purpose, we employed fractal dimension as an indicator of process complexity. We classified the subjects into different age groups and computed the fractal dimension of their EEG signals. Our investigations showed that as patients get older, their EEG signals become more complex. The method of investigation used in this study can be further employed to study the variations of the EEG signal during aging in the case of other brain disorders.
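The abstract uses fractal dimension as a complexity index but does not name the estimator; the sketch below shows Higuchi's algorithm as one common choice for EEG, with the parameter k_max chosen arbitrarily as an assumption.

```python
# Illustrative sketch only: Higuchi fractal dimension as one possible estimator.
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the Higuchi fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    L = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            # normalized curve length for this offset m and scale k
            lm = np.sum(np.abs(np.diff(x[idx]))) * (N - 1) / ((len(idx) - 1) * k * k)
            lengths.append(lm)
        L.append(np.mean(lengths))
    # slope of log(L) vs log(1/k) gives the fractal dimension
    coeffs = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(L), 1)
    return coeffs[0]

# Example: white noise should give a higher FD than a smooth sine wave.
t = np.linspace(0, 1, 1024)
print(higuchi_fd(np.sin(2 * np.pi * 10 * t)), higuchi_fd(np.random.randn(1024)))
```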


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Ahmet Mert ◽  
Hasan Huseyin Celik

The feasibility of time-frequency (TF) ridge estimation for emotion recognition from multi-channel electroencephalogram (EEG) signals is investigated. Multivariate ridge estimation is examined as a way to extract informative components at low computational cost without decreasing the accuracy of valence/arousal recognition. The advanced TF representation technique called the multivariate synchrosqueezing transform (MSST) is used to obtain well-localized components of multi-channel EEG signals. Maximum-energy components in the 2D TF distribution are determined using TF-ridge estimation to extract instantaneous frequency and instantaneous amplitude. The statistical values of the estimated ridges are used as a feature vector for machine learning algorithms. Thus, component information in multi-channel EEG signals can be captured and compressed into a low-dimensional space for emotion recognition. The mean and variance of the five maximum-energy ridges in the MSST-based TF distribution are adopted as the feature vector: properties of the five TF ridges in the frequency and energy planes (e.g., mean frequency, frequency deviation, mean energy, and energy deviation over time) are computed to obtain a 20-dimensional feature space. The proposed method is benchmarked on the DEAP emotional EEG recordings, and recognition rates of up to 71.55% and 70.02% are obtained for high/low arousal and high/low valence, respectively.
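A simplified, single-channel illustration of the ridge idea, not the MSST pipeline of the paper: the maximum-energy ridge of an ordinary spectrogram is traced and summarized with mean/deviation statistics of frequency and energy. The spectrogram parameters and the toy chirp signal are assumptions.

```python
# Simplified ridge-feature sketch (single channel, ordinary spectrogram, not MSST).
import numpy as np
from scipy.signal import spectrogram

def ridge_features(sig, fs=128):
    f, t, Sxx = spectrogram(sig, fs=fs, nperseg=fs)
    ridge_bins = np.argmax(Sxx, axis=0)                       # max-energy bin per time frame
    inst_freq = f[ridge_bins]                                 # instantaneous frequency along the ridge
    inst_energy = Sxx[ridge_bins, np.arange(Sxx.shape[1])]    # energy along the ridge
    return np.array([inst_freq.mean(), inst_freq.std(),
                     inst_energy.mean(), inst_energy.std()])

# Toy example: a chirp-like signal whose ridge tracks the rising frequency.
fs = 128
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * (5 + 3 * t) * t)
print(ridge_features(sig, fs))
```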


Author(s):  
Haitham Issa ◽  
Sali Issa ◽  
Wahab Shah

This paper presents a new gender and age classification system based on electroencephalography (EEG) brain signals. First, the Continuous Wavelet Transform (CWT) technique is used to obtain the time-frequency information of only one EEG electrode for eight distinct emotional states instead of the ordinary neutral or relaxed states. Then, sequential steps are implemented to extract the improved grayscale image feature. For system evaluation, a three-fold cross-validation strategy is applied to construct four different classifiers. The experimental tests show that the proposed extracted feature with a Convolutional Neural Network (CNN) classifier improves the performance of both gender and age classification, achieving average accuracies of 96.3% and 89% for gender and age classification, respectively. Moreover, the ability to predict human gender and age during different emotional states is demonstrated in practice.
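A rough sketch of the first stage only: converting a single-electrode EEG segment into a grayscale time-frequency image via the CWT. The Morlet wavelet, the scale range, the 8-bit normalization, and the use of PyWavelets are assumptions; the paper's "improved grayscale image feature" steps and the CNN are not reproduced here.

```python
# Hedged sketch: CWT scalogram of one EEG channel rendered as a grayscale image.
import numpy as np
import pywt

def cwt_grayscale_image(sig, scales=np.arange(1, 65), wavelet="morl"):
    coef, _ = pywt.cwt(sig, scales, wavelet)        # scales x time coefficient matrix
    mag = np.abs(coef)
    img = 255 * (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)
    return img.astype(np.uint8)                     # grayscale image, e.g., for a CNN input

sig = np.random.randn(512)                          # hypothetical one-channel EEG segment
print(cwt_grayscale_image(sig).shape)               # (64, 512)
```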


Entropy ◽  
2019 ◽  
Vol 21 (6) ◽  
pp. 609 ◽  
Author(s):  
Gao ◽  
Cui ◽  
Wan ◽  
Gu

Exploring how emotion manifests in electroencephalogram (EEG) signals is helpful for improving the accuracy of emotion recognition. This paper introduces novel features based on multiscale information analysis (MIA) of EEG signals for distinguishing emotional states in four dimensions based on Russell's circumplex model. The algorithms were applied to extract features from the DEAP database; the features included a multiscale EEG complexity index in the time domain, and ensemble empirical mode decomposition (EEMD)-enhanced energy and fuzzy entropy in the frequency domain. Support vector machines and cross-validation were applied to assess classification accuracy. The classification performance of the MIA methods (accuracy = 62.01%, precision = 62.03%, recall/sensitivity = 60.51%, and specificity = 82.80%) was much higher than that of classical methods (accuracy = 43.98%, precision = 43.81%, recall/sensitivity = 41.86%, and specificity = 70.50%), which extracted comparable features (energy based on a discrete wavelet transform, fractal dimension, and sample entropy). We found that emotion recognition is more strongly associated with the high-frequency oscillations (51–100 Hz) of EEG signals than with the low-frequency oscillations (0.3–49 Hz), and that the frontal and temporal regions are more significant than other regions. Such information has predictive power and may provide more insight into analyzing the multiscale information of high-frequency oscillations in EEG signals.
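A minimal sketch of one ingredient of multiscale analysis, assumed here to be coarse-graining followed by sample entropy at each scale; the paper's exact multiscale complexity index and the EEMD-based frequency-domain features are not reproduced. The embedding dimension, tolerance, and maximum scale are assumptions.

```python
# Hedged sketch: multiscale (coarse-grained) sample entropy of a 1-D signal.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Naive O(N^2) sample entropy."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # count ordered pairs (i != j) whose Chebyshev distance is below r
        return np.sum(dist < r) - len(templates)
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=5):
    """Sample entropy of coarse-grained versions of the signal."""
    out = []
    for s in range(1, max_scale + 1):
        n = (len(x) // s) * s
        coarse = np.asarray(x[:n]).reshape(-1, s).mean(axis=1)   # coarse-graining
        out.append(sample_entropy(coarse))
    return out

print(multiscale_entropy(np.random.randn(1000)))
```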


2020 ◽  
Vol 2020 ◽  
pp. 1-19
Author(s):  
Nazmi Sofian Suhaimi ◽  
James Mountstephens ◽  
Jason Teo

Emotions are fundamental for human beings and play an important role in human cognition. Emotion is commonly associated with logical decision making, perception, human interaction, and, to a certain extent, human intelligence itself. With the growing interest of the research community in establishing meaningful "emotional" interactions between humans and computers, reliable and deployable solutions for identifying human emotional states are required. Recent developments in using electroencephalography (EEG) for emotion recognition have garnered strong interest from the research community, as the latest consumer-grade wearable EEG solutions can provide a cheap, portable, and simple means of identifying emotions. Since the last comprehensive review covered the years 2009 to 2016, this paper updates the picture of progress in emotion recognition using EEG signals from 2016 to 2019. This state-of-the-art review focuses on emotion stimuli type and presentation approach, study size, EEG hardware, machine learning classifiers, and classification approach. From this review, we suggest several future research opportunities, including a different approach to presenting the stimuli in the form of virtual reality (VR). To this end, an additional section devoted specifically to reviewing only VR studies within this research domain is presented as the motivation for this proposed new approach using VR as the stimuli presentation device. This review paper is intended to be useful for the research community working on emotion recognition using EEG signals as well as for those who are venturing into this field of research.


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2739 ◽  
Author(s):  
Rami Alazrai ◽  
Rasha Homoud ◽  
Hisham Alwanni ◽  
Mohammad Daoud

Accurate recognition and understanding of human emotions is an essential skill that can improve the collaboration between humans and machines. In this vein, electroencephalogram (EEG)-based emotion recognition is considered an active research field with challenging issues regarding the analysis of nonstationary EEG signals and the extraction of salient features that can be used to achieve accurate emotion recognition. In this paper, an EEG-based emotion recognition approach with a novel time-frequency feature extraction technique is presented. In particular, a quadratic time-frequency distribution (QTFD) is employed to construct a high-resolution time-frequency representation of the EEG signals and capture their spectral variations over time. To reduce the dimensionality of the constructed QTFD-based representation, a set of 13 time- and frequency-domain features is extended to the joint time-frequency domain and employed to quantify the QTFD-based time-frequency representation of the EEG signals. Moreover, to describe different emotion classes, we utilize the 2D arousal-valence plane to develop four emotion labeling schemes for the EEG signals, such that each labeling scheme defines a set of emotion classes. The extracted time-frequency features are used to construct a set of subject-specific support vector machine classifiers to classify the EEG signals of each subject into the emotion classes defined by each of the four labeling schemes. The performance of the proposed approach is evaluated using a publicly available EEG dataset, namely the DEAP dataset. Moreover, we design three performance evaluation analyses, namely the channel-based analysis, the feature-based analysis, and the neutral class exclusion analysis, to quantify the effects of utilizing different groups of EEG channels covering various brain regions, reducing the dimensionality of the extracted time-frequency features, and excluding the EEG signals that correspond to the neutral class on the capability of the proposed approach to discriminate between different emotion classes. The results reported in the current study demonstrate the efficacy of the proposed QTFD-based approach in recognizing different emotion classes. In particular, the average classification accuracies obtained in differentiating between the various emotion classes defined by each of the four labeling schemes are within the range of 73.8%–86.2%. Moreover, the emotion classification accuracies achieved by our proposed approach are higher than the results reported in several existing state-of-the-art EEG-based emotion recognition studies.
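An illustrative sketch of one possible labeling scheme on the 2D arousal-valence plane: mapping DEAP-style self-assessment ratings (1-9 scale) to quadrant labels. The paper defines four such schemes, which are not detailed in the abstract; the threshold of 5 and the class names below are assumptions for illustration only.

```python
# Hedged sketch: quadrant labels from valence/arousal ratings (assumed scheme).
def quadrant_label(valence, arousal, threshold=5.0):
    if valence >= threshold and arousal >= threshold:
        return "high-valence/high-arousal"    # e.g., excited, happy
    if valence >= threshold:
        return "high-valence/low-arousal"     # e.g., calm, relaxed
    if arousal >= threshold:
        return "low-valence/high-arousal"     # e.g., angry, afraid
    return "low-valence/low-arousal"          # e.g., sad, bored

print(quadrant_label(7.2, 3.1))               # -> high-valence/low-arousal
```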


Author(s):  
S. Raghu ◽  
N. Sriraam ◽  
G. Pradeep Kumar

The scaling behavior of human electroencephalogram (EEG) signals can be exploited through appropriate extraction of time-frequency domain and entropy-based features. Such measures inherently help in understanding the neurophysiological phenomena of the brain and its associated cortical activities. Being non-linear time series, EEG signals are assumed to be fragments of fluctuations. Several attempts have been made to study EEG signals for clinical applications such as epileptic seizure detection, evoked response potential recognition, tumor detection, and identification of alcoholics. In all such applications, appropriate selection of feature parameters plays an important role in discriminating normal EEG from abnormal. In the recent past, wavelets and wavelet packets have become prominent in EEG analysis. This work investigates the effect of wavelet packet log energy entropy on EEG signals. Entropy being a measure of relative information, the study attempts to discriminate normal from abnormal EEGs by employing log energy entropy features. For brevity, the study is restricted to distinguishing epileptic seizure EEGs from normal EEGs. Decomposition levels from 2 to 5 were considered for wavelet packets using the Haar, rbio3.1, sym7, and dmey wavelets. One-second windowing was used for data segmentation, and Shannon's log energy entropy was estimated. The non-parametric Wilcoxon statistical test was then employed. The results show that wavelet packet log energy entropy is a potential indicator for discriminating epileptic seizures from normal EEG.
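A sketch of the core feature named in the abstract: log energy entropy of wavelet packet coefficients. PyWavelets is assumed as the implementation library, and the Haar wavelet at level 5 is one of the listed choices; the one-second windowing and the Wilcoxon testing are not reproduced here.

```python
# Hedged sketch: wavelet packet log energy entropy of one EEG window.
import numpy as np
import pywt

def wp_log_energy_entropy(sig, wavelet="haar", level=5):
    """Log energy entropy summed over all wavelet packet nodes at one level."""
    wp = pywt.WaveletPacket(data=sig, wavelet=wavelet, maxlevel=level)
    entropy = 0.0
    for node in wp.get_level(level, order="natural"):
        c = np.asarray(node.data, dtype=float)
        entropy += np.sum(np.log(c ** 2 + 1e-12))   # small offset avoids log(0)
    return entropy

sig = np.random.randn(128)                           # hypothetical 1 s EEG window at 128 Hz
print(wp_log_energy_entropy(sig))
```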


Author(s):  
Manal Tantawi ◽  
Aya Naser ◽  
Howida Shedeed ◽  
Mohammed Fahmy Tolba

Electroencephalogram (EEG) signals are a valuable source of information for detecting epileptic seizures. However, monitoring EEG for long periods of time is very exhausting and time consuming, so detecting epilepsy in EEG signals automatically is highly desirable. In this study, three classes are considered, namely normal, interictal (out of seizure time), and ictal (during seizure). Moreover, a comparative study of the efficient features in the literature is provided, resulting in a suggested combination of only three discriminative features, namely Rényi entropy, line length, and energy. These features are calculated from each of the EEG sub-bands. Finally, a support vector machine (SVM) classifier optimized using the BAT algorithm (BAT-SVM) is introduced for discriminating between the three classes. Experiments were conducted using the Andrzejak database. The experiments and comparisons carried out in this study emphasize the superiority of the proposed BAT-SVM, along with the suggested feature set, in achieving the best results.
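A minimal sketch of the three features named in the abstract, computed on a single EEG sub-band signal. The Rényi order alpha = 2 and the histogram-based probability estimate are assumptions; the sub-band decomposition and the BAT-optimized SVM are not reproduced.

```python
# Hedged sketch: Rényi entropy, line length, and energy of one EEG sub-band.
import numpy as np

def renyi_entropy(x, alpha=2, bins=32):
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()                          # empirical probabilities
    return np.log(np.sum(p ** alpha)) / (1 - alpha)

def line_length(x):
    return np.sum(np.abs(np.diff(x)))               # total absolute first difference

def energy(x):
    return np.sum(np.asarray(x, dtype=float) ** 2)

subband = np.random.randn(256)                      # hypothetical filtered EEG sub-band
print(renyi_entropy(subband), line_length(subband), energy(subband))
```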


Computers ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 95
Author(s):  
Rania Alhalaseh ◽  
Suzan Alasasfeh

Many scientific studies have been concerned with building automatic systems to recognize emotions, and such systems usually rely on brain signals. These studies have shown that brain signals can be used to classify many emotional states. This process is considered difficult, especially since the brain's signals are not stable. Human emotions are generated as a result of reactions to different emotional states, which affect brain signals. Thus, the performance of emotion recognition systems based on brain signals depends on the efficiency of the algorithms used to extract features, the feature selection algorithm, and the classification process. Recently, the study of electroencephalography (EEG) signals has received much attention due to the availability of several standard databases, especially since brain signal recording devices, including wireless ones, have become available on the market at reasonable prices. This work aims to present an automated model for identifying emotions based on EEG signals. The proposed model focuses on creating an effective method that combines the basic stages of EEG signal handling and feature extraction. Different from previous studies, the main contribution of this work lies in using empirical mode decomposition/intrinsic mode functions (EMD/IMF) and variational mode decomposition (VMD) for signal processing. Although EMD/IMF and VMD methods are widely used in biomedical and disease-related studies, they are not commonly utilized in emotion recognition; in other words, the methods used in the signal processing stage of this work differ from those used in the literature. After the signal processing stage, in the feature extraction stage, two well-known techniques were used: entropy and Higuchi's fractal dimension (HFD). Finally, in the classification stage, four classification methods were used for classifying emotional states: naïve Bayes, k-nearest neighbor (k-NN), convolutional neural network (CNN), and decision tree (DT). To evaluate the performance of the proposed model, experiments were conducted on the widely used DEAP database with several evaluation metrics, including accuracy, specificity, and sensitivity. The experiments showed the efficiency of the proposed method; an accuracy of 95.20% was achieved using the CNN-based method.
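A sketch of the decomposition stage described above, assuming the third-party PyEMD package for empirical mode decomposition (VMD is not shown). Each intrinsic mode function (IMF) is summarized here with a simple spectral entropy; the paper's exact entropy and Higuchi fractal dimension features are not reproduced.

```python
# Hedged sketch: EMD into IMFs, then an entropy summary per IMF.
import numpy as np
from PyEMD import EMD    # assumed third-party package: pip install EMD-signal

def spectral_entropy(x):
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

sig = np.random.randn(512)                # hypothetical one-channel EEG segment
imfs = EMD().emd(sig)                     # decompose into intrinsic mode functions
features = [spectral_entropy(imf) for imf in imfs]
print(len(imfs), features)
```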


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7103
Author(s):  
Heekyung Yang ◽  
Jongdae Han ◽  
Kyungha Min

Electroencephalogram (EEG) biosignals are widely used to measure human emotional reactions. Recent progress in deep learning-based classification models has improved the accuracy of emotion recognition from EEG signals. We apply a deep learning-based emotion recognition model to EEG biosignals to show that illustrated surgical images reduce the negative emotional reactions that photographic surgical images generate. The strong negative emotional reactions caused by surgical images, which show the internal structure of the human body (including blood, flesh, muscle, fatty tissue, and bone), act as an obstacle when explaining the images to patients or communicating about the images with non-professional people. We claim that the negative emotional reactions generated by illustrated surgical images are less severe than those caused by raw surgical images. To demonstrate the difference in emotional reaction, we produce several illustrated surgical images from photographs and measure the emotional reactions they engender using EEG biosignals; a deep learning-based emotion recognition model is applied to extract the emotional reactions. Through this experiment, we show that the negative emotional reactions associated with photographic surgical images are much stronger than those caused by illustrated versions of identical images. We further conduct a self-assessment user survey to show that the emotions recognized from EEG signals effectively represent user-annotated emotions.

