Human emotion recognition through short time Electroencephalogram (EEG) signals using Fast Fourier Transform (FFT)

Author(s):  
M. Murugappan ◽  
S. Murugappan
2020 ◽  
Vol 5 (6) ◽  
pp. 1082-1088
Author(s):  
Anton Yudhana ◽  
Akbar Muslim ◽  
Dewi Eko Wati ◽  
Intan Puspitasari ◽  
Ahmad Azhari ◽  
...  

Mekatronika ◽  
2020 ◽  
Vol 2 (1) ◽  
pp. 1-7
Author(s):  
Jothi Letchumy Mahendra Kumar ◽  
Mamunur Rashid ◽  
Rabiu Muazu Musa ◽  
Mohd Azraai Mohd Razman ◽  
Norizam Sulaiman ◽  
...  

Brain-Computer Interfaces (BCIs) offer a means of controlling prostheses for patients with neurological disorders, whose inherent physical limitations prevent them from operating such devices conventionally. More often than not, the control of such devices exploits Electroencephalogram (EEG) signals. Nonetheless, it is worth noting that the extraction of features from these signals is often a laborious undertaking. Transfer Learning (TL) has been demonstrated to mitigate this issue; however, its employment in BCI applications, particularly with regard to EEG signals, remains limited. The present study aims to assess the effectiveness of a number of DenseNet TL models, viz. DenseNet169, DenseNet121 and DenseNet201, in extracting features for the classification of wink-based EEG signals. The extracted features are then classified through an optimised Random Forest (RF) classifier. The raw EEG signals are transformed into spectrogram images via the Fast Fourier Transform (FFT) before being fed into the selected TL models. The dataset was split with a stratified 60:20:20 ratio into train, test, and validation sets, respectively. The hyperparameters of the RF model were optimised through a grid search that utilises five-fold cross-validation. It was established from the study that, amongst the DenseNet pipelines evaluated, DenseNet169 performed the best, with an overall validation and test accuracy of 89%. The findings of the present investigation could facilitate BCI applications, e.g., for a grasping exoskeleton.
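
A minimal sketch of such a pipeline is given below, assuming a Keras DenseNet169 backbone (frozen ImageNet weights) as the feature extractor and a scikit-learn Random Forest tuned by grid search with five-fold cross-validation; the file names, image size and hyperparameter grid are illustrative assumptions rather than the study's actual settings.

```python
# Sketch: DenseNet169 feature extraction + grid-searched Random Forest on
# FFT spectrogram images of wink EEG trials. Array names/files are hypothetical.
import numpy as np
from tensorflow.keras.applications import DenseNet169
from tensorflow.keras.applications.densenet import preprocess_input
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# X_img: spectrogram images, shape (n_trials, 224, 224, 3); y: class labels.
X_img = np.load("wink_spectrograms.npy")   # hypothetical file
y = np.load("wink_labels.npy")             # hypothetical file

# Frozen DenseNet169 backbone with global average pooling as feature extractor.
backbone = DenseNet169(include_top=False, weights="imagenet", pooling="avg")
features = backbone.predict(preprocess_input(X_img.astype("float32")))

# Stratified 60:20:20 split into train / validation / test subsets.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    features, y, test_size=0.4, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=42)

# Grid search over RF hyperparameters with five-fold cross-validation.
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300, 500], "max_depth": [None, 10, 20]},
    cv=5, n_jobs=-1)
grid.fit(X_train, y_train)

print("validation accuracy:", grid.score(X_val, y_val))
print("test accuracy:", grid.score(X_test, y_test))
```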


2015 ◽  
Vol 12 (03) ◽  
pp. 1550021 ◽  
Author(s):  
M. A. Al-Manie ◽  
W. J. Wang

Owing to the advantages offered by the S-transform (ST), it has recently been successfully applied to various fields such as seismic and image processing. The desirable properties of the ST include a globally referenced phase, as is the case with the short-time Fourier transform (STFT), while offering the frequency-dependent resolution of the wavelet transform (WT). However, this estimator suffers from some inherent disadvantages, seen as poor energy concentration at higher frequencies. In order to improve the performance of the distribution, a modification to the existing technique is proposed: additional parameters control the window's width, which can greatly enhance the signal representation in the time–frequency plane. The new estimator's performance is evaluated using synthetic signals as well as biomedical data. The required features of the ST, including invertibility and phase information, are still preserved.
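
A minimal frequency-domain sketch of a generalised ST with an adjustable Gaussian window is shown below; the (alpha, beta) parameterisation of the window width is purely illustrative, as the paper's exact window modification is not reproduced here.

```python
# Sketch of a generalised S-transform with adjustable Gaussian-window width,
# computed in the frequency domain. alpha = beta = 1 recovers the standard ST.
import numpy as np

def modified_stransform(x, alpha=1.0, beta=1.0):
    """Return S[k, t]: rows are frequency bins 1..N//2, columns are time samples."""
    N = len(x)
    X = np.fft.fft(x)
    freqs = np.arange(1, N // 2 + 1)          # skip the DC bin
    S = np.zeros((len(freqs), N), dtype=complex)
    m = np.fft.fftfreq(N, d=1.0 / N)          # integer frequency-shift index
    for i, n in enumerate(freqs):
        # Gaussian localisation window in the frequency domain; alpha and beta
        # widen or narrow it relative to the standard ST.
        G = np.exp(-2.0 * np.pi**2 * (m**2) * (alpha**2) / n**(2.0 * beta))
        S[i, :] = np.fft.ifft(np.roll(X, -n) * G)
    return S

# Example: a chirp-like synthetic test signal.
t = np.linspace(0, 1, 256, endpoint=False)
sig = np.sin(2 * np.pi * (20 * t + 40 * t**2))
S = modified_stransform(sig, alpha=0.8, beta=1.0)
print(S.shape)   # (128, 256)
```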


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2854 ◽  
Author(s):  
Kwon-Woo Ha ◽  
Jin-Woo Jeong

Various convolutional neural network (CNN)-based approaches have recently been proposed to improve the performance of motor imagery-based brain-computer interfaces (BCIs). However, the classification accuracy of CNNs is compromised when the target data are distorted. Specifically for motor imagery electroencephalogram (EEG), the measured signals, even from the same person, are not consistent and can be significantly distorted. To overcome these limitations, we propose to apply a capsule network (CapsNet) for learning various properties of EEG signals, thereby achieving better and more robust performance than previous CNN methods. The proposed CapsNet-based framework classifies two-class motor imagery, namely right-hand and left-hand movements. The motor imagery EEG signals are first transformed into 2D images using the short-time Fourier transform (STFT) and then used for training and testing the capsule network. The performance of the proposed framework was evaluated on the BCI competition IV 2b dataset. The proposed framework outperformed state-of-the-art CNN-based methods and various conventional machine learning approaches. The experimental results demonstrate the feasibility of the proposed approach for the classification of motor imagery EEG signals.
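
The STFT preprocessing step can be sketched as follows; the sampling rate, channel selection, window length and frequency band are assumptions for illustration, not the settings used in the paper.

```python
# Sketch: turn one motor-imagery trial into a 2-D time-frequency image that a
# CapsNet (or CNN) can consume. Placeholder data stands in for real EEG.
import numpy as np
from scipy.signal import stft

fs = 250                                   # assumed sampling rate (Hz)
trial = np.random.randn(3, 4 * fs)         # placeholder: 3 channels (e.g., C3, Cz, C4), 4 s

images = []
for ch in trial:
    f, t, Z = stft(ch, fs=fs, nperseg=64, noverlap=48)
    band = (f >= 4) & (f <= 40)            # keep mu/beta-range frequencies
    images.append(np.abs(Z[band]))
img = np.stack(images, axis=-1)            # (freq, time, channels) image
img = (img - img.min()) / (img.max() - img.min() + 1e-12)   # normalise to [0, 1]
print(img.shape)
```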


2020 ◽  
Vol 49 (3) ◽  
pp. 285-298
Author(s):  
Jian Zhang ◽  
Yihou Min

Human emotion recognition is of vital importance to realising human-computer interaction (HCI), and with the development of brain-computer interfaces (BCIs), multichannel electroencephalogram (EEG) signals have gradually replaced other physiological signals as the main basis of emotion recognition research. However, the accuracy of emotion classification based on EEG signals under video stimulation is not stable, which may be related to the characteristics of the EEG signals before the stimulation is received. In this study, we extract the change in Differential Entropy (DE) before and after stimulation, based on the wavelet packet transform (WPT), to identify individual emotional states. Using the DEAP EEG emotion database, we divide the experimental EEG data equally into 15 sets and extract their differential entropy on the basis of the WPT. We then calculate the change in DE for each separated EEG signal set. Finally, we divide emotion into four categories in the two-dimensional valence-arousal space by combining these features with the ensemble algorithm Random Forest (RF). The simulation results show that the WPT-RF model established by this method greatly improves the recognition rate of EEG signals, with an average classification accuracy of 87.3%. In addition, when the WPT-RF model is trained on individual subjects, the classification accuracy reaches 97.7%.
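
A minimal sketch of the WPT-plus-DE feature step is given below, using the common Gaussian approximation DE = 0.5·ln(2πeσ²) per sub-band; the wavelet family, decomposition level and segment lengths are assumptions, not the study's exact choices.

```python
# Sketch: wavelet-packet decomposition of an EEG segment, then differential
# entropy (DE) per terminal sub-band, and the pre/post-stimulus DE change.
import numpy as np
import pywt

def wpt_de_features(segment, wavelet="db4", level=4):
    wp = pywt.WaveletPacket(data=segment, wavelet=wavelet, maxlevel=level)
    features = []
    for node in wp.get_level(level, order="freq"):   # sub-bands in frequency order
        var = np.var(node.data)
        features.append(0.5 * np.log(2 * np.pi * np.e * (var + 1e-12)))
    return np.array(features)

fs = 128                                  # DEAP's downsampled rate (Hz)
pre = np.random.randn(3 * fs)             # placeholder pre-stimulus baseline
post = np.random.randn(3 * fs)            # placeholder stimulated segment
delta_de = wpt_de_features(post) - wpt_de_features(pre)
print(delta_de.shape)                     # one value per wavelet-packet sub-band
```

These per-sub-band DE changes would then be concatenated across channels and fed to the Random Forest classifier for the four valence-arousal quadrants.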


2019 ◽  
Vol 7 (10) ◽  
pp. 43-47
Author(s):  
Shinde Ashok R. ◽  
Agnihotri Prashant P. ◽  
Raut S.D. ◽  
Khanale Prakash B.

Author(s):  
Pedro Miguel Rodrigues ◽  
João Paulo Teixeira

Alzheimer’s Disease (AD) is the most common cause of dementia and is well known for its effects on memory and other intellectual abilities. The Electroencephalogram (EEG) has been used as a diagnostic tool for dementia for several decades. The main objective of this work was to develop an Artificial Neural Network (ANN) to classify EEG signals between AD patients and control subjects. For this purpose, two different methodologies and their variations were used: the Short-Time Fourier Transform (STFT) was applied in one methodology and the Wavelet Transform (WT) in the other. The studied features of the EEG signals were the relative power in the conventional EEG bands and their associated spectral ratios (r1, r2, r3, and r4). The best classification was achieved by the ANN using the WT with the Biorthogonal 3.5 wavelet, with an AROC of 0.97, a sensitivity of 92.1%, a specificity of 90.8%, and an accuracy of 91.5%.
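
The relative-power feature extraction can be sketched as follows, using a Welch PSD and conventional band edges; the single ratio shown is illustrative only, since the paper's exact definitions of r1–r4 are not reproduced here.

```python
# Sketch: relative power per conventional EEG band from a Welch PSD,
# normalised by total power, plus one illustrative spectral ratio.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def relative_band_powers(x, fs):
    f, psd = welch(x, fs=fs, nperseg=2 * fs)
    total = trapezoid(psd, f)
    rel = {}
    for name, (lo, hi) in BANDS.items():
        mask = (f >= lo) & (f < hi)
        rel[name] = trapezoid(psd[mask], f[mask]) / total
    return rel

fs = 256                                  # assumed sampling rate (Hz)
eeg = np.random.randn(30 * fs)            # placeholder 30 s single-channel EEG
rp = relative_band_powers(eeg, fs)
example_ratio = rp["theta"] / rp["alpha"] # illustrative spectral ratio
print(rp, example_ratio)
```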


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7103
Author(s):  
Heekyung Yang ◽  
Jongdae Han ◽  
Kyungha Min

Electroencephalogram (EEG) biosignals are widely used to measure human emotional reactions. Recent progress in deep learning-based classification models has improved the accuracy of emotion recognition from EEG signals. We apply a deep learning-based emotion recognition model to EEG biosignals to show that illustrated surgical images reduce the negative emotional reactions that photographic surgical images generate. The strong negative emotional reactions caused by surgical images, which show the internal structure of the human body (including blood, flesh, muscle, fatty tissue, and bone), act as an obstacle when explaining the images to patients or discussing them with non-professionals. We claim that the negative emotional reactions generated by illustrated surgical images are less severe than those caused by raw surgical images. To demonstrate this difference, we produce several illustrated surgical images from photographs and measure the emotional reactions they engender using EEG biosignals; a deep learning-based emotion recognition model is applied to extract the emotional reactions. Through this experiment, we show that the negative emotional reactions associated with photographic surgical images are much stronger than those caused by illustrated versions of the same images. We further conduct a self-assessed user survey to show that the emotions recognized from EEG signals effectively represent the user-annotated emotions.
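
A minimal sketch of the kind of paired comparison this implies is shown below; the score arrays, the use of a paired t-test, and the placeholder values are illustrative assumptions rather than the study's actual analysis.

```python
# Sketch: compare per-subject negative-emotion scores (e.g., the recogniser's
# predicted probability of a negative class) between photographic and
# illustrated versions of the same surgical images. Data are placeholders.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
photo_scores = rng.uniform(0.5, 0.9, size=30)                   # photographic condition
illus_scores = photo_scores - rng.uniform(0.1, 0.3, size=30)    # illustrated condition

t_stat, p_value = ttest_rel(photo_scores, illus_scores)
print(f"mean difference = {np.mean(photo_scores - illus_scores):.3f}, p = {p_value:.4f}")
```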

