A Hybrid EEG-Based Emotion Recognition Approach Using Wavelet Convolutional Neural Networks (WCNN) and Support Vector Machine

Author(s):  
Sara Bagherzadeh

Nowadays, deep learning and convolutional neural networks (CNNs) have become widespread tools in many biomedical engineering studies. A CNN is an end-to-end tool that integrates the whole processing procedure, but in some situations it needs to be fused with classical machine learning methods to become more accurate. In this paper, a hybrid approach based on deep features extracted from the weighted layers of Wavelet CNNs (WCNNs) and a multiclass support vector machine (MSVM) is proposed to improve the recognition of emotional states from electroencephalogram (EEG) signals. First, EEG signals were preprocessed and converted to a time-frequency (T-F) color representation, or scalogram, using the continuous wavelet transform (CWT). Next, the scalograms were fed into four popular pre-trained CNNs, AlexNet, ResNet-18, VGG-19 and Inception-v3, to fine-tune them. The best feature layer from each network was then used as input to the MSVM to classify the four quadrants of the valence-arousal model. Finally, the subject-independent Leave-One-Subject-Out criterion was used to evaluate the proposed method on the DEAP and MAHNOB-HCI databases. Results show that extracting deep features from an early convolutional layer of ResNet-18 (Res2a) and classifying them with the MSVM increases the average accuracy, precision and recall by about 20% and 12% for the MAHNOB-HCI and DEAP databases, respectively. Also, combining scalograms from the four regions of pre-frontal, frontal, parietal and parietal-occipital, and from the two regions of frontal and parietal, achieved the highest average accuracies of 77.47% and 87.45% for the MAHNOB-HCI and DEAP databases, respectively. Combining the CNN and MSVM improved the recognition of emotion from EEG signals, and the results were comparable to state-of-the-art studies.
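
As a concrete illustration of the pipeline described above, the sketch below (Python with torchvision and scikit-learn, chosen here only for illustration) taps deep features from an early convolutional block of a pre-trained ResNet-18 and feeds them to a multiclass SVM over the four valence-arousal quadrants. The hooked layer (layer1[0], standing in for Res2a), the input size, and the data are placeholder assumptions, not the authors' exact configuration.

```python
# Hedged sketch: deep features from an early ResNet-18 block + multiclass SVM.
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC

resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.eval()

features = {}
def hook(module, inp, out):
    # Global-average-pool the early feature maps into one vector per scalogram.
    features["res2a"] = out.mean(dim=(2, 3)).detach().cpu().numpy()

# layer1[0] is an assumption for an early residual block analogous to Res2a.
resnet.layer1[0].register_forward_hook(hook)

def extract_features(scalograms: torch.Tensor) -> np.ndarray:
    """scalograms: (N, 3, 224, 224) batch of CWT time-frequency images."""
    with torch.no_grad():
        resnet(scalograms)
    return features["res2a"]

# Multiclass SVM over the four valence-arousal quadrants (labels 0..3);
# X_train / y_train would come from DEAP or MAHNOB-HCI scalograms.
X_train = extract_features(torch.randn(8, 3, 224, 224))   # placeholder data
y_train = np.random.randint(0, 4, size=8)                 # placeholder labels
msvm = SVC(kernel="rbf", decision_function_shape="ovo").fit(X_train, y_train)
print(msvm.predict(extract_features(torch.randn(2, 3, 224, 224))))
```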

Symmetry
2019
Vol 11 (9)
pp. 1151
Author(s):
Patalas-Maliszewska
Halikowski

(1) Background: Improving the management and effectiveness of employees' learning processes within manufacturing companies has attracted a high level of attention in recent years, especially within the context of Industry 4.0. Convolutional Neural Networks with a Support Vector Machine (CNN-SVM) can be applied in this business field in order to generate workplace procedures. To overcome the problem of usefully acquiring and sharing specialist knowledge, we use CNN-SVM to extract features from video material of each work activity for comparison with the features of the instruction pictures. (2) Methods: This paper uses literature studies and a selected workplace procedure, repairing a solid-fuel boiler, as the benchmark dataset, which contains a 20 s training video and a test video, in order to provide a reference model of features for a workplace procedure. In this model, the CNN-SVM method effectively determines features for the subsequent comparison and detection of objects. (3) Results: The innovative model for generating a workplace procedure using the CNN-SVM architecture, once built, can then be used to support the learning process of employees of manufacturing companies. The novelty of the proposed methodology is its architecture, which combines the acquisition of specialist knowledge with its formalisation and recording in a form useful for new employees in the company. Moreover, three new algorithms were created: an algorithm to match features, an algorithm to detect each activity in the workplace procedure, and an algorithm to generate an activity scenario. (4) Conclusions: The efficiency of the proposed methodology can be demonstrated on a dataset comprising a collection of workplace procedures, such as the repair of the solid-fuel boiler. We also highlight that it is often impractical for managers of manufacturing companies to support in-company learning processes, owing to a lack of resources for teaching new employees.
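
A rough sketch of the feature-matching and activity-detection idea described in (3) follows: per-frame CNN feature vectors are compared with the instruction pictures' reference features by cosine similarity, and an activity scenario is assembled from the matches. The similarity threshold, feature dimension, and activity names are illustrative assumptions, not the authors' published algorithms.

```python
# Hedged sketch: match video-frame features against instruction-picture features.
import numpy as np
from numpy.linalg import norm

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (norm(a) * norm(b) + 1e-12))

def detect_activities(frame_feats, instruction_feats, threshold=0.85):
    """frame_feats: list of per-frame CNN feature vectors.
    instruction_feats: dict mapping activity name -> reference feature vector.
    Returns the ordered activity scenario inferred from the video."""
    scenario = []
    for feat in frame_feats:
        # Match each frame against every instruction picture.
        best_activity, best_score = None, threshold
        for activity, ref in instruction_feats.items():
            score = cosine_sim(feat, ref)
            if score > best_score:
                best_activity, best_score = activity, score
        # Record an activity only when it first appears (scenario generation).
        if best_activity and (not scenario or scenario[-1] != best_activity):
            scenario.append(best_activity)
    return scenario

# Placeholder 512-D features standing in for CNN-SVM outputs.
rng = np.random.default_rng(0)
frames = [rng.standard_normal(512) for _ in range(5)]
instructions = {"open boiler door": rng.standard_normal(512),
                "remove ash pan": rng.standard_normal(512)}
print(detect_activities(frames, instructions))
```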


2020
Vol 0 (0)
Author(s):
Mehmet Akif Ozdemir
Murside Degirmenci
Elif Izci
Aydin Akan

The emotional state of people plays a key role in physiological and behavioral human interaction. Emotional state analysis involves many fields such as neuroscience, the cognitive sciences, and biomedical engineering, because the parameters of interest reflect the complex neuronal activity of the brain. Electroencephalogram (EEG) signals are processed to communicate brain activity to external systems and to make predictions about emotional states. This paper proposes a novel method for emotion recognition based on deep convolutional neural networks (CNNs) that are used to classify the Valence, Arousal, Dominance, and Liking emotional states. Hence, a novel approach is proposed for emotion recognition with time series of multi-channel EEG signals from the Database for Emotion Analysis using Physiological Signals (DEAP). We propose a new approach to emotional state estimation utilizing CNN-based classification of multi-spectral topology images obtained from EEG signals. In contrast to most EEG-based approaches, which discard the spatial information of EEG signals, converting the EEG into a sequence of multi-spectral topology images preserves its temporal, spectral, and spatial information. A deep recurrent convolutional network is trained to learn important representations from the sequence of three-channel topographical images. We achieved test accuracies of 90.62% for negative and positive Valence, 86.13% for high and low Arousal, 88.48% for high and low Dominance, and finally 86.23% for like–unlike. Evaluations of this method on the emotion recognition problem revealed significant improvements in classification accuracy when compared with other studies using deep neural networks (DNNs) and one-dimensional CNNs.
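
A minimal sketch of how multi-channel EEG can be turned into a three-channel topographic image, in the spirit of the multi-spectral topology images described above, is given below (Python with NumPy/SciPy). The electrode coordinates, band limits, and grid size are assumptions for illustration, not the authors' exact preprocessing.

```python
# Hedged sketch: per-channel band powers interpolated onto a scalp image.
import numpy as np
from scipy.interpolate import griddata
from scipy.signal import welch

FS = 128                      # DEAP's downsampled sampling rate
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg: np.ndarray) -> np.ndarray:
    """eeg: (n_channels, n_samples). Returns (n_channels, 3) band powers."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2, axis=-1)
    powers = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        powers.append(psd[:, mask].mean(axis=-1))
    return np.stack(powers, axis=-1)

def topo_image(powers: np.ndarray, xy: np.ndarray, size: int = 32) -> np.ndarray:
    """Interpolate per-channel band powers onto a size x size x 3 scalp image."""
    gx, gy = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    channels = [griddata(xy, powers[:, b], (gx, gy), method="cubic", fill_value=0)
                for b in range(powers.shape[-1])]
    return np.stack(channels, axis=-1)   # one "color" channel per frequency band

# Placeholder data: 32 electrodes with random 2-D positions and 10 s of EEG.
xy = np.random.uniform(-0.9, 0.9, size=(32, 2))
eeg = np.random.randn(32, FS * 10)
image = topo_image(band_powers(eeg), xy)
print(image.shape)   # (32, 32, 3) -> one frame of the image sequence fed to the CNN
```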


2018
Vol 8 (11)
pp. 2086
Author(s):
Antonio-Javier Gallego
Antonio Pertusa
Jorge Calvo-Zaragoza

We present a hybrid approach to improve the accuracy of Convolutional Neural Networks (CNNs) without retraining the model. The proposed architecture replaces the softmax layer with a k-Nearest Neighbor (kNN) algorithm for inference. Although this is a common technique in transfer learning, we apply it to the same domain for which the network was trained. Previous works show that neural codes (the neuron activations of the last hidden layers) can benefit from the inclusion of classifiers such as support vector machines or random forests. In this work, our proposed hybrid CNN + kNN architecture is evaluated using several image datasets, network topologies and label-noise levels. The results show significant accuracy improvements in the inference stage with respect to the standard CNN with noisy labels, especially with relatively large datasets such as CIFAR100. We also verify that applying the ℓ2 norm to the neural codes is statistically beneficial for this approach.
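
The hedged sketch below shows the core of such a CNN + kNN inference scheme: the classification head is bypassed, the last hidden layer's activations (the neural codes) are ℓ2-normalized, and a kNN classifier is fit over the training-set codes. The ResNet-18 backbone, the value of k, and the placeholder data are assumptions, not the authors' exact setup.

```python
# Hedged sketch: replace the softmax head with kNN over l2-normalized neural codes.
import numpy as np
import torch
from torch import nn
from torchvision import models
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import normalize

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # drop the softmax/classification head
backbone.eval()

def neural_codes(images: torch.Tensor) -> np.ndarray:
    with torch.no_grad():
        codes = backbone(images).cpu().numpy()
    return normalize(codes)          # l2 norm, reported as statistically beneficial

# Placeholder tensors standing in for training and test images.
train_x, train_y = torch.randn(16, 3, 224, 224), np.random.randint(0, 10, 16)
test_x = torch.randn(4, 3, 224, 224)

knn = KNeighborsClassifier(n_neighbors=3).fit(neural_codes(train_x), train_y)
print(knn.predict(neural_codes(test_x)))
```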


Information
2021
Vol 12 (5)
pp. 187
Author(s):  
Shingchern D. You

In this paper, we study the use of EEG (electroencephalography) to classify between concentrated and relaxed mental states. In the literature, most EEG recording systems are expensive, medical-grade devices, and their cost limits their availability in the consumer market. Here, the EEG signals are obtained from a toy-grade EEG device with one channel of output data. The experiments are conducted in two runs, with 7 and 10 subjects, respectively. Each subject is asked to silently recite, backwards, a five-digit number given by the tester. The recorded EEG signals are converted to time-frequency representations by the software accompanying the device. A simple average is used to aggregate multiple spectral components into EEG bands, such as the α, β, and γ bands. The chosen classifiers are an SVM (support vector machine) and a multi-layer feedforward network, trained individually for each subject. Experimental results show that, with features built from the α+β+γ bands at a 4 Hz bandwidth, the average accuracy over all subjects in both runs can reach more than 80% with the SVM classifier, and above 90% for some subjects. The results suggest that a brain-machine interface based on the mental state of the user could be implemented even with a cheap EEG device.
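
A small sketch of the band-aggregation and SVM step described above follows: spectral components from the device's time-frequency output are averaged into 4 Hz-wide sub-bands within the α, β, and γ ranges and fed to a per-subject SVM. The band edges, data shapes, and random placeholder spectra are assumptions for illustration only.

```python
# Hedged sketch: average spectral components into 4 Hz sub-band features + SVM.
import numpy as np
from sklearn.svm import SVC

BANDS = {"alpha": (8, 12), "beta": (12, 32), "gamma": (32, 48)}
SUB_BW = 4   # aggregate the spectrum into 4 Hz-wide sub-bands

def band_features(spectrogram: np.ndarray, freqs: np.ndarray) -> np.ndarray:
    """spectrogram: (n_frames, n_freq_bins). Returns averaged sub-band features."""
    feats = []
    for lo, hi in BANDS.values():
        for f in range(lo, hi, SUB_BW):
            mask = (freqs >= f) & (freqs < f + SUB_BW)
            feats.append(spectrogram[:, mask].mean())   # simple average per sub-band
    return np.asarray(feats)

# Placeholder single-channel spectra for "relaxed" vs "concentrated" segments.
freqs = np.linspace(0, 64, 129)
X = np.stack([band_features(np.abs(np.random.randn(50, 129)), freqs) for _ in range(40)])
y = np.tile([0, 1], 20)                       # 0 = relaxed, 1 = concentrated
clf = SVC(kernel="rbf").fit(X[:30], y[:30])   # trained individually per subject
print(clf.score(X[30:], y[30:]))
```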


Author(s):  
Alexis David Pascual
Kenneth McIsaac
Gordon Osinski

Autonomous image recognition has numerous potential applications in the fields of planetary science and geology. For instance, the ability to classify images of rocks would give geologists immediate feedback without their having to bring samples back to the laboratory. Also, planetary rovers could classify rocks in remote places, and even on other planets, without needing human intervention. Shu et al. classified 9 different types of rock images using a Support Vector Machine (SVM) with image features extracted autonomously, achieving a test accuracy of 96.71%. In this research, Convolutional Neural Networks (CNNs) have been used to classify the same set of rock images. Results show that a 3-layer network obtains an average accuracy of 99.60% across 10 trials on the test set. A version of Self-taught Learning was also implemented to demonstrate the generalizability of the features extracted by the CNN. Finally, one model was chosen to be deployed on a mobile device to demonstrate practicality and portability. The deployed model achieves perfect classification accuracy on the test set while taking only 0.068 seconds to make a prediction, equivalent to about 14 frames per second.
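
For illustration, a small 3-convolutional-layer classifier of the kind described above might look like the sketch below (PyTorch); the filter counts, input image size, and the 9 rock classes are illustrative assumptions, not the authors' published architecture.

```python
# Hedged sketch: a compact 3-convolutional-layer image classifier.
import torch
from torch import nn

class RockCNN(nn.Module):
    def __init__(self, n_classes: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 16 * 16, n_classes)  # assumes 128x128 input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = RockCNN()
logits = model(torch.randn(2, 3, 128, 128))   # two placeholder rock images
print(logits.argmax(dim=1))                   # predicted rock class per image
```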

