GameEmo-CapsNet: Emotion Recognition from Single-Channel EEG Signals Using the 1D Capsule Networks

2021 ◽  
Vol 38 (6) ◽  
pp. 1689-1698
Author(s):  
Suat Toraman ◽  
Ömer Osman Dursun

Human emotion recognition with machine learning methods through electroencephalographic (EEG) signals has become a highly interesting subject for researchers. Although it is simple to define emotions that can be expressed physically, such as speech, facial expressions, and gestures, it is more difficult to define psychological emotions that are expressed internally. The most important stimuli for revealing inner emotions are aural and visual stimuli. In this study, EEG signals elicited by both aural and visual stimuli were examined, and emotions were evaluated with both binary and multi-class emotion recognition models. A general emotion recognition model was proposed for non-subject-based classification. Unlike previous studies, subject-based testing was performed for the first time in the literature. Capsule Networks, a new neural network model, were developed for binary and multi-class emotion recognition. In the proposed method, a novel fusion strategy was introduced for binary-class emotion recognition, and the model was tested on the GAMEEMO dataset. Binary-class emotion recognition achieved a classification accuracy 10% higher than the classification performance reported in other studies in the literature. Based on these findings, we suggest that the proposed method will bring a different perspective to emotion recognition.
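For readers unfamiliar with capsule networks, the sketch below illustrates a 1D capsule layer with dynamic routing by agreement in PyTorch. The layer sizes, kernel widths, input window length, and routing iterations are illustrative assumptions, not the architecture used in the paper.

```python
# A minimal sketch of a 1D capsule network for binary emotion
# classification from single-channel EEG windows. Hyperparameters
# (512-sample windows, 8-D primary capsules, 3 routing iterations)
# are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Shrinks short vectors towards 0 and long vectors towards unit length.
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

class CapsNet1D(nn.Module):
    def __init__(self, in_len=512, n_classes=2, caps_dim=8, out_dim=16, iters=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 64, kernel_size=9, stride=2)
        self.primary = nn.Conv1d(64, 32 * caps_dim, kernel_size=9, stride=2)
        self.caps_dim, self.iters = caps_dim, iters
        l1 = (in_len - 9) // 2 + 1               # length after self.conv
        n_caps = 32 * ((l1 - 9) // 2 + 1)        # total primary capsules
        self.W = nn.Parameter(                   # per-pair transform matrices
            0.01 * torch.randn(1, n_caps, n_classes, out_dim, caps_dim))

    def forward(self, x):                        # x: (batch, 1, in_len)
        u = self.primary(F.relu(self.conv(x)))
        u = squash(u.view(x.size(0), -1, self.caps_dim))
        # Prediction vectors u_hat[b, i, j] = W[i, j] @ u[b, i]
        u_hat = (self.W @ u[:, :, None, :, None]).squeeze(-1)
        b = torch.zeros(u_hat.shape[:3], device=x.device)
        for _ in range(self.iters):              # dynamic routing by agreement
            c = b.softmax(dim=2)                 # coupling over class capsules
            v = squash((c[..., None] * u_hat).sum(dim=1))
            b = b + (u_hat * v[:, None]).sum(-1)
        return v.norm(dim=-1)                    # capsule length per class

# Hypothetical usage on 512-sample single-channel EEG windows:
model = CapsNet1D()
scores = model(torch.randn(4, 1, 512))          # (4, 2) class scores
```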

Author(s):  
Mircea Zloteanu ◽  
Eva G. Krumhuber ◽  
Daniel C. Richardson

People are accurate at classifying emotions from facial expressions but much poorer at determining whether such expressions are spontaneously felt or deliberately posed. We explored whether the method used by senders to produce an expression influences the decoder’s ability to discriminate authenticity, drawing inspiration from two well-known acting techniques: the Stanislavski (internal) and Mimic (external) methods. We compared spontaneous surprise expressions in response to a jack-in-the-box (genuine condition) with posed displays by senders who focused either on their past affective state (internal condition) or on the outward expression (external condition). Although decoders performed better than chance at discriminating the authenticity of all expressions, their accuracy was lower when classifying external surprise than internal surprise. Decoders also found it harder to discriminate external surprise from spontaneous surprise and were less confident in their decisions, perceiving these expressions to be similarly intense but less genuine-looking. The findings suggest that senders are capable of voluntarily producing genuine-looking expressions of emotion with minimal effort, especially by mimicking a genuine expression. Implications for research on emotion recognition are discussed.


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4723
Author(s):  
Patrícia Bota ◽  
Chen Wang ◽  
Ana Fred ◽  
Hugo Silva

Emotion recognition based on physiological data classification has been a topic of growing interest for more than a decade. However, there is a lack of systematic analysis in the literature regarding the selection of classifiers, sensor modalities, features, and the range of expected accuracy, to name a few limitations. In this work, we evaluate emotion in terms of low/high arousal and valence classification through Supervised Learning (SL), Decision Fusion (DF) and Feature Fusion (FF) techniques using multimodal physiological data, namely Electrocardiography (ECG), Electrodermal Activity (EDA), Respiration (RESP), and Blood Volume Pulse (BVP). The main contribution of our work is a systematic study across five public datasets commonly used in the Emotion Recognition (ER) state-of-the-art, namely: (1) Classification performance analysis of ER benchmarking datasets in the arousal/valence space; (2) Summarising the ranges of the classification accuracy reported across the existing literature; (3) Characterising the results for diverse classifiers, sensor modalities and feature set combinations for ER using accuracy and F1-score; (4) Exploration of an extended feature set for each modality; (5) Systematic analysis of multimodal classification in DF and FF approaches. The experimental results showed that FF is the most competitive technique in terms of classification accuracy and computational complexity. We obtain superior or comparable results to those reported in the state-of-the-art for the selected datasets.
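As a rough illustration of the difference between the two fusion strategies, the sketch below trains one classifier on concatenated per-modality features (FF) and, separately, one classifier per modality whose class probabilities are averaged (DF). It assumes scikit-learn; the feature dimensions and random data are placeholders, and the paper's actual feature sets and classifiers vary.

```python
# A minimal sketch contrasting feature fusion (FF) and decision fusion (DF)
# for binary arousal/valence classification. Per-modality feature matrices
# (ECG/EDA/RESP/BVP) are stand-in random arrays, not real extracted features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
feats = {m: rng.normal(size=(n, d))             # hypothetical feature matrices
         for m, d in [("ECG", 40), ("EDA", 20), ("RESP", 15), ("BVP", 25)]}
y = rng.integers(0, 2, size=n)                  # low/high arousal labels
idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Feature fusion: concatenate modality features, train a single classifier.
X = np.hstack(list(feats.values()))
ff = RandomForestClassifier(random_state=0).fit(X[idx_tr], y[idx_tr])
ff_acc = ff.score(X[idx_te], y[idx_te])

# Decision fusion: one classifier per modality; average class probabilities.
probas = []
for Xm in feats.values():
    clf = RandomForestClassifier(random_state=0).fit(Xm[idx_tr], y[idx_tr])
    probas.append(clf.predict_proba(Xm[idx_te]))
df_pred = np.mean(probas, axis=0).argmax(axis=1)
df_acc = (df_pred == y[idx_te]).mean()
print(f"FF accuracy: {ff_acc:.2f}  DF accuracy: {df_acc:.2f}")
```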


2020 ◽  
Vol 49 (3) ◽  
pp. 285-298
Author(s):  
Jian Zhang ◽  
Yihou Min

Human emotion recognition is of vital importance to human-computer interaction (HCI), and with the development of brain-computer interfaces (BCI), multichannel electroencephalogram (EEG) signals have gradually replaced other physiological signals as the main basis of emotion recognition research. However, the accuracy of emotion classification based on EEG signals under video stimulation is not stable, which may be related to the characteristics of the EEG signals before stimulation is received. In this study, we extract the change in Differential Entropy (DE) before and after stimulation, based on the wavelet packet transform (WPT), to identify individual emotional states. Using the DEAP EEG emotion database, we divide the experimental EEG data equally into 15 sets and extract their differential entropy on the basis of the WPT. We then calculate the change in DE for each separated EEG signal set. Finally, we divide emotion into four categories in the two-dimensional valence-arousal emotional space by combining these features with an ensemble algorithm, Random Forest (RF). The simulation results show that the WPT-RF model established by this method greatly improves the recognition rate of EEG signals, with an average classification accuracy of 87.3%. In addition, when the WPT-RF model is trained on individual subjects, the classification accuracy reaches 97.7%.
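A minimal sketch of such a WPT-based differential-entropy pipeline is given below, assuming PyWavelets and scikit-learn. The wavelet ('db4'), the decomposition level, and the random stand-in data are illustrative assumptions, not the paper's exact settings.

```python
# A minimal sketch: decompose an EEG segment into wavelet-packet sub-bands,
# compute the differential entropy (DE) of each band under a Gaussian
# assumption, and use the pre/post-stimulation DE change as the feature.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def wpt_de_features(signal, wavelet="db4", level=4):
    # 2**level sub-bands; DE of a Gaussian band = 0.5 * log(2*pi*e*var).
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    bands = [node.data for node in wp.get_level(level, order="freq")]
    return np.array([0.5 * np.log(2 * np.pi * np.e * (np.var(b) + 1e-12))
                     for b in bands])

def de_change(pre_segment, post_segment):
    # Feature vector = change of per-band DE before vs. after stimulation.
    return wpt_de_features(post_segment) - wpt_de_features(pre_segment)

# Hypothetical usage on random data standing in for DEAP segments:
rng = np.random.default_rng(0)
X = np.array([de_change(rng.normal(size=512), rng.normal(size=512))
              for _ in range(100)])
y = rng.integers(0, 4, size=100)   # four valence/arousal quadrants
clf = RandomForestClassifier(random_state=0).fit(X, y)
```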


i-Perception ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 204166952096111
Author(s):  
Gunnar Schmidtmann ◽  
Andrew J. Logan ◽  
Claus-Christian Carbon ◽  
Joshua T. Loong ◽  
Ian Gold

Faces provide not only cues to an individual’s identity, age, gender, and ethnicity but also insight into their mental states. The aim of this study was to investigate the temporal aspects of the processing of facial expressions of complex mental states at very short presentation times, ranging from 12.5 to 100 ms, in a four-alternative forced-choice paradigm based on the Reading the Mind in the Eyes test. Results show that participants are able to recognise very subtle differences between facial expressions; performance is better than chance, even at the shortest presentation time. Importantly, we show for the first time that observers can recognise these expressions based on information contained in the eye region only. These results support the hypothesis that the eye region plays a particularly important role in social interactions and that the expressions in the eyes are a rich source of information about other people’s mental states. When asked to what extent they guessed during the task, observers significantly underestimated their ability to make correct decisions, yet performed better than chance, even at very brief presentation times. These results are particularly relevant in the light of the current COVID-19 pandemic and the associated wearing of face coverings.


2018 ◽  
Vol 30 (04) ◽  
pp. 1850026
Author(s):  
Morteza Zangeneh Soroush ◽  
Keivan Maghooli ◽  
Seyed Kamaledin Setarehdan ◽  
Ali Motie Nasrabadi

Emotion recognition has been receiving increasing attention due to the growth of brain–computer interfaces (BCIs). Moreover, estimating emotions is widely used in different fields such as psychology, neuroscience, entertainment, e-learning, etc. This paper aims to classify emotions through EEG signals. When it comes to emotion recognition, participants’ opinions of induced emotions are highly case-dependent, so the corresponding labels may be imprecise and uncertain. Furthermore, it is accepted that mixtures of classifiers lead to higher accuracy and lower uncertainty. This paper introduces new methods, including setting time intervals to process EEG signals, extracting relative values of nonlinear features, and classifying them through the Dempster–Shafer theory (DST) of evidence. In this work, we used EEG signals taken from a very reliable database, and the extracted features are classified and combined through DST in order to reduce uncertainty and consequently achieve better results. First, time windows are determined based on signal complexity. Then, nonlinear features are extracted; instead of absolute feature values, this paper uses feature variability across time intervals, and discriminant features are selected using a genetic algorithm (GA). Finally, the data are fed into the classification process, and the different classifiers are combined through DST. Ten-fold cross-validation is applied, and the results are compared with some basic classifiers. We achieved high classification performance in terms of emotion recognition [Formula: see text]. The results show that EEG signals can reflect the emotional responses of the brain and that the proposed method is effective, giving a considerably precise estimation of emotions.
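To make the evidence-combination step concrete, the sketch below shows Dempster's rule applied to two classifiers whose probability outputs are converted into simple mass functions over the singleton emotion classes plus the whole frame Θ (ignorance). The discounting factor is an illustrative assumption; the paper's own basic belief assignments may be constructed differently.

```python
# A minimal sketch of Dempster–Shafer combination of two classifiers'
# outputs, over singleton classes {1..K} plus the frame Θ.
import numpy as np

def to_bba(probs, discount=0.1):
    # Discounted simple BBA: scaled class masses plus an ignorance mass on Θ.
    probs = np.asarray(probs, dtype=float)
    return (1.0 - discount) * probs / probs.sum(), discount

def dempster_combine(m1, th1, m2, th2):
    # Dempster's rule: intersections yielding singleton k are
    # {k}∩{k}, {k}∩Θ and Θ∩{k}; Θ∩Θ keeps mass on Θ; disjoint
    # singleton pairs are conflict K, removed by normalisation 1-K.
    agree = m1 * m2 + m1 * th2 + th1 * m2
    theta = th1 * th2
    conflict = m1.sum() * m2.sum() - (m1 * m2).sum()
    norm = 1.0 - conflict
    return agree / norm, theta / norm

# Hypothetical usage: fuse two classifiers of differing confidence.
m_a, th_a = to_bba([0.7, 0.2, 0.1])
m_b, th_b = to_bba([0.5, 0.4, 0.1])
m, th = dempster_combine(m_a, th_a, m_b, th_b)
print(m.argmax(), m, th)   # fused belief favours class 0
```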


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5218
Author(s):  
Muhammad Adeel Asghar ◽  
Muhammad Jamil Khan ◽  
Fawad ◽  
Yasar Amin ◽  
Muhammad Rizwan ◽  
...  

Much attention has been paid to the recognition of human emotions from electroencephalogram (EEG) signals using machine learning technology. Recognizing emotions is a challenging task due to the non-linear nature of the EEG signal. This paper presents an advanced signal processing method using a deep neural network (DNN) for emotion recognition based on EEG signals. The spectral and temporal components of the raw EEG signal are first retained in a 2D spectrogram before feature extraction. The pre-trained AlexNet model is used to extract raw features from the 2D spectrogram of each channel. To reduce the feature dimensionality, a spatially and temporally based bag of deep features (BoDF) model is proposed. A vocabulary consisting of 10 cluster centers per class is calculated using the k-means clustering algorithm. Lastly, the emotion of each subject is represented by a histogram over the vocabulary set collected from the raw features of a single channel. Features extracted with the proposed BoDF model have considerably smaller dimensions. The proposed model achieves better classification accuracy than recently reported work when validated on the SJTU SEED and DEAP data sets. For optimal classification performance, we use a support vector machine (SVM) and k-nearest neighbors (k-NN) to classify the extracted features for the different emotional states of the two data sets. The BoDF model achieves 93.8% accuracy on the SEED data set and 77.4% accuracy on the DEAP data set, which is more accurate than other state-of-the-art methods of human emotion recognition.
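The sketch below illustrates the bag-of-deep-features idea under simplifying assumptions: the deep features are stand-in random arrays (in place of AlexNet spectrogram activations), the vocabulary is built from 10 k-means cluster centers per class as described, and each sample becomes a histogram of nearest-vocabulary-word counts classified by an SVM.

```python
# A minimal sketch of BoDF: cluster per-class deep features into a
# vocabulary, then represent each sample group as a histogram of
# vocabulary assignments. The 4096-D random features are placeholders
# standing in for pre-trained AlexNet activations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, n_per_class, feat_dim = 3, 50, 4096
feats = [rng.normal(loc=c, size=(n_per_class, feat_dim))
         for c in range(n_classes)]

# Vocabulary: 10 cluster centers per class, as in the BoDF formulation.
vocab = np.vstack([KMeans(n_clusters=10, n_init=10, random_state=0)
                   .fit(f).cluster_centers_ for f in feats])

def bodf_histogram(feature_rows, vocab):
    # Assign each feature vector to its nearest vocabulary word and count
    # occurrences -> a fixed-length, low-dimensional representation.
    d = ((feature_rows[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    return np.bincount(d.argmin(axis=1), minlength=len(vocab))

# Group every 5 feature vectors into one histogram sample, then classify.
X = np.array([bodf_histogram(f[i:i + 5], vocab)
              for f in feats for i in range(0, n_per_class, 5)])
y = np.repeat(np.arange(n_classes), n_per_class // 5)
clf = SVC(kernel="rbf").fit(X, y)   # SVM on 30-D BoDF histograms
```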


2021 ◽  
Vol 11 (11) ◽  
pp. 1424
Author(s):  
Yuhong Zhang ◽  
Yuan Liao ◽  
Yudi Zhang ◽  
Liya Huang

In order to avoid erroneous braking responses when vehicle drivers are faced with a stressful setting, a K-order propagation number algorithm–Feature selection–Classification System (KFCS) is developed in this paper to detect emergency braking intentions in simulated driving scenarios using electroencephalography (EEG) signals. KFCS takes two approaches to extracting EEG features and improving classification performance: the K-order propagation number algorithm, a novel approach that calculates node importance from the perspective of brain networks, and a set of feature extraction algorithms with adjustable thresholds. Working with data collected from seven subjects, the classification accuracy of a single trial can reach over 90%, with an overall accuracy of 83%. Furthermore, this paper investigates the mechanisms of brain activity under the two scenarios using a topography technique at the sensor-data level. The results suggest that the active regions differ between the two states, which leaves further exploration for future investigation.
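One plausible reading of a K-order propagation number, sketched below, scores each EEG channel by how many nodes of a connectivity graph it can reach within K hops. The correlation-based graph and its threshold are illustrative assumptions rather than the paper's exact network construction; the sketch assumes networkx.

```python
# A minimal sketch of a K-order propagation-number importance score:
# importance of node i = |{ j : shortest_path(i, j) <= K }|, computed
# on a thresholded channel-correlation graph (an assumed construction).
import numpy as np
import networkx as nx

def k_order_propagation_number(adj, k):
    g = nx.from_numpy_array(adj)
    scores = []
    for i in g.nodes:
        # Dict of nodes reachable from i within k hops (includes i itself).
        lengths = nx.single_source_shortest_path_length(g, i, cutoff=k)
        scores.append(len(lengths) - 1)          # exclude the node itself
    return np.array(scores)

# Hypothetical usage: build a channel graph from EEG correlations.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(32, 1000))                # 32 channels x samples
corr = np.corrcoef(eeg)
adj = (np.abs(corr) > 0.3).astype(float)
np.fill_diagonal(adj, 0.0)                       # no self-loops
importance = k_order_propagation_number(adj, k=2)
top_channels = importance.argsort()[::-1][:5]    # most influential channels
```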

