EEG electrodes selection for emotion recognition independent of stimulus presentation paradigms

2021 ◽  
pp. 1-17
Author(s):  
Naveen Masood ◽  
Humera Farooq

Most electroencephalography (EEG)-based emotion recognition systems rely on a single stimulus to evoke emotions. EEG data are mostly recorded with a large number of electrodes, which can lead to data redundancy and longer experimental setup times. The question of whether a configuration with fewer electrodes is common across different stimulus presentation paradigms remains unanswered. Publicly available datasets exist for EEG-based recognition of human emotional states; however, since this work focuses on classifying emotions while subjects experience different stimuli, new experiments were required. With these issues in mind, this work presents a novel experimental study that records EEG data for three human emotional states evoked with four different stimulus presentation paradigms. A methodology based on an iterative Genetic Algorithm combined with majority voting is used to obtain configurations with a reduced number of EEG electrodes while minimizing the loss of classification accuracy. The results obtained are comparable with recent studies. Stimulus-independent configurations with fewer electrodes lead to lower computational complexity as well as reduced setup time for future EEG-based smart systems for emotion recognition.
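
The abstract does not specify the GA's encoding, fitness function, or parameters, so the following is only a minimal sketch of the general idea: a binary electrode mask evolved by a genetic algorithm whose fitness is cross-validated classification accuracy, with majority voting over repeated runs. The synthetic data, the LDA-based fitness, and all hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: iterative GA + majority voting for EEG electrode
# selection. Data, fitness, and GA parameters are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_feats = 200, 32, 4
X = rng.normal(size=(n_trials, n_channels, n_feats))  # per-channel features
y = rng.integers(0, 3, size=n_trials)                 # 3 emotional states

def fitness(mask):
    """Cross-validated accuracy using only the selected electrodes."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool), :].reshape(n_trials, -1)
    return cross_val_score(LinearDiscriminantAnalysis(), Xs, y, cv=5).mean()

def run_ga(pop_size=16, generations=15, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, n_channels))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]  # keep top half
        cuts = rng.integers(1, n_channels, size=pop_size // 2)
        kids = np.array([np.r_[parents[i % len(parents)][:c],
                               parents[(i + 1) % len(parents)][c:]]
                         for i, c in enumerate(cuts)])        # one-point crossover
        kids ^= rng.random(kids.shape) < p_mut                # bit-flip mutation
        pop = np.vstack([parents, kids])
    return pop[np.argmax([fitness(ind) for ind in pop])]

# Majority voting over repeated GA runs keeps electrodes selected most often.
votes = sum(run_ga() for _ in range(5))
selected = np.flatnonzero(votes >= 3)
print("electrodes kept:", selected)
```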

2021 ◽  
Vol 11 (6) ◽  
pp. 696
Author(s):  
Naveen Masood ◽  
Humera Farooq

Most electroencephalography (EEG)-based emotion recognition systems rely on a single stimulus to evoke emotions. These systems make use of videos, sounds, and images as stimuli. Few studies have addressed self-induced emotions. The question of whether different stimulus presentation paradigms for the same emotion produce any subject- and stimulus-independent neural correlates remains unanswered. Furthermore, publicly available datasets are used in a large number of studies targeting EEG-based human emotional state recognition; since one of the major concerns and contributions of this work is classifying emotions while subjects experience different stimulus presentation paradigms, new experiments had to be performed. This paper presents a novel experimental study that recorded EEG data for three different human emotional states evoked with four different stimulus presentation paradigms. Fear, neutral, and joy were considered as the three emotional states. Features were extracted from the recorded EEG data with the common spatial pattern (CSP) method and classified through linear discriminant analysis (LDA). The emotion-evoking paradigms were emotional imagery, pictures, sounds, and audio-video movie clips. Experiments were conducted with twenty-five participants. Classification performance in the different paradigms was evaluated across different spectral bands. With a few exceptions, all paradigms showed the best emotion recognition in the higher-frequency spectral ranges. Interestingly, joy was classified more reliably than fear. The average neural patterns for the fear vs. joy emotional states are presented as topographical maps based on the spatial filters obtained with CSP for averaged band-power changes across all four paradigms. With respect to the spectral bands, beta and alpha oscillation responses produced the highest number of significant results for the paradigms under consideration. With respect to brain region, the frontal lobe produced the most significant results irrespective of paradigm and spectral band. The temporal region also played an effective role in generating statistically significant findings. To the best of our knowledge, no previous study has addressed EEG emotion recognition with four different stimulus paradigms. This work is a good contribution towards designing EEG-based systems for human emotion recognition that could work effectively in different real-time scenarios.
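
A minimal sketch of the named pipeline, CSP spatial filtering followed by LDA, for one binary contrast (e.g., fear vs. joy). Synthetic data stand in for the recorded EEG, CSP is implemented directly via a generalized eigendecomposition, and band-pass filtering is omitted for brevity; channel counts and filter numbers are assumptions.

```python
# Sketch of CSP feature extraction + LDA classification on synthetic trials.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 100, 14, 256
X = rng.normal(size=(n_trials, n_channels, n_samples))  # band-passed EEG trials
y = rng.integers(0, 2, size=n_trials)                   # fear = 0, joy = 1

def class_cov(trials):
    """Average trace-normalized spatial covariance of one class."""
    covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
    return np.mean(covs, axis=0)

C0, C1 = class_cov(X[y == 0]), class_cov(X[y == 1])
# CSP: generalized eigenvectors of C0 w = lambda (C0 + C1) w
vals, W = eigh(C0, C0 + C1)
order = np.argsort(vals)
W = W[:, np.r_[order[:3], order[-3:]]]         # 3 spatial filters per class

def features(trials):
    Z = np.einsum('cf,tfs->tcs', W.T, trials)  # spatially filtered signals
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))  # log-variance features

clf = LinearDiscriminantAnalysis().fit(features(X), y)
print("training accuracy:", clf.score(features(X), y))
```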


2021 ◽  
Author(s):  
Talieh Seyed Tabtabae

Automatic Emotion Recognition (AER) is an emerging research area in the Human-Computer Interaction (HCI) field. As computers become more and more popular, the study of the interaction between humans (users) and computers is attracting more attention. In order to create a more natural and friendly interface between humans and computers, it would be beneficial to give computers the ability to recognize situations the way a human does. Equipped with an emotion recognition system, computers would be able to recognize their users' emotional state and react appropriately. In today's HCI systems, machines can recognize the speaker and the content of the speech using speech recognition and speaker identification techniques. If machines are also equipped with emotion recognition techniques, they can know "how it is said," react more appropriately, and make the interaction more natural. One of the most important human communication channels is the auditory channel, which carries speech and vocal intonation. In fact, people can perceive each other's emotional state by the way they talk. Therefore, in this work the speech signal is analyzed in order to build an automatic system that recognizes the human emotional state. Six discrete emotional states are considered in this research: anger, happiness, fear, surprise, sadness, and disgust. A set of novel spectral features is proposed in this contribution. Two approaches are applied and their results compared. In the first approach, acoustic features are extracted from consecutive frames along the speech signal, and statistics of these features constitute the feature vectors. A Support Vector Machine (SVM), a relatively new approach in the field of machine learning, is used to classify the emotional states. In the second approach, spectral features are extracted from non-overlapping, logarithmically spaced frequency sub-bands, and, in order to make use of all the extracted information, sequence-discriminant SVMs are adopted. The empirical results show that the employed techniques are very promising.
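
A simplified sketch of the first approach only: frame-level spectral features summarized by statistics over the utterance and classified with an SVM. The specific features (spectral centroid and frame energy), the synthetic signals, and all parameters are stand-in assumptions, not the thesis's novel feature set.

```python
# Sketch: per-frame spectral features -> utterance statistics -> SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
sr, frame = 16000, 512
EMOTIONS = ["anger", "happiness", "fear", "surprise", "sadness", "disgust"]

def frame_features(signal):
    """Spectral centroid and energy for each non-overlapping frame."""
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    mag = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frame, d=1 / sr)
    centroid = (mag * freqs).sum(axis=1) / (mag.sum(axis=1) + 1e-12)
    energy = (frames ** 2).mean(axis=1)
    return np.c_[centroid, energy]

def utterance_vector(signal):
    f = frame_features(signal)
    return np.r_[f.mean(axis=0), f.std(axis=0)]  # statistics over frames

# Synthetic one-second "utterances", each labeled with one of six emotions.
X = np.array([utterance_vector(rng.normal(size=sr)) for _ in range(120)])
y = rng.integers(0, len(EMOTIONS), size=120)
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```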


2019 ◽  
Vol 18 (04) ◽  
pp. 1359-1378
Author(s):  
Jianzhuo Yan ◽  
Hongzhi Kuai ◽  
Jianhui Chen ◽  
Ning Zhong

Emotion recognition is a highly noteworthy and challenging task in both cognitive science and affective computing. Neurobiological studies have revealed a partially synchronous oscillating phenomenon within the brain, which needs to be analyzed in terms of oscillatory synchronization. This combination of oscillation and synchronism is worth exploring further to inform the learning of emotion recognition models. In this paper, we propose a novel approach to valence- and arousal-based emotion recognition using EEG data. First, we construct an emotional oscillatory brain network (EOBN), inspired by the partially synchronous oscillating phenomenon, for emotional valence and arousal. Then, a feature selection method based on the coefficient of variation and Welch's t-test is used to identify the core pattern (cEOBN) within the EOBN for the different emotional dimensions. Finally, an emotion recognition model (ERM) is built by combining the cEOBN-derived information obtained in the above process with different classifiers. The proposed approach combines the oscillation and synchronization characteristics of multi-channel EEG signals to recognize different emotional states along the valence and arousal dimensions. The cEOBN-based information can effectively reduce the dimensionality of the data. The experimental results show that the proposed method can detect affective states at a reasonable level of accuracy.
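
A sketch of the feature-selection step as described: screen network features by coefficient of variation, then keep those that differ between classes under Welch's (unequal-variance) t-test. The data, the CV threshold, and the significance level are assumptions for illustration.

```python
# Sketch: coefficient-of-variation screening + Welch's t-test selection.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
n_trials, n_features = 80, 300            # e.g., pairwise synchronization values
X = rng.gamma(2.0, 1.0, size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)     # e.g., high vs. low valence

cv = X.std(axis=0) / X.mean(axis=0)       # coefficient of variation per feature
stable = cv < np.median(cv)               # keep the more stable half (assumed rule)

# Welch's t-test: equal_var=False allows unequal class variances.
_, p = ttest_ind(X[y == 0], X[y == 1], axis=0, equal_var=False)
discriminative = p < 0.05

core = np.flatnonzero(stable & discriminative)  # candidate core (cEOBN) features
print(f"{core.size} core features retained")
```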


2021 ◽  
pp. 1-12
Author(s):  
A. Cagri Tolga ◽  
Murat Basar

By 2050, the global population is estimated to rise to over 9 billion people, and the global food need is expected to grow by 50%. Moreover, because of climate change, agricultural production may decrease by 10%. Since the amount of cultivable land is fixed, multi-layered farms are feasible alternatives for yielding extra food per unit of land, and smart systems are logical options to assist production in these factory-like farms. In terms of the amount of food grown per season, a single indoor hectare of a vertical farm could deliver a yield equal to that of more than 30 hectares of land while consuming 70% less water and using almost no pesticides. In this study, we evaluated technology selection for three vertical-farm alternatives via multi-criteria decision-making (MCDM) methods. Even though commercial vertical farms have been set up in several countries, the field is still young and acquiring precise data is difficult; we therefore employed fuzzy logic as much as possible to handle the related uncertainties. The WEDBA (Weighted Euclidean Distance Based Approximation) and MACBETH (Measuring Attractiveness by a Categorical Based Evaluation Technique) methods are employed to evaluate the alternatives.
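
A simplified crisp sketch of the WEDBA-style ranking idea: score each alternative by its weighted Euclidean distances to the ideal and anti-ideal points of the decision matrix. The matrix, criteria, and weights below are hypothetical, and the fuzzy treatment used in the study is omitted.

```python
# Crisp WEDBA-style sketch with a hypothetical 3-alternative decision matrix.
import numpy as np

# rows: 3 vertical-farm technology alternatives; columns: benefit criteria
# (e.g., yield, water efficiency, automation), already normalized to [0, 1].
D = np.array([[0.8, 0.6, 0.9],
              [0.7, 0.9, 0.5],
              [0.9, 0.5, 0.7]])
w = np.array([0.5, 0.3, 0.2])             # criteria weights, summing to 1

ideal, anti = D.max(axis=0), D.min(axis=0)
d_plus = np.sqrt((w * (D - ideal) ** 2).sum(axis=1))   # distance to ideal
d_minus = np.sqrt((w * (D - anti) ** 2).sum(axis=1))   # distance to anti-ideal

score = d_minus / (d_plus + d_minus)      # closer to ideal => higher score
print("ranking (best first):", np.argsort(-score) + 1)
```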


Author(s):  
Benjamin Aribisala ◽  
Obaro Olori ◽  
Patrick Owate

Introduction: Emotion plays a key role in our daily life and work, especially in decision making, as people's moods can influence their mode of communication, behaviour, and productivity. Emotion recognition has attracted considerable research interest, and medical imaging technology offers tools for emotion classification. Aims: The aim of this work is to develop a machine learning technique for recognizing emotion from electroencephalogram (EEG) data. Materials and Methods: Experimentation was based on the publicly available Dataset for Emotion Analysis using Physiological Signals (DEAP). The data comprise EEG signals acquired from thirty-two adults while they watched forty different one-minute music video clips. Participants rated each video in terms of four emotional states, namely arousal, valence, like/dislike, and dominance. We extracted features from the dataset using the Discrete Wavelet Transform, namely wavelet energy, wavelet entropy, and standard deviation. We then classified the extracted features into four emotional states, namely High Valence/High Arousal, High Valence/Low Arousal, Low Valence/High Arousal, and Low Valence/Low Arousal, using Ensemble Bagged Trees. Results: Ensemble Bagged Trees gave a sensitivity, specificity, and accuracy of 97.54%, 99.21%, and 97.80%, respectively. Support Vector Machine and Ensemble Boosted Trees gave similar results. Conclusion: Our results show that machine learning classification of emotion using EEG data is very promising. This can help in the treatment of patients, especially those with expression problems such as Amyotrophic Lateral Sclerosis, a disease affecting muscle control; knowing the real emotional state of patients will help doctors provide appropriate medical care. Keywords: Electroencephalogram, Emotion Recognition, Ensemble Classification, Ensemble Bagged Trees, Machine Learning
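
A sketch of the described pipeline: discrete wavelet decomposition of each EEG signal, wavelet energy/entropy/standard-deviation features, and bagged decision trees. Synthetic signals replace the DEAP recordings, and the wavelet family, decomposition level, and ensemble size are assumptions.

```python
# Sketch: DWT features -> bagged decision trees on synthetic EEG signals.
import numpy as np
import pywt
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(4)
n_trials, n_samples = 160, 1024

def dwt_features(signal, wavelet="db4", level=4):
    """Energy, entropy, and std of each wavelet sub-band."""
    feats = []
    for coeffs in pywt.wavedec(signal, wavelet, level=level):
        energy = np.sum(coeffs ** 2)
        p = coeffs ** 2 / (energy + 1e-12)
        entropy = -np.sum(p * np.log2(p + 1e-12))  # wavelet entropy
        feats += [energy, entropy, np.std(coeffs)]
    return feats

X = np.array([dwt_features(rng.normal(size=n_samples)) for _ in range(n_trials)])
y = rng.integers(0, 4, size=n_trials)  # 4 valence/arousal quadrants

# BaggingClassifier's default base learner is a decision tree ("bagged trees").
clf = BaggingClassifier(n_estimators=50).fit(X, y)
print("training accuracy:", clf.score(X, y))
```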


Emotion recognition plays a vital role in behavioral and emotional interactions between humans. It is a difficult task because it relies on predicting abstract emotional states from multimodal input data. Emotion recognition systems operate in three phases: first, input data are acquired from the real world through sensors; then, emotional features are extracted; finally, the emotion is predicted using feature extraction and classification methods. Deep learning methods enable recognition in different ways. In this article, we focus on facial expression. We extract the emotional characteristics expressed on the face in two ways, using two different methods. On the one hand, we use Gabor filters to extract textures and facial appearance at different scales and orientations. On the other hand, we extract the movements of the facial features, namely the eyes, eyebrows, nose, and mouth. We then perform classification using convolutional neural networks (CNNs), followed by decision-level fusion. The convolutional network model was trained and validated on the datasets.
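
A sketch of the Gabor branch only: a bank of filters over several scales and orientations applied to a face image, producing the texture/appearance maps that would feed the CNN. The image, kernel parameters, and bank size are illustrative assumptions; the CNN and decision-level fusion stages are omitted.

```python
# Sketch: Gabor filter bank over scales and orientations for one face crop.
import cv2
import numpy as np

image = np.random.rand(64, 64).astype(np.float32)   # stand-in face crop

responses = []
for lambd in (4, 8, 16):                            # scales (wavelengths)
    for theta in np.arange(0, np.pi, np.pi / 4):    # 4 orientations
        kernel = cv2.getGaborKernel(ksize=(15, 15), sigma=3.0, theta=theta,
                                    lambd=lambd, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(image, cv2.CV_32F, kernel))

features = np.stack(responses)      # (12, 64, 64) texture/appearance maps
print(features.shape)               # these maps would be input to the CNN
```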


2021 ◽  
Vol 11 (7) ◽  
pp. 835
Author(s):  
Alexander Rokos ◽  
Richard Mah ◽  
Rober Boshra ◽  
Amabilis Harrison ◽  
Tsee Leng Choy ◽  
...  

A consistent limitation when designing event-related potential paradigms and interpreting their results is a lack of consideration of the multiple factors that affect their elicitation and detection in behaviorally unresponsive individuals. This paper provides a retrospective commentary on three factors that influence the presence and morphology of long-latency event-related potentials: the P3b and the N400. We analyze event-related potentials derived from electroencephalographic (EEG) data collected from small groups of healthy youth and healthy elderly adults to illustrate the effects of paradigm strength and participant age, and event-related potentials collected from an individual with severe traumatic brain injury to illustrate the effect of stimulus presentation speed. Based on these critical factors, we recommend that: (1) the strongest paradigms be used to elicit event-related potentials in unresponsive populations; (2) the interpretation of event-related potential results account for participant age; and (3) stimulus presentation be slower for unresponsive individuals. Applying these practices when eliciting and recording event-related potentials in unresponsive individuals will help minimize ambiguity in interpreting results, increase confidence in conclusions, and advance the understanding of the relationship between long-latency event-related potentials and states of consciousness.


2005 ◽  
Vol 6 (1) ◽  
pp. 63-81 ◽  
Author(s):  
Dymitr Ruta ◽  
Bogdan Gabrys
