A Review on the Computational Methods for Emotional State Estimation from the Human EEG

2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Min-Ki Kim ◽  
Miyoung Kim ◽  
Eunmi Oh ◽  
Sung-Phil Kim

A growing number of affective computing studies have recently developed computer systems that can recognize the emotional state of a human user in order to establish affective human-computer interaction. Various measures have been used to estimate emotional states, including self-report, startle response, behavioral response, autonomic measurement, and neurophysiologic measurement. Among them, inferring emotional states from electroencephalography (EEG) has received considerable attention, as EEG can reflect emotional states directly, at relatively low cost and with relative simplicity. Yet EEG-based emotional state estimation requires well-designed computational methods to extract information from complex and noisy multichannel EEG data. In this paper, we review the computational methods that have been developed to derive EEG indices of emotion, to extract emotion-related features, and to classify EEG signals into one of many emotional states. We also propose using sequential Bayesian inference to estimate the continuous emotional state in real time. We present current challenges for building an EEG-based emotion recognition system and suggest some future directions.
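
As a concrete illustration of the sequential Bayesian inference proposed here, the following minimal sketch tracks a continuous two-dimensional (valence, arousal) state with a linear-Gaussian (Kalman) filter. The state dynamics, the state-to-feature mapping, the noise covariances, and the four-dimensional EEG feature vector are illustrative assumptions, not parameters from the review.

```python
# Minimal sketch: Kalman filtering of a continuous (valence, arousal)
# state from a stream of EEG-derived feature vectors. All model
# parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

F = np.eye(2)                           # state transition: emotion drifts slowly
H = rng.standard_normal((4, 2)) * 0.5   # assumed state-to-feature mapping
Q = np.eye(2) * 0.01                    # process noise covariance
R = np.eye(4) * 0.1                     # observation noise covariance

def kalman_step(x, P, z):
    """One predict-update cycle given a new EEG feature vector z."""
    x_pred = F @ x                          # predicted state
    P_pred = F @ P @ F.T + Q                # predicted covariance
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)   # corrected state
    P_new = (np.eye(2) - K @ H) @ P_pred    # corrected covariance
    return x_new, P_new

# Track the state over a stream of simulated feature vectors.
x, P = np.zeros(2), np.eye(2)
true_state = np.array([0.8, -0.2])          # pretend ground-truth emotion
for _ in range(200):
    z = H @ true_state + rng.standard_normal(4) * 0.3
    x, P = kalman_step(x, P, z)
print("estimated (valence, arousal):", x.round(2))
```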

2021 ◽  
Author(s):  
Talieh Seyed Tabtabae

Automatic Emotion Recognition (AER) is an emerging research area in the Human-Computer Interaction (HCI) field. As computers become more popular every day, the study of interaction between humans (users) and computers attracts increasing attention. To make the interface between humans and computers more natural and friendly, it would be beneficial to give computers the ability to recognize situations the way a human does. Equipped with an emotion recognition system, computers could recognize their users' emotional states and react appropriately. In today's HCI systems, machines can recognize the speaker and the content of the speech using speaker identification and speech recognition techniques. If machines were also equipped with emotion recognition techniques, they could know "how it is said," react more appropriately, and make the interaction more natural. One of the most important human communication channels is the auditory channel, which carries speech and vocal intonation; indeed, people can perceive each other's emotional state by the way they talk. In this work, speech signals are therefore analyzed in order to build an automatic system that recognizes the human emotional state. Six discrete emotional states are considered and categorized in this research: anger, happiness, fear, surprise, sadness, and disgust. A set of novel spectral features is proposed in this contribution, and two approaches are applied and compared. In the first approach, acoustic features are extracted from consecutive frames along the speech signals, and the statistical values of these features constitute the feature vectors; a Support Vector Machine (SVM), a relatively recent approach in the field of machine learning, is used to classify the emotional states. In the second approach, spectral features are extracted from non-overlapping, logarithmically spaced frequency sub-bands, and sequence-discriminant SVMs are adopted in order to use all of the extracted information. The empirical results show that the employed techniques are very promising.
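
The first approach (frame-level acoustic features summarized by utterance statistics, then an SVM) could look roughly like the sketch below. The spectral centroid and log energy are generic stand-ins for the novel spectral features of the thesis, and the random training data is purely for demonstration.

```python
# Sketch: utterance-level statistics of frame-wise spectral features,
# classified into six emotions with an SVM. Features and data are
# illustrative stand-ins.
import numpy as np
from sklearn.svm import SVC

def frame_features(signal, sr=16000, frame_len=400, hop=160):
    feats = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
        centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-9)
        log_energy = np.log(spectrum.sum() + 1e-9)
        feats.append([centroid, log_energy])
    return np.array(feats)

def utterance_vector(signal):
    f = frame_features(signal)
    return np.concatenate([f.mean(axis=0), f.std(axis=0)])  # per-utterance statistics

# Hypothetical corpus: random waveforms standing in for labeled utterances.
rng = np.random.default_rng(0)
X = np.array([utterance_vector(rng.standard_normal(16000)) for _ in range(60)])
y = rng.integers(0, 6, size=60)          # six discrete emotion labels

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```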


2019 ◽  
Vol 18 (04) ◽  
pp. 1359-1378
Author(s):  
Jianzhuo Yan ◽  
Hongzhi Kuai ◽  
Jianhui Chen ◽  
Ning Zhong

Emotion recognition is a noteworthy and challenging task in both cognitive science and affective computing. Neurobiological studies have revealed partially synchronous oscillatory phenomena within the brain, which call for analysis in terms of oscillatory synchronization; this combination of oscillation and synchrony is worth exploring further to build better-informed emotion recognition models. In this paper, we propose a novel approach to valence- and arousal-based emotion recognition using EEG data. First, we construct an emotional oscillatory brain network (EOBN), inspired by the partially synchronous oscillatory phenomenon, for emotional valence and arousal. Then, a feature selection method based on the coefficient of variation and Welch's t-test is used to identify the core pattern (cEOBN) within the EOBN for different emotional dimensions. Finally, an emotion recognition model (ERM) is built by combining the cEOBN-derived information obtained in the above process with different classifiers. The proposed approach combines the oscillation and synchronization characteristics of multi-channel EEG signals to recognize different emotional states along the valence and arousal dimensions, and the cEOBN-based information effectively reduces the dimensionality of the data. The experimental results show that the proposed method can detect affective state at a reasonable level of accuracy.
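
A minimal sketch of the coefficient-of-variation plus Welch's t-test selection step is shown below; the connectivity features, the CV threshold, and the significance level are hypothetical placeholders, not values from the paper.

```python
# Sketch: keep features that are stable within each class (low coefficient
# of variation) and discriminative between classes (Welch's t-test).
import numpy as np
from scipy import stats

def select_core_features(X_a, X_b, cv_max=0.5, alpha=0.05):
    """X_a, X_b: (trials, features) arrays for the two classes of one
    emotional dimension, e.g., high vs. low valence."""
    keep = []
    for j in range(X_a.shape[1]):
        a, b = X_a[:, j], X_b[:, j]
        cv = max(a.std() / (abs(a.mean()) + 1e-9),
                 b.std() / (abs(b.mean()) + 1e-9))      # within-class stability
        _, p = stats.ttest_ind(a, b, equal_var=False)   # Welch's t-test
        if cv < cv_max and p < alpha:
            keep.append(j)
    return keep

rng = np.random.default_rng(1)
X_hi = rng.normal(1.0, 0.2, size=(40, 100))   # hypothetical connectivity features
X_lo = rng.normal(0.8, 0.2, size=(40, 100))
print("core features kept:", len(select_core_features(X_hi, X_lo)))
```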


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4520 ◽  
Author(s):  
Uria-Rivas ◽  
Rodriguez-Sanchez ◽  
Santos ◽  
Vaquero ◽  
Boticario

Physiological sensors can be used with affective computing to detect changes in the emotional state of users. This has lately been applied in the educational domain, with the aim of better supporting learners during the learning process. For this purpose, we have developed the AICARP (Ambient Intelligence Context-aware Affective Recommender Platform) infrastructure, which detects changes in the emotional state of the user and provides personalized multisensorial support to help manage that state, taking advantage of ambient intelligence features. We have developed a third version of this infrastructure, AICARP.V3, which addresses several problems detected in the data acquisition stage of the second version (i.e., the intrusiveness of the pulse sensor, the poor resolution and low signal-to-noise ratio of the galvanic skin response sensor, and the slow response time of the temperature sensor) and extends the platform's capabilities to integrate new actuators. This improved version incorporates a new acquisition platform (shield) called PhyAS (Physiological Acquisition Shield), which reduces the number of control units to one and supports both gathering physiological signals with better precision and delivering multisensory feedback with more flexibility, by means of new actuators that can be added or removed on top of that single shield. The improved quality of the acquired signals allows better recognition of emotional states. As a result, AICARP.V3 gives the user more accurate personalized emotional support, based on a rule-based approach that triggers multisensorial feedback when necessary. This represents progress on an open problem: developing systems that perform as effectively as a human expert in a task as complex as the recognition of emotional states.
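
The rule-based trigger for multisensorial feedback could be sketched as below. The sensor names, baseline values, thresholds, and two-of-three voting rule are illustrative assumptions, not AICARP.V3's actual rules or API.

```python
# Sketch: a rule-based check over physiological signals that decides when
# to deliver multisensorial support. Thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class PhysioSample:
    heart_rate: float    # beats per minute (pulse sensor)
    gsr: float           # microsiemens (galvanic skin response)
    temperature: float   # degrees Celsius (skin temperature)

def needs_support(sample: PhysioSample, baseline: PhysioSample) -> bool:
    """Fire when at least two of three signals deviate from baseline."""
    flags = [
        sample.heart_rate > baseline.heart_rate * 1.15,
        sample.gsr > baseline.gsr * 1.30,
        sample.temperature < baseline.temperature - 0.5,
    ]
    return sum(flags) >= 2

baseline = PhysioSample(heart_rate=70, gsr=2.0, temperature=33.5)
current = PhysioSample(heart_rate=84, gsr=2.8, temperature=33.4)
if needs_support(current, baseline):
    print("trigger multisensorial feedback (e.g., light or sound actuator)")
```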


2021 ◽  
Author(s):  
Puja A. Chavan ◽  
Sharmishta Desai

Emotion awareness is one of the most important topics in the field of affective computing. Human emotion can be predicted using nonverbal behavioral methods such as facial expression recognition, verbal behavioral methods such as speech emotion recognition, or physiological-signal-based methods such as electroencephalogram (EEG)-based emotion recognition. It is notable, however, that data obtained from either nonverbal or verbal behaviors are indirect emotional signals that merely suggest brain activity. Unlike nonverbal or verbal actions, EEG signals are recorded directly from the human brain cortex and thus may be more effective in representing the brain's inner emotional states. Consequently, when used to measure human emotion, EEG data can be more accurate than behavioral data. For this reason, identifying human emotion from EEG signals has become a very important research topic for current emotional brain-computer interfaces (BCIs), which aim to infer human emotional states from recorded EEG signals. In this paper, a hybrid deep learning approach combining a convolutional neural network (CNN) and a long short-term memory (LSTM) algorithm is proposed and investigated for the automatic classification of epileptic disease from EEG signals. The signals are processed by the CNN for feature extraction, while the LSTM classifies the entire data; finally, the system labels each EEG data file as normal or epileptic. This research describes the state of the art in effective epileptic disease detection, prediction, and classification using hybrid deep learning algorithms, and demonstrates how CNN and LSTM can be combined for the complete classification of EEG signals, as in numerous existing systems.
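
A CNN-LSTM hybrid of the kind described (convolutional feature extraction over raw EEG windows followed by recurrent classification) can be sketched as follows; the layer sizes, the 23-channel input, and the window length are illustrative assumptions, not the paper's architecture.

```python
# Sketch: 1-D convolutions extract local features from EEG windows, an
# LSTM models the resulting sequence, and a linear head outputs
# normal/epileptic logits. Shapes and sizes are assumptions.
import torch
import torch.nn as nn

class CnnLstmEEG(nn.Module):
    def __init__(self, n_channels=23, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),                      # downsample the time axis
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                         # x: (batch, channels, time)
        z = self.conv(x)                          # (batch, 64, time // 16)
        z = z.transpose(1, 2)                     # (batch, time // 16, 64)
        _, (h, _) = self.lstm(z)                  # h: final hidden state
        return self.head(h[-1])                   # (batch, n_classes)

model = CnnLstmEEG()
windows = torch.randn(8, 23, 1024)                # 8 EEG windows, 23 channels
print(model(windows).shape)                       # torch.Size([8, 2])
```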


2021 ◽  
Author(s):  
Krzysztof Kotowski ◽  
Katarzyna Stapor

Defining “emotion” and measuring it accurately is a notorious problem in psychology. It is usually addressed with subjective self-assessment forms filled in manually by participants. Machine learning methods and EEG correlates of emotion make it possible to construct automatic systems for objective emotion recognition. Such systems could help assess emotional states and could be used to improve emotional perception. In this chapter, we present a computer system that can automatically recognize the emotional state of a human based on EEG signals induced by a standardized affective picture database. Trained deep neural networks are used on the EEG signal, together with mappings between emotion models, to predict the emotions perceived by the participant. This, in turn, can be used, for example, to validate the standardization of affective picture databases.
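
One simple form of the mapping between emotion models mentioned above is to project a network's continuous valence-arousal output onto discrete labels via circumplex quadrants, as in the sketch below; this quadrant assignment is a common simplification assumed here, not necessarily the chapter's mapping.

```python
# Sketch: map a continuous (valence, arousal) prediction in [-1, 1]^2
# onto a discrete emotion label by circumplex quadrant.
def quadrant_label(valence: float, arousal: float) -> str:
    if valence >= 0 and arousal >= 0:
        return "happy/excited"
    if valence < 0 and arousal >= 0:
        return "angry/afraid"
    if valence < 0 and arousal < 0:
        return "sad/bored"
    return "calm/content"

# Hypothetical network output for one participant viewing one picture.
print(quadrant_label(valence=0.6, arousal=-0.3))   # -> calm/content
```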


2017 ◽  
Vol 76 (2) ◽  
pp. 71-79 ◽  
Author(s):  
Hélène Maire ◽  
Renaud Brochard ◽  
Jean-Luc Kop ◽  
Vivien Dioux ◽  
Daniel Zagar

Abstract. This study measured the effect of emotional states on lexical decision task performance and investigated which underlying components (physiological, attentional orienting, executive, lexical, and/or strategic) are affected. We did this by assessing participants’ performance on a lexical decision task, which they completed before and after an emotional state induction task. The sequence effect, usually produced when participants repeat a task, was significantly smaller in participants who had received one of the three emotion inductions (happiness, sadness, embarrassment) than in control group participants (neutral induction). Using the diffusion model (Ratcliff, 1978) to resolve the data into meaningful parameters that correspond to specific psychological components, we found that emotion induction only modulated the parameter reflecting the physiological and/or attentional orienting components, whereas the executive, lexical, and strategic components were not altered. These results suggest that emotional states have an impact on the low-level mechanisms underlying mental chronometric tasks.
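
For readers unfamiliar with the diffusion model, the sketch below simulates single lexical decision trials: noisy evidence accumulates between two boundaries, and drift rate, boundary separation, starting point, and non-decision time map onto distinct psychological components. The parameter values are illustrative, not the authors' fitted estimates.

```python
# Sketch: simulate one Ratcliff (1978) diffusion-model trial.
import numpy as np

def simulate_trial(v=0.3, a=1.0, z=0.5, ter=0.3, dt=0.001, s=1.0, rng=None):
    """Return (response, reaction_time) for one trial.
    v: drift rate, a: boundary separation, z: relative starting point,
    ter: non-decision time (encoding + motor), s: diffusion noise."""
    rng = rng or np.random.default_rng()
    x, t = z * a, 0.0
    while 0.0 < x < a:                   # accumulate until a boundary is hit
        x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("word" if x >= a else "nonword"), t + ter

rng = np.random.default_rng(2)
rts = [simulate_trial(rng=rng)[1] for _ in range(1000)]
print(f"mean simulated RT: {np.mean(rts):.3f} s")
```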


2010 ◽  
Vol 24 (1) ◽  
pp. 33-40 ◽  
Author(s):  
Miroslaw Wyczesany ◽  
Jan Kaiser ◽  
Anton M. L. Coenen

The study determines the associations between self-reported ongoing emotional state and EEG patterns. A group of 31 hospitalized patients with three types of diagnosis was enrolled: major depressive disorder, manic episode of bipolar affective disorder, and nonaffective conditions. The Thayer ADACL checklist, which yields two subjective dimensions, Energy-Tiredness (ET) and Tension-Calmness (TC), was used for the assessment of affective state. Quantitative analysis of the EEG was based on spectral power and a laterality coefficient (LC). Only the ET scale showed relationships with the laterality coefficient: the high-energy group showed a rightward shift of activity in frontocentral and posterior areas, visible in the alpha and beta ranges, respectively. No effect of ET on prefrontal asymmetry was observed. For the TC scale, high tension was related to right prefrontal dominance and right posterior activation in the beta1 band. In addition, a decrease in alpha2 power together with an increase in beta2 power was observed over the entire scalp.
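
The two quantitative EEG measures used, band-limited spectral power and the laterality coefficient, can be sketched as below. The LC formula here is the common (R - L)/(R + L) contrast over homologous electrodes, assumed for illustration; the paper may normalize differently.

```python
# Sketch: Welch band power and a right-vs-left laterality coefficient.
import numpy as np
from scipy.signal import welch

def band_power(signal, sr, lo, hi):
    freqs, psd = welch(signal, fs=sr, nperseg=sr * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])   # integrated power

def laterality_coefficient(right, left, sr, band=(8, 13)):  # alpha band
    pr, pl = band_power(right, sr, *band), band_power(left, sr, *band)
    return (pr - pl) / (pr + pl)

rng = np.random.default_rng(3)
sr = 256
f4 = rng.standard_normal(sr * 30)    # 30 s of hypothetical right-frontal EEG
f3 = rng.standard_normal(sr * 30)    # 30 s of hypothetical left-frontal EEG
print("frontal alpha LC:", round(laterality_coefficient(f4, f3, sr), 3))
```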


1999 ◽  
Vol 13 (1) ◽  
pp. 18-26 ◽  
Author(s):  
Rudolf Stark ◽  
Alfons Hamm ◽  
Anne Schienle ◽  
Bertram Walter ◽  
Dieter Vaitl

Abstract The present study investigated the influence of contextual fear, in comparison to relaxation, on heart period variability (HPV), and analyzed differences in HPV between low- and high-anxious nonclinical subjects. Fifty-three women participated in the study. Each subject underwent four experimental conditions (control, fear, relaxation, and a combined fear-relaxation condition), lasting 10 min each. Fear was provoked by an unpredictable aversive human scream; relaxation was to be induced with the aid of verbal instructions. To control for respiratory effects on HPV, breathing was paced at 0.2 Hz using an indirect light source. Besides physiological measures (HPV measures, ECG, respiration, forearm EMG, blood pressure), emotional states (pleasure, arousal, dominance, state anxiety) were assessed by subjects' self-reports. Since the relaxation instructions had no effect on either the subjective or the physiological variables, the present paper focuses on the comparison of the control and fear conditions. The scream reliably induced changes in both physiological and self-report measures. During the fear condition, subjects reported more arousal and state anxiety as well as less pleasure and dominance. Heart period decreased, while EMG and diastolic blood pressure showed a tendency to increase. HPV remained largely unaltered, with the exception of the LF component, which decreased slightly under fear induction. Replicating previous findings, trait anxiety was negatively associated with HPV, but there were no treatment-specific differences between subjects with low and high trait anxiety.
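
A frequency-domain HPV analysis consistent with the LF component reported above could proceed as in this sketch: interbeat intervals are resampled onto an even time grid and a Welch spectrum is integrated over the conventional LF band (0.04-0.15 Hz). The 4 Hz resampling rate and the band limits are standard conventions assumed here, not necessarily the authors' exact pipeline.

```python
# Sketch: LF (0.04-0.15 Hz) power of heart period variability from a
# series of interbeat intervals.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_power(ibi_ms, fs_resample=4.0):
    """ibi_ms: interbeat intervals in milliseconds; returns LF power in ms^2."""
    ibi = np.asarray(ibi_ms, dtype=float)
    beat_times = np.cumsum(ibi) / 1000.0              # beat onsets in seconds
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs_resample)
    even = interp1d(beat_times, ibi)(grid)            # evenly resampled IBI series
    freqs, psd = welch(even - even.mean(), fs=fs_resample, nperseg=256)
    band = (freqs >= 0.04) & (freqs <= 0.15)
    return psd[band].sum() * (freqs[1] - freqs[0])

rng = np.random.default_rng(4)
ibis = 800 + 50 * rng.standard_normal(600)            # ~8 min of simulated beats
print(f"LF power: {lf_power(ibis):.1f} ms^2")
```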


2018 ◽  
Author(s):  
Douglas Samuel ◽  
John D. Ranseen

Previous studies have indicated a consistent profile of basic personality traits correlated with adult Attention Deficit Hyperactivity Disorder (ADHD) (e.g., Ranseen, Campbell, & Baer, 1998; Nigg et al., 2002). In particular, research has found that low scores on the Conscientiousness trait and high scores on Neuroticism correlate with ADHD symptomatology. To date, however, there is limited information concerning the range of effects resulting from medication treatment for adult ADHD. During an 18-month period, 60 adults were diagnosed with ADHD at an outpatient clinic based on strict DSM-IV criteria. This evaluation included a battery of neuropsychological tests and a measure of general personality (i.e., the NEO PI-R). Eleven of these participants returned to complete the battery a second time. The pre-post comparisons revealed significant changes following sustained stimulant treatment on both the neuropsychological and the self-report measures. These individuals also displayed significant changes on two domains of the NEO PI-R: they showed a significant decrease on the Neuroticism domain, indicating that they now see themselves as less prone to experience negative emotional states such as anxiety and depression, and they reported a significant increase in their scores on the Conscientiousness domain, suggesting that they see themselves as more organized and dependable.

