Application of Chaos Characteristics about Physiological Signals in Emotion Recognition Based on Approximate Entropy

2014 ◽  
Vol 543-547 ◽  
pp. 2539-2542
Author(s):  
Chun Yan Nie ◽  
Hai Xin Sun ◽  
Ju Wang

Emotion recognition is an important part of affective computing and the basis for building a harmonious man-machine environment. The respiratory (RSP) signal and the electrocardiogram (ECG) signal are among the main objects of study in physiological-signal-based emotion recognition. Variations in the RSP and ECG signals are genuine expressions of human emotion, so by analyzing these signals we can recognize the inner emotional changes of human beings, which lays the foundation for the system modeling of emotion recognition. In this paper, we study approximate entropy extraction from these physiological signals and analyze the chaotic characteristics and frequency domain characteristics of the approximate entropy under different emotions. The results show that different emotional states correspond to different approximate entropy values and to different variations in the frequency domain.
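Approximate entropy (ApEn) quantifies the regularity of a time series: more regular, less chaotic signals yield lower values. As a minimal sketch of the extraction step described above, the standard Pincus formulation can be computed as follows; the tolerance r = 0.2·SD is a common heuristic, not necessarily the parameterization used in the paper:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D signal (Pincus formulation)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)  # common heuristic: 20% of the signal's SD

    def phi(m):
        # All overlapping m-length template vectors.
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Fraction of templates within tolerance r (self-matches included).
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

A periodic signal such as a sine wave produces a markedly lower ApEn than random noise of the same length, which is the property the paper exploits to separate emotional states.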

2010 ◽  
Vol 143-144 ◽  
pp. 677-681
Author(s):  
Hai Ning Wang ◽  
Shou Qian Sun ◽  
Ting Shu ◽  
Jian Feng Wu

The ability to understand human emotions is desirable for computers in many applications. Recording and recognizing physiological signals of emotion has become an increasingly important field of research in affective computing and human-computer interaction. To address the feature redundancy of physiological-signal-based emotion recognition and the low efficiency of traditional feature reduction algorithms on large sample data, this paper proposes an improved adaptive genetic algorithm (IAGA) to solve the emotion feature selection problem, and then presents a weighted kNN classifier (wkNN) that classifies features by making full use of the emotion sample information. We demonstrate a case study of an emotion recognition application and verify the algorithm's validity through the analysis of experimental simulation data and a comparison of several recognition methods.
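The abstract does not specify the wkNN weighting scheme; a common form of weighted kNN gives each of the k nearest neighbours a vote inversely proportional to its distance, which can be sketched as:

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k=5, eps=1e-9):
    """Distance-weighted kNN: each of the k nearest neighbours votes with
    weight 1/(distance + eps). Illustrative only; the paper's wkNN
    weighting may differ."""
    d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(d)[:k]               # indices of k closest samples
    votes = {}
    for i in nearest:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + 1.0 / (d[i] + eps)
    return max(votes, key=votes.get)          # label with largest total weight
```

The inverse-distance weights let close samples dominate the vote, which is one way of "making full use of emotion sample information" compared with plain majority voting.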


Emotion recognition is attracting considerable interest among researchers. Emotions are revealed through facial expressions, speech, gestures, posture, and physiological signals. Physiological signals are a plausible mechanism for recognizing emotion in human-computer interaction. The objective of this paper is to survey the recognition of emotions using physiological signals. Various emotion elicitation protocols, feature extraction techniques, and classification methods that aim at recognizing emotions from physiological signals are discussed here. The wrist pulse signal is also discussed, to fill the lacunae of the other physiological signals for emotion detection. Working on basic as well as non-basic human emotions and the human-computer interface will make such systems robust.


2013 ◽  
Vol 380-384 ◽  
pp. 3750-3753 ◽  
Author(s):  
Chun Yan Nie ◽  
Rui Li ◽  
Ju Wang

Human emotions drive changes in physiological signals, and conversely, emotional fluctuations are reflected in variations of the body's physiological signal features. Physiological signals are nonlinear, and nonlinear dynamics and biomedical engineering, both grounded in chaos theory, provide a new method for studying the parameters of these complex signals, which can hardly be described by classical theory. This paper presents a physiological-signal emotion recognition system based on chaotic characteristics, and then describes some current applications of the chaotic characteristics of multiple physiological signals to emotion recognition.


Author(s):  
Tahirou Djara ◽  
Abdoul Matine Ousmane ◽  
Antoine Vianou

Emotion recognition is an important aspect of affective computing, one of whose aims is the study and development of behavioral and emotional interaction between humans and machines. In this context, another important point concerns the acquisition devices and signal processing tools that lead to an estimation of the emotional state of the user. This article presents a survey of concepts around emotion, multimodality in recognition, physiological activities and emotional induction, and methods and tools for acquisition and signal processing, with a focus on processing algorithms and their degree of reliability.


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 4037
Author(s):  
Aasim Raheel ◽  
Muhammad Majid ◽  
Majdi Alnowami ◽  
Syed Muhammad Anwar

Emotion recognition has increased the potential of affective computing by obtaining instant feedback from users and thereby enabling a better understanding of their behavior. Physiological sensors have been used to recognize human emotions in response to audio and video content that engages single (auditory) and multiple (two: auditory and vision) human senses, respectively. In this study, human emotions were recognized using physiological signals observed in response to tactile enhanced multimedia content that engages three (tactile, vision, and auditory) human senses. The aim was to give users an enhanced real-world sensation while engaging with multimedia content. To this end, four videos were selected and synchronized with an electric fan and a heater, based on timestamps within the scenes, to generate tactile enhanced content with cold and hot air effects, respectively. Physiological signals, i.e., electroencephalography (EEG), photoplethysmography (PPG), and galvanic skin response (GSR), were recorded using commercially available sensors while experiencing these tactile enhanced videos. The precision of the acquired physiological signals is enhanced by pre-processing with a Savitzky-Golay smoothing filter. Frequency domain features (rational asymmetry, differential asymmetry, and correlation) are extracted from EEG, time domain features (variance, entropy, kurtosis, and skewness) from GSR, and heart rate and heart rate variability from PPG data. The k-nearest neighbor classifier is applied to the extracted features to classify four emotions (happy, relaxed, angry, and sad). Our experimental results show that among individual modalities, PPG-based features give the highest accuracy of 78.57% as compared to EEG- and GSR-based features. The fusion of EEG, GSR, and PPG features further improved the classification accuracy to 79.76% (for four emotions) when interacting with tactile enhanced multimedia.
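The GSR branch of the pipeline above, Savitzky-Golay smoothing followed by time-domain feature extraction, can be sketched as follows; the histogram-based entropy estimator and the filter window length are illustrative assumptions, since the abstract does not specify them:

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.stats import kurtosis, skew

def gsr_time_features(signal, window=31, polyorder=3):
    """Smooth a GSR trace with a Savitzky-Golay filter, then extract the
    four time-domain features named in the study: variance, entropy,
    kurtosis, and skewness. The 16-bin histogram Shannon entropy is one
    common estimator; the paper's exact choice is not stated."""
    smoothed = savgol_filter(signal, window_length=window, polyorder=polyorder)
    hist, _ = np.histogram(smoothed, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before taking logs
    return {
        "variance": np.var(smoothed),
        "entropy": -np.sum(p * np.log2(p)),
        "kurtosis": kurtosis(smoothed),
        "skewness": skew(smoothed),
    }
```

The Savitzky-Golay filter fits a low-order polynomial in a sliding window, so it suppresses sensor noise while preserving the slow SCR waveform shape better than a plain moving average.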


2020 ◽  
pp. 1946-1967
Author(s):  
Tahirou Djara ◽  
Abdoul Matine Ousmane ◽  
Antoine Vianou

Emotion recognition is an important aspect of affective computing, one of whose aims is the study and development of behavioral and emotional interaction between humans and machines. In this context, another important point concerns the acquisition devices and signal processing tools that lead to an estimation of the emotional state of the user. This article presents a survey of concepts around emotion, multimodality in recognition, physiological activities and emotional induction, and methods and tools for acquisition and signal processing, with a focus on processing algorithms and their degree of reliability.


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 866 ◽  
Author(s):  
SeungJun Oh ◽  
Jun-Young Lee ◽  
Dong Keun Kim

This study aimed to design an optimal emotion recognition method using multiple physiological signal parameters acquired by bio-signal sensors, in order to improve the accuracy of classifying individual emotional responses. Multiple physiological signals such as respiration (RSP) and heart rate variability (HRV) were acquired in an experiment from 53 participants while six basic emotional states were induced. Two RSP parameters were acquired from a chest-band respiration sensor, and five HRV parameters were acquired from a finger-clip blood volume pulse (BVP) sensor. A newly designed deep-learning model based on a convolutional neural network (CNN) was adopted for identifying individual emotions. Additionally, combinations of the acquired parameters were proposed to obtain high classification accuracy. Furthermore, a dominant factor influencing the accuracy was found by comparing the relative contributions of the parameters, providing a basis for supporting the results of the emotion classification. Users of the proposed model will be able to further improve the CNN-based emotion recognition model using multimodal physiological signals and their sensors.
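The abstract does not list the five HRV parameters derived from the BVP sensor; for illustration, standard time-domain HRV measures computed from inter-beat intervals could look like this:

```python
import numpy as np

def hrv_time_params(ibi_ms):
    """Standard time-domain HRV parameters from inter-beat intervals (ms),
    as would be derived from a BVP sensor. The study's exact five HRV
    parameters are not stated in the abstract; these are common choices."""
    ibi = np.asarray(ibi_ms, dtype=float)
    diff = np.diff(ibi)                             # successive IBI differences
    return {
        "mean_hr_bpm": 60000.0 / ibi.mean(),        # mean heart rate
        "sdnn": ibi.std(ddof=1),                    # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),       # short-term variability
        "pnn50": 100.0 * np.mean(np.abs(diff) > 50),  # % successive diffs > 50 ms
    }
```

Feature vectors like this, stacked per participant and emotion condition, are the kind of input a CNN-based classifier would be trained on.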


Author(s):  
M. Callejas-Cuervo ◽  
L.A. Martínez-Tejada ◽  
A.C. Alarcón-Aldana

This paper presents a system that identifies two values, arousal and valence, which represent the degree of stimulation in a subject, using Russell's model of affect as a reference. To identify emotions, a step-by-step structure is used which, based on statistical data from physiological signal metrics, generates the representative arousal value (direct correlation), and, from the PANAS questionnaire, generates the valence value (inverse correlation), as a first approximation to emotion recognition techniques without the use of artificial intelligence. The system gathers information concerning arousal activity from a subject using the following metrics: beats per minute (BPM), heart rate variability (HRV), the number of galvanic skin response (GSR) peaks in the skin conductance response (SCR), and forearm contraction time, using three physiological signals (Electrocardiogram - ECG, Galvanic Skin Response - GSR, Electromyography - EMG).
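Two of the arousal metrics above, BPM from ECG R-peaks and the SCR peak count from GSR, can be sketched as follows; the peak-height threshold is a hypothetical value for illustration, not the paper's:

```python
import numpy as np
from scipy.signal import find_peaks

def arousal_metrics(rpeak_times_s, scr, scr_min_height=0.02):
    """Compute beats per minute from ECG R-peak times (seconds) and the
    number of SCR peaks in a GSR trace. The SCR amplitude threshold is an
    illustrative assumption."""
    rr = np.diff(rpeak_times_s)          # R-R intervals in seconds
    bpm = 60.0 / rr.mean()               # mean heart rate
    peaks, _ = find_peaks(scr, height=scr_min_height)
    return {"bpm": bpm, "scr_peak_count": len(peaks)}
```

In the paper's step-by-step structure, statistics of metrics like these are mapped directly to the arousal axis of Russell's model, without a learned classifier.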


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5328
Author(s):  
Clarence Tan ◽  
Gerardo Ceballos ◽  
Nikola Kasabov ◽  
Narayan Puthanmadam Subramaniyam

Using multimodal signals to solve the problem of emotion recognition is one of the emerging trends in affective computing. Several studies have utilized state-of-the-art deep learning methods and combined physiological signals, such as the electrocardiogram (ECG), electroencephalogram (EEG), and skin temperature, along with facial expressions, voice, and posture, to name a few, in order to classify emotions. Spiking neural networks (SNNs) represent the third generation of neural networks and employ biologically plausible models of neurons. SNNs have been shown to handle spatio-temporal data, which is essentially the nature of the data encountered in emotion recognition, in an efficient manner. In this work, for the first time, we propose the application of SNNs to solve the emotion recognition problem with a multimodal dataset. Specifically, we use the NeuCube framework, which employs an evolving SNN architecture, to classify emotional valence and evaluate the performance of our approach on the MAHNOB-HCI dataset. The multimodal data used in our work consist of facial expressions along with physiological signals such as ECG, skin temperature, skin conductance, respiration signal, mouth length, and pupil size. We perform classification under the Leave-One-Subject-Out (LOSO) cross-validation mode. Our results show that the proposed approach achieves an accuracy of 73.15% for classifying binary valence when applying feature-level fusion, which is comparable to other deep learning methods. We achieve this accuracy even without using EEG, which other deep learning methods have relied on to achieve this level of accuracy. In conclusion, we have demonstrated that the SNN can be successfully used for solving the emotion recognition problem with multimodal data, and we also provide directions for future research utilizing SNNs for affective computing.
In addition to its good accuracy, the SNN recognition system is incrementally trainable on new data in an adaptive way and requires only one pass of training, which makes it suitable for practical and online applications. These features are not manifested in other methods for this problem.
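SNN frameworks such as NeuCube are built from biologically plausible spiking units. As a minimal illustration of the kind of neuron model SNNs employ (not NeuCube's actual neuron or learning model, which are more elaborate), a leaky integrate-and-fire neuron can be simulated as:

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron. Returns the membrane
    potential trace and the time steps at which the neuron spiked."""
    v = v_reset
    trace, spikes = [], []
    for t, i_t in enumerate(input_current):
        v += dt * (-v + i_t) / tau    # leaky integration toward the input
        if v >= v_thresh:             # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset               # reset membrane after the spike
        trace.append(v)
    return np.array(trace), spikes
```

The spike timing, not just the spike count, carries information, which is why SNNs suit the spatio-temporal structure of multimodal physiological data.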


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Wei Wei ◽  
Qingxuan Jia ◽  
Yongli Feng ◽  
Gang Chen

Emotion recognition is an important pattern recognition problem that has inspired researchers in several areas. Various kinds of human data have been used for emotion recognition, including visual, audio, and physiological signal data. This paper proposes a decision-level weight fusion strategy for emotion recognition in multichannel physiological signals. First, we select four kinds of physiological signals: Electroencephalography (EEG), Electrocardiogram (ECG), Respiration Amplitude (RA), and Galvanic Skin Response (GSR), and extract physiological emotion features from several analysis domains. Second, we adopt a feedback strategy for weight definition, according to the recognition rate of each emotion for each physiological signal, based on an independent Support Vector Machine (SVM) classifier. Finally, we introduce the weights at the decision level by linearly fusing the weight matrix with the classification result of each SVM classifier. Experiments on the MAHNOB-HCI database show the highest accuracy. The results also provide evidence for, and suggest a way toward, further developing a more specialized emotion recognition system based on multichannel data using a weight fusion strategy.
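The decision-level fusion step can be sketched as follows: each SVM contributes a per-class score matrix, which is multiplied by a per-emotion weight vector and summed before taking the final decision. The weights shown in use are illustrative; the paper derives them via the feedback strategy from per-emotion recognition rates:

```python
import numpy as np

def decision_level_fusion(score_matrices, weights):
    """Weighted decision-level fusion. Each classifier supplies an
    (n_samples x n_classes) score matrix; its per-class weights scale the
    scores before summation. Returns the fused class index per sample."""
    fused = np.zeros_like(score_matrices[0], dtype=float)
    for scores, w in zip(score_matrices, weights):
        fused += scores * np.asarray(w)   # w broadcasts over class columns
    return fused.argmax(axis=1)           # fused decision per sample
```

Because the weights are defined per emotion and per signal, a modality that recognizes, say, anger well can dominate the anger column without affecting the other classes.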

