Automatic Emotion Perception Using Eye Movement Information for E-Healthcare Systems

Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2826 ◽  
Author(s):  
Yang Wang ◽  
Zhao Lv ◽  
Yongjun Zheng

Engaging with adolescents and detecting their emotional state is vital for promoting rehabilitation therapy within an E-Healthcare system. Focusing on a novel approach for a sensor-based E-Healthcare system, we propose an eye movement information-based emotion perception algorithm that synchronously collects and analyzes electrooculography (EOG) signals and eye movement video. Specifically, we extract time-frequency eye movement features by first applying the short-time Fourier transform (STFT) to the raw multi-channel EOG signals. Subsequently, to integrate time-domain eye movement features (i.e., saccade duration, fixation duration, and pupil diameter), we investigate two feature fusion strategies: feature-level fusion (FLF) and decision-level fusion (DLF). Recognition experiments were also performed on three emotional states: positive, neutral, and negative. The average accuracies are 88.64% (the FLF method) and 88.35% (the DLF with maximal rule method), respectively. The experimental results reveal that eye movement information can effectively reflect the emotional state of adolescents, which provides a promising tool for improving the performance of E-Healthcare systems.
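The pipeline described in this abstract (STFT features from multi-channel EOG, fused with time-domain eye-movement features at either the feature or the decision level) can be sketched as follows. The sampling rate, synthetic signals, hypothetical time-domain feature values, and per-modality class probabilities are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)
fs = 250  # Hz, an assumed EOG sampling rate (not specified in the abstract)
eog = rng.standard_normal((4, 5 * fs))  # 4 channels, 5 s of synthetic "EOG"

def stft_features(signals, fs, nperseg=128):
    """Time-frequency features: mean STFT magnitude per frequency bin, per channel."""
    feats = []
    for ch in signals:
        f, t, Z = stft(ch, fs=fs, nperseg=nperseg)
        feats.append(np.abs(Z).mean(axis=1))
    return np.concatenate(feats)

tf_feat = stft_features(eog, fs)

# Hypothetical time-domain features: saccade duration, fixation duration, pupil diameter
td_feat = np.array([0.05, 0.31, 3.2])

# Feature-level fusion (FLF): concatenate modalities before classification
flf_vector = np.concatenate([tf_feat, td_feat])

# Decision-level fusion (DLF, maximal rule): take the element-wise max of the
# per-modality class probabilities, then pick the highest class
p_tf = np.array([0.7, 0.2, 0.1])  # positive / neutral / negative, TF classifier
p_td = np.array([0.4, 0.5, 0.1])  # same classes, time-domain classifier
dlf_decision = int(np.argmax(np.maximum(p_tf, p_td)))
print(flf_vector.shape, dlf_decision)
```

Either fused representation would then feed a conventional classifier for the three emotional states.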

2021 ◽  
Author(s):  
Kevin Tang

In this thesis, we propose Protected Multimodal Emotion Recognition (PMM-ER), an emotion recognition approach that includes security features against the growing rate of cyber-attacks on various databases, including emotion databases. An analysis of frequently used encryption algorithms led to the modified encryption algorithm proposed in this work. The system is able to recognize seven different emotional states, i.e., happiness, sadness, surprise, fear, disgust, and anger, as well as a neutral state, based on 2D video frames, 3D vertices, and audio waveform information. Several well-known features are employed, including the HSV colour feature, iterative closest point (ICP), and Mel-frequency cepstral coefficients (MFCCs). We also propose a novel approach to feature fusion covering both decision- and feature-level fusion, and compare well-known classification and feature extraction algorithms such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA).




2017 ◽  
Vol 76 (2) ◽  
pp. 71-79 ◽  
Author(s):  
Hélène Maire ◽  
Renaud Brochard ◽  
Jean-Luc Kop ◽  
Vivien Dioux ◽  
Daniel Zagar

Abstract. This study measured the effect of emotional states on lexical decision task performance and investigated which underlying components (physiological, attentional orienting, executive, lexical, and/or strategic) are affected. We did this by assessing participants’ performance on a lexical decision task, which they completed before and after an emotional state induction task. The sequence effect, usually produced when participants repeat a task, was significantly smaller in participants who had received one of the three emotion inductions (happiness, sadness, embarrassment) than in control group participants (neutral induction). Using the diffusion model (Ratcliff, 1978) to resolve the data into meaningful parameters that correspond to specific psychological components, we found that emotion induction only modulated the parameter reflecting the physiological and/or attentional orienting components, whereas the executive, lexical, and strategic components were not altered. These results suggest that emotional states have an impact on the low-level mechanisms underlying mental chronometric tasks.
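The diffusion model (Ratcliff, 1978) used here decomposes two-choice response times into interpretable parameters: a drift rate toward the correct boundary, a boundary separation, a starting point, and a non-decision time that absorbs encoding, motor, and attentional-orienting components. A minimal single-trial simulation, with purely illustrative parameter values, might look like this:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm(v=0.3, a=1.0, z=0.5, ter=0.3, dt=0.001, sigma=1.0, max_t=5.0):
    """Simulate one trial of a drift-diffusion process.
    v: drift rate, a: boundary separation, z: relative start point,
    ter: non-decision time (the kind of low-level parameter the study
    found to be modulated by emotion). All values are illustrative."""
    x, t = z * a, 0.0
    while 0.0 < x < a and t < max_t:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ter + t, x >= a  # (response time, upper-boundary response?)

rts = [simulate_ddm()[0] for _ in range(500)]
print(f"mean simulated RT: {np.mean(rts):.3f} s")
```

Fitting such a model to before/after induction data is what lets the observed sequence effect be attributed to specific parameters rather than to raw response times.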


Author(s):  
Mohammed R. Elkobaisi ◽  
Fadi Al Machot

Abstract. The use of IoT-based Emotion Recognition (ER) systems is in increasing demand in many domains, such as active and assisted living (AAL), health care, and industry. Combining emotion and context in a unified system could enhance the scope of human support, but this is currently a challenging task due to the lack of a common interface capable of providing such a combination. We therefore aim to provide a novel approach based on a modeling language that can be used even by caregivers or non-experts to model human emotion with respect to context for human support services. The proposed approach is based on a Domain-Specific Modeling Language (DSML), which helps to integrate different IoT data sources in an AAL environment. Consequently, it provides a conceptual support level related to the current emotional state of the observed subject. For the evaluation, we apply the well-validated System Usability Scale (SUS) to show that the proposed modeling language achieves high performance in terms of usability and learnability. Furthermore, we evaluate the runtime performance of model instantiation by measuring the execution time using well-known IoT services.
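The System Usability Scale used in the evaluation has a fixed scoring rule: for ten 1-to-5 Likert items, each odd (positively worded) item contributes its response minus 1, each even (negatively worded) item contributes 5 minus its response, and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch (the example responses are invented):

```python
def sus_score(responses):
    """Compute the System Usability Scale (SUS) score from ten 1-5 Likert responses.
    Odd items (index 0, 2, ...) are positively worded and contribute r - 1;
    even items are negatively worded and contribute 5 - r; the sum is scaled
    by 2.5 onto a 0-100 range."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Example: a fairly positive questionnaire
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # → 85.0
```

Scores around 68 are conventionally treated as average usability, which is why a high SUS score supports the usability claim above.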


2021 ◽  
Author(s):  
Natalia Albuquerque ◽  
Daniel S. Mills ◽  
Kun Guo ◽  
Anna Wilkinson ◽  
Briseida Resende

Abstract. The ability to infer emotional states and their wider consequences requires the establishment of relationships between the emotional display and subsequent actions. These abilities, together with the use of emotional information from others in social decision making, are cognitively demanding and require inferential skills that extend beyond the immediate perception of the current behaviour of another individual. They may include predictions of the significance of the emotional states being expressed. These abilities were previously believed to be exclusive to primates. In this study, we presented adult domestic dogs with a social interaction between two unfamiliar people, which could be positive, negative or neutral. After passively witnessing the actors engaging silently with each other and with the environment, dogs were given the opportunity to approach a food resource that varied in accessibility. We found that the available emotional information was more relevant than the motivation of the actors (i.e. giving something or receiving something) in predicting the dogs’ responses. Thus, dogs were able to access implicit information from the actors’ emotional states and appropriately use the affective information to make context-dependent decisions. The findings demonstrate that a non-human animal can actively acquire information from emotional expressions, infer some form of emotional state and use this functionally to make decisions.


2021 ◽  
pp. 1-13
Author(s):  
Pullabhatla Srikanth ◽  
Chiranjib Koley

In this work, different types of power system faults at various distances are identified using a novel approach based on the Discrete S-Transform combined with a fuzzy decision box. The area under the maximum values of the dilated Gaussian windows in the time-frequency domain is used as the critical input to the fuzzy machine. The IEEE 9-bus and IEEE 14-bus systems are considered as test systems for validating the proposed methodology for the identification and localization of power system faults. The proposed algorithm can identify different power system faults, such as asymmetrical phase faults, asymmetrical ground faults, and symmetrical phase faults, occurring at 20% to 80% of the transmission line length. The study reveals that variation in the distance and type of fault changes the time-frequency magnitude in a unique pattern. The method can identify and locate the faulted bus with higher accuracy than an SVM-based approach.
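The Discrete S-Transform at the core of this method can be computed efficiently in the frequency domain, as in Stockwell's original formulation: shift the signal's spectrum by each frequency of interest, multiply by a frequency-dependent Gaussian window, and apply an inverse FFT. The sketch below shows only that transform on a synthetic tone; the area-under-window features and the fuzzy decision stage are not reproduced:

```python
import numpy as np

def stockwell(x):
    """Minimal discrete Stockwell (S-)transform via the FFT.
    Returns an (N//2 + 1, N) time-frequency matrix; row n corresponds to
    frequency n/N cycles per sample. Illustrative implementation only."""
    N = len(x)
    H = np.fft.fft(x)
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0] = np.mean(x)  # zero-frequency row is conventionally the signal mean
    m = np.arange(N)
    for n in range(1, N // 2 + 1):
        # Gaussian window for voice n, expressed in the frequency domain,
        # with frequencies wrapped so the window is centred at zero
        p = ((m + N // 2) % N) - N // 2
        G = np.exp(-2 * np.pi**2 * p**2 / n**2)
        S[n] = np.fft.ifft(np.roll(H, -n) * G)  # shift spectrum by n, window, invert
    return S

# A 50 Hz tone sampled at 1 kHz should concentrate energy near the 50 Hz row
fs, N = 1000, 256
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 50 * t)
S = stockwell(x)
peak_row = int(np.argmax(np.abs(S).max(axis=1)))
print(peak_row * fs / N)  # ≈ 50 Hz
```

In the paper's setting, each row's dilated Gaussian window widens with frequency, and summary statistics over the resulting time-frequency magnitudes drive the fuzzy classification of fault type and location.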


Semiotica ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Amitash Ojha ◽  
Charles Forceville ◽  
Bipin Indurkhya

Abstract. Both mainstream and art comics often use various flourishes surrounding characters’ heads. These so-called “pictorial runes” (also called “emanata”) help convey the emotional states of the characters. In this paper, using (manipulated) panels from Western and Indian comic albums as well as neutral emoticons and basic shapes in different colors, we focus on the following two issues: (a) whether runes increase the awareness in comics readers about the emotional state of the character; and (b) whether a correspondence can be found between the types of runes (twirls, spirals, droplets, and spikes) and specific emotions. Our results show that runes help communicate emotion. Although no one-to-one correspondence was found between the tested runes and specific emotions, it was found that droplets and spikes indicate generic emotions, spirals indicate negative emotions, and twirls indicate confusion and dizziness.


Author(s):  
Haitham Issa ◽  
Sali Issa ◽  
Wahab Shah

This paper presents a new gender and age classification system based on electroencephalography (EEG) brain signals. First, the Continuous Wavelet Transform (CWT) is used to obtain the time-frequency information of a single EEG electrode for eight distinct emotional states, instead of the ordinary neutral or relaxed states. Then, sequential steps are implemented to extract an improved grayscale image feature. For system evaluation, a three-fold cross-validation strategy is applied to construct four different classifiers. The experimental tests show that the proposed feature with a Convolutional Neural Network (CNN) classifier improves the performance of both gender and age classification, achieving average accuracies of 96.3% and 89% for gender and age classification, respectively. Moreover, the ability to predict human gender and age under different emotional states is demonstrated in practice.
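The first step, turning a single electrode's signal into a time-frequency "grayscale image" via the CWT, can be sketched with a complex Morlet wavelet. The sampling rate, frequency range, and synthetic signal below are illustrative assumptions; the paper's electrode selection, emotion labelling, and CNN classifier are out of scope:

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Complex Morlet continuous wavelet transform by direct convolution.
    Returns |CWT| as a (len(freqs), len(x)) scalogram."""
    out = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        s = w0 * fs / (2 * np.pi * f)            # scale for centre frequency f
        h = int(min(4 * s, (len(x) - 1) // 2))   # truncate support to signal length
        tw = np.arange(-h, h + 1) / s
        wavelet = np.exp(1j * w0 * tw) * np.exp(-tw**2 / 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return out

def to_grayscale(scalogram):
    """Min-max normalise a scalogram to 0-255 grayscale pixel values."""
    lo, hi = scalogram.min(), scalogram.max()
    return ((scalogram - lo) / (hi - lo) * 255).astype(np.uint8)

fs = 128                                  # Hz, an assumed EEG sampling rate
t = np.arange(2 * fs) / fs
eeg = np.sin(2 * np.pi * 10 * t)          # synthetic 10 Hz alpha-band "EEG"
img = to_grayscale(morlet_cwt(eeg, fs, freqs=np.arange(4, 31)))
print(img.shape, img.dtype)
```

Such a normalised scalogram image is the kind of input a CNN classifier can consume directly.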

