Multimodal Approach for Emotion Recognition Using a Formal Computational Model

2013 ◽  
Vol 4 (3) ◽  
pp. 11-25 ◽  
Author(s):  
Imen Tayari Meftah ◽  
Nhan Le Thanh ◽  
Chokri Ben Amar

Emotions play a crucial role in human-computer interaction. They are generally expressed and perceived through multiple modalities, such as speech, facial expressions, and physiological signals. Indeed, the complexity of emotions makes their acquisition very difficult and renders unimodal systems (i.e., systems that observe only one source of emotion) unreliable and often unfeasible in applications of high complexity. Moreover, the lack of a standard for modeling human emotions hinders the sharing of affective information between applications. In this paper, the authors present a multimodal approach to emotion recognition from multiple sources of information. The paper aims to provide a multimodal system for emotion recognition and exchange that facilitates inter-system exchanges and improves the credibility of emotional interaction between users and computers. The authors elaborate a multimodal emotion recognition method from physiological data based on signal processing algorithms. Their method can recognize emotions composed of several aspects, such as simulated and masked emotions. The method uses a new multidimensional model to represent emotional states, based on an algebraic representation. The experimental results show that the proposed multimodal emotion recognition method improves recognition rates compared to the unimodal approach. Compared to state-of-the-art multimodal techniques, the proposed method gives good results, with 72% correct recognition.
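The abstract does not give implementation details of the algebraic model, so the following is only a minimal sketch of one way such a multidimensional representation might be combined across modalities, assuming each modality yields a vector over a shared set of basic-emotion dimensions and that fusion is a weighted vector sum. The dimension names and reliability weights are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative basic-emotion dimensions (assumed, not the paper's basis).
DIMENSIONS = ["joy", "sadness", "anger", "fear", "surprise", "disgust"]

def fuse_modalities(estimates, weights):
    """Combine per-modality emotion vectors by a weighted algebraic sum.

    estimates: dict mapping modality name -> vector over DIMENSIONS
    weights:   dict mapping modality name -> float reliability weight
    """
    fused = np.zeros(len(DIMENSIONS))
    total = sum(weights[m] for m in estimates)
    for modality, vec in estimates.items():
        fused += weights[modality] * np.asarray(vec, dtype=float)
    return fused / total  # normalized fused emotional state

# Example: speech and physiological channels giving differing estimates.
speech = [0.6, 0.1, 0.1, 0.0, 0.2, 0.0]
physio = [0.3, 0.2, 0.1, 0.2, 0.2, 0.0]
fused = fuse_modalities({"speech": speech, "physio": physio},
                        {"speech": 0.4, "physio": 0.6})
print(dict(zip(DIMENSIONS, fused.round(3))))
```

A vector representation of this kind also makes inter-system exchange straightforward, since an emotional state reduces to a fixed-length array over agreed-upon dimensions.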

2021 ◽  
Vol 70 ◽  
pp. 103029
Author(s):  
Ying Tan ◽  
Zhe Sun ◽  
Feng Duan ◽  
Jordi Solé-Casals ◽  
Cesar F. Caiafa

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yifeng Zhao ◽  
Deyun Chen

Due to the complexity of human emotions, different emotion features share some similarities. Existing emotion recognition methods suffer from difficult feature extraction and low accuracy, so a multimodal emotion recognition method based on facial expressions and EEG, using a bidirectional LSTM and an attention mechanism, is proposed. First, facial expression features are extracted with a bilinear convolutional network (BCN), EEG signals are transformed into three groups of frequency-band image sequences, and the BCN fuses the image features to obtain multimodal expression-EEG emotion features. Then, an LSTM with an attention mechanism extracts the important data during temporal modeling, which effectively avoids the randomness or blindness of sampling methods. Finally, a feature fusion network with a three-layer bidirectional LSTM structure is designed to fuse the expression and EEG features, which helps improve the accuracy of emotion recognition. The proposed method is tested on the MAHNOB-HCI and DEAP datasets using the MATLAB simulation platform. Experimental results show that the attention mechanism can enhance the visual effect of the image and that, compared with other methods, the proposed method extracts emotion features from expressions and EEG signals more effectively and achieves higher emotion recognition accuracy.
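To make the fusion stage concrete, here is a minimal PyTorch sketch of a bidirectional LSTM with temporal attention over concatenated expression and EEG feature sequences. The feature dimensions, single-layer attention scoring, and class count are assumptions for illustration; the paper's exact configuration (and its MATLAB implementation) will differ.

```python
import torch
import torch.nn as nn

class AttentiveBiLSTMFusion(nn.Module):
    """Fuse expression and EEG feature sequences with a BiLSTM + attention."""
    def __init__(self, feat_dim=256, hidden=128, num_classes=4):
        super().__init__()
        # Three-layer bidirectional LSTM over the concatenated streams.
        self.bilstm = nn.LSTM(feat_dim * 2, hidden, num_layers=3,
                              bidirectional=True, batch_first=True)
        self.attn = nn.Linear(hidden * 2, 1)   # scores each time step
        self.classifier = nn.Linear(hidden * 2, num_classes)

    def forward(self, expr_seq, eeg_seq):
        # expr_seq, eeg_seq: (batch, time, feat_dim) per-frame features
        x = torch.cat([expr_seq, eeg_seq], dim=-1)
        h, _ = self.bilstm(x)                   # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention over time
        context = (w * h).sum(dim=1)            # weighted temporal summary
        return self.classifier(context)

model = AttentiveBiLSTMFusion()
logits = model(torch.randn(2, 30, 256), torch.randn(2, 30, 256))
print(logits.shape)  # torch.Size([2, 4])
```

The attention weights play the role described in the abstract: rather than sampling frames at random, the model learns which time steps carry the most emotional information.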


Author(s):  
Jian Zhou ◽  
Xianwei Wei ◽  
Chunling Cheng ◽  
Qidong Yang ◽  
Qun Li

Entropy ◽  
2019 ◽  
Vol 21 (7) ◽  
pp. 646 ◽  
Author(s):  
Tomasz Sapiński ◽  
Dorota Kamińska ◽  
Adam Pelikant ◽  
Gholamreza Anbarjafari

Automatic emotion recognition has become an important trend in many artificial intelligence (AI) based applications and has been widely explored in recent years. Most research in the area of automated emotion recognition is based on facial expressions or speech signals. Although the influence of the emotional state on body movements is undeniable, this source of expression is still underestimated in automatic analysis. In this paper, we propose a novel method to recognise seven basic emotional states, namely happy, sad, surprise, fear, anger, disgust, and neutral, utilising body movement. We analyse motion capture data recorded under the seven basic emotional states by professional actors and actresses using a Microsoft Kinect v2 sensor. We propose a new representation of affective movements based on sequences of body joints. The proposed algorithm creates a sequential model of affective movement based on low-level features inferred from the spatial location and the orientation of joints within the tracked skeleton. In the experiments, different deep neural networks were employed and compared for recognising the emotional state of the acquired motion sequences. The results show the feasibility of automatic emotion recognition from sequences of body gestures, which can serve as an additional source of information in multimodal emotion recognition.
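As a rough illustration of turning tracked skeletons into low-level sequential features, the sketch below builds a per-frame feature vector from joint positions. The specific features (raw coordinates, per-joint velocities, and distances to the spine-base joint) are assumptions; the paper's feature set, which also includes joint orientations, is not fully specified in the abstract.

```python
import numpy as np

# Kinect v2 tracks 25 joints; each frame gives (x, y, z) per joint.
N_JOINTS = 25

def frame_features(prev_frame, frame):
    """Low-level features for one frame of a (N_JOINTS, 3) skeleton."""
    velocity = frame - prev_frame                 # per-joint displacement
    # Distance of each joint to the spine-base joint (index 0 in Kinect v2).
    offsets = np.linalg.norm(frame - frame[0], axis=1)
    return np.concatenate([frame.ravel(), velocity.ravel(), offsets])

def sequence_features(frames):
    """Stack per-frame features into a (T-1, F) sequence for a deep model."""
    return np.stack([frame_features(frames[t - 1], frames[t])
                     for t in range(1, len(frames))])

# Example: a 60-frame clip of random joint positions.
clip = np.random.randn(60, N_JOINTS, 3)
seq = sequence_features(clip)
print(seq.shape)  # (59, 175): 75 coords + 75 velocities + 25 offsets
```

Sequences of this form can then be fed to the recurrent or convolutional networks the authors compare.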


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7854
Author(s):  
Luz Santamaria-Granados ◽  
Juan Francisco Mendoza-Moreno ◽  
Angela Chantre-Astaiza ◽  
Mario Munoz-Organero ◽  
Gustavo Ramirez-Gonzalez

The collection of physiological data from people has been facilitated by the mass use of cheap wearable devices. Although their accuracy is low compared to specialized healthcare devices, they can be widely applied in other contexts. This study proposes the architecture for a tourist experiences recommender system (TERS) based on the emotional states of users who wear these devices. The issue lies in detecting emotion from Heart Rate (HR) measurements obtained from these wearables. Unlike most state-of-the-art studies, which have elicited emotions in controlled experiments and with high-accuracy sensors, this research's challenge consisted of emotion recognition (ER) in the daily life context of users based on gathered HR data. A further objective was to generate tourist recommendations considering the emotional state of the device wearer. The method comprises three main phases: the first was the collection of HR measurements and the labeling of emotions through mobile applications; the second was emotion detection using deep learning algorithms; the final phase was the design and validation of the TERS-ER. As a result, a dataset of HR measurements labeled with emotions was obtained. Among the different algorithms tested for ER, the hybrid model of Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks showed promising results. Moreover, concerning the TERS, Collaborative Filtering (CF) using CNN showed better performance.
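A hybrid CNN-LSTM over an HR time series typically lets the convolutions capture local patterns (accelerations, dips) while the LSTM models longer-range dynamics. The sketch below shows this structure; the window length, channel counts, and three emotion classes are assumed for illustration, since the study's exact architecture is not given in the abstract.

```python
import torch
import torch.nn as nn

class HREmotionNet(nn.Module):
    """Hybrid CNN-LSTM over a heart-rate time series."""
    def __init__(self, num_classes=3):
        super().__init__()
        # 1D convolutions pick up local HR patterns.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM models the longer-range temporal dynamics.
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, hr):                 # hr: (batch, window_len)
        x = self.cnn(hr.unsqueeze(1))      # (batch, 32, window_len // 4)
        x = x.transpose(1, 2)              # (batch, time, channels)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])            # logits per emotion class

model = HREmotionNet()
print(model(torch.randn(8, 120)).shape)   # torch.Size([8, 3])
```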


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Wenqiang Tian

The continuous improvement of cloud computing technology and the rapid expansion of its applications contribute greatly to promoting economic development and improving people's quality of life. Emotions play an important role in all aspects of human life, and it is difficult to avoid the influence of inner emotions on people's behavior and reasoning. This article studies a personalized emotion recognition and emotion prediction system based on cloud computing, and proposes a method of intelligently identifying users' emotional states through cloud computing. First, an emotion-induction experiment is designed to induce three basic emotional states in the testers (positive, neutral, and negative) and to collect cloud data and EEG under the different emotional states. Then, the cloud data is processed and analyzed to extract emotional features. After that, a facial emotion prediction system is constructed based on a cloud computing data model, consisting of face detection and facial emotion recognition. The system uses the SVM algorithm for face detection, a temporal-feature algorithm for facial emotion analysis, and finally a machine learning classification method to classify emotions, thereby identifying the user's emotional state through cloud computing technology. Experimental data shows that the EEG signal emotion recognition method based on time-domain features performs best, has better generalization ability, and improves on traditional methods by 6.3%. The experimental results show that the personalized emotion recognition method based on cloud computing is more effective than traditional methods.
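The abstract names time-domain EEG features and a machine-learning classifier without specifying either; the sketch below shows one common realization, pairing simple per-channel time-domain statistics with an SVM. The specific statistics (mean, standard deviation, first-difference energy), channel count, and class labels are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def time_domain_features(eeg):
    """Common time-domain statistics per EEG channel (eeg: channels x samples)."""
    diff = np.diff(eeg, axis=1)
    return np.concatenate([
        eeg.mean(axis=1),               # mean amplitude per channel
        eeg.std(axis=1),                # variability per channel
        (diff ** 2).mean(axis=1),       # first-difference energy
    ])

# Toy data: 90 trials, 8 channels, 256 samples, 3 emotion classes.
rng = np.random.default_rng(0)
X = np.stack([time_domain_features(rng.standard_normal((8, 256)))
              for _ in range(90)])
y = rng.integers(0, 3, size=90)         # 0=negative, 1=neutral, 2=positive

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```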


2021 ◽  
Vol 25 (4) ◽  
pp. 1031-1045
Author(s):  
Helang Lai ◽  
Keke Wu ◽  
Lingli Li

Emotion recognition in conversations is crucial, as there is an urgent need to improve the overall experience of human-computer interactions. A promising direction in this field is to develop a model that can effectively extract adequate context for a test utterance. We introduce a novel model, termed hierarchical memory networks (HMN), to address the problem of recognizing utterance-level emotions. HMN divides the contexts into different aspects and employs different step lengths to represent the weights of these aspects. To model self-dependencies, HMN uses independent local memory networks for these aspects. Further, to capture interpersonal dependencies, HMN employs global memory networks to integrate the local outputs into global storages. Such storages can generate contextual summaries and help find the emotionally dependent utterance that is most relevant to the test utterance. With an attention-based multi-hop scheme, these storages are then merged with the test utterance through an addition operation at each iteration. Experiments on the IEMOCAP dataset show that our model outperforms the compared methods in accuracy.
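The core mechanism, an attention-based multi-hop read over stored context with an additive merge, can be sketched as follows. The dimensions, hop count, and the use of a single shared memory (rather than HMN's separate local and global storages) are simplifying assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def multi_hop_read(query, memory, hops=3):
    """Attention-based multi-hop read over a contextual memory.

    query:  (d,) representation of the test utterance
    memory: (n, d) stored summaries of context utterances
    Merging each hop's read into the query by addition mirrors the
    additive update described in the abstract.
    """
    q = query
    for _ in range(hops):
        scores = memory @ q                      # relevance of each slot
        weights = F.softmax(scores, dim=0)       # attention over memory
        read = weights @ memory                  # weighted context summary
        q = q + read                             # additive merge per hop
    return q

query = torch.randn(64)
memory = torch.randn(10, 64)
print(multi_hop_read(query, memory).shape)  # torch.Size([64])
```

Each hop refines the query, so later hops can attend to context utterances whose relevance only emerges after the first summary is folded in.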

