Multimodal Emotion Recognition from Art Using Sequential Co-Attention

2021 ◽  
Vol 7 (8) ◽  
pp. 157
Author(s):  
Tsegaye Misikir Tashu ◽  
Sakina Hajiyeva ◽  
Tomas Horvath

In this study, we present a multimodal emotion recognition architecture that uses both feature-level attention (sequential co-attention) and modality attention (weighted modality fusion) to classify emotion in art. The proposed architecture helps the model learn informative, refined representations for both feature extraction and modality fusion. The resulting system can be used to categorize artworks according to the emotions they evoke; to recommend paintings that accentuate or balance a particular mood; and to search for paintings of a particular style or genre that depict given content in a given affective state. Experimental results on the WikiArt emotion dataset demonstrate the effectiveness of the proposed approach and the usefulness of combining the three modalities for emotion recognition.
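As a rough illustration of the two attention stages described above, the following PyTorch sketch implements a single co-attention step and a softmax-weighted modality fusion. The modality names (image regions, title tokens, comment tokens), the dimensions, and the nine-class output are assumptions for illustration, not the authors' exact design.

```python
# Minimal sketch: one co-attention step plus weighted modality fusion.
# Modality names, dimensions, and class count are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    """Attend over one modality's feature sequence, guided by a summary
    vector from another modality (one step of sequential co-attention)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim * 2, 1)

    def forward(self, feats, guide):            # feats: (B, N, D), guide: (B, D)
        guide = guide.unsqueeze(1).expand_as(feats)
        scores = self.proj(torch.cat([feats, guide], dim=-1))  # (B, N, 1)
        alpha = F.softmax(scores, dim=1)        # attention over the N features
        return (alpha * feats).sum(dim=1)       # attended summary, (B, D)

class WeightedModalityFusion(nn.Module):
    """Modality attention: score each modality summary, softmax-weighted sum."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, summaries):               # list of (B, D) tensors
        stacked = torch.stack(summaries, dim=1)           # (B, M, D)
        w = F.softmax(self.score(stacked), dim=1)         # (B, M, 1)
        return (w * stacked).sum(dim=1)                   # fused (B, D)

# Toy usage with three hypothetical modalities.
B, N, D = 4, 16, 128
img, title, comment = (torch.randn(B, N, D) for _ in range(3))
att, fuse = CoAttention(D), WeightedModalityFusion(D)
img_s = att(img, title.mean(dim=1))     # image attended, guided by the title
title_s = att(title, img_s)             # title attended, guided by image summary
comment_s = att(comment, title_s)       # each step guides the next (sequential)
logits = nn.Linear(D, 9)(fuse([img_s, title_s, comment_s]))  # 9 emotion classes, say
```

The sequential aspect lives in the usage: each attended summary guides the attention over the next modality, so later modalities are conditioned on the refined representations of earlier ones.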

2020 ◽  
Vol 10 (10) ◽  
pp. 687 ◽  
Author(s):  
Zhipeng He ◽  
Zina Li ◽  
Fuzhou Yang ◽  
Lei Wang ◽  
Jingcong Li ◽  
...  

With the continuous development of portable noninvasive human sensor technologies such as brain–computer interfaces (BCI), multimodal emotion recognition has attracted increasing attention in the area of affective computing. This paper discusses the progress of research into multimodal emotion recognition based on BCI and reviews three types of multimodal affective BCI (aBCI): aBCI based on a combination of behavior and brain signals, aBCI based on various hybrid neurophysiological modalities, and aBCI based on heterogeneous sensory stimuli. For each type, we review several representative multimodal aBCI systems, including their design principles, paradigms, algorithms, experimental results, and corresponding advantages. Finally, we identify several important open issues and research directions for multimodal emotion recognition based on BCI.
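As a concrete instance of the first aBCI category (behavior combined with brain signals), here is a minimal feature-level fusion sketch. The feature choices (EEG band powers, facial action units), the dimensions, and the synthetic labels are assumptions for illustration; no specific system from the review is reproduced.

```python
# Toy feature-level fusion of brain signals with a behavioural modality.
# All data here are simulated; feature layouts are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials = 200
eeg_feats = rng.normal(size=(n_trials, 32 * 5))   # e.g. 32 channels x 5 band powers
face_feats = rng.normal(size=(n_trials, 17))      # e.g. 17 facial action units
labels = rng.integers(0, 4, size=n_trials)        # 4 emotion classes (toy labels)

# Feature-level fusion: concatenate per-trial feature vectors, then classify.
fused = np.hstack([eeg_feats, face_feats])
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(fused[:150], labels[:150])
print("toy held-out accuracy:", clf.score(fused[150:], labels[150:]))
```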


Entropy ◽  
2019 ◽  
Vol 21 (7) ◽  
pp. 646 ◽  
Author(s):  
Tomasz Sapiński ◽  
Dorota Kamińska ◽  
Adam Pelikant ◽  
Gholamreza Anbarjafari

Automatic emotion recognition has become an important trend in many artificial intelligence (AI) based applications and has been widely explored in recent years. Most research in the area of automated emotion recognition is based on facial expressions or speech signals. Although the influence of the emotional state on body movements is undeniable, this source of expression is still underestimated in automatic analysis. In this paper, we propose a novel method to recognise seven basic emotional states (happy, sad, surprise, fear, anger, disgust, and neutral) utilising body movement. We analyse motion-capture data recorded with a Microsoft Kinect v2 sensor while professional actors performed the seven basic emotional states. We propose a new representation of affective movements based on sequences of body joints: the algorithm builds a sequential model of affective movement from low-level features inferred from the spatial location and orientation of joints within the tracked skeleton. In the experiments, different deep neural networks were trained and compared on recognising the emotional state of the acquired motion sequences. The results show the feasibility of automatic emotion recognition from sequences of body gestures, which can serve as an additional source of information in multimodal emotion recognition.
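A minimal PyTorch sketch of the pipeline described above: per-frame skeleton features (joint positions and orientations) classified with a recurrent network. The Kinect v2 skeleton has 25 joints and the seven classes follow the paper, but the feature layout and the network architecture are illustrative assumptions, since the paper compares several deep networks.

```python
# Sketch: classify a body-joint sequence into one of seven emotional states.
# Feature layout and network size are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

N_JOINTS = 25                  # Kinect v2 skeleton
FEATS_PER_JOINT = 3 + 4        # xyz position + quaternion orientation
N_EMOTIONS = 7                 # happy, sad, surprise, fear, anger, disgust, neutral

class GestureEmotionNet(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(N_JOINTS * FEATS_PER_JOINT, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_EMOTIONS)

    def forward(self, x):              # x: (batch, frames, joints * feats)
        _, (h, _) = self.lstm(x)       # h: (1, batch, hidden), final hidden state
        return self.head(h[-1])        # emotion logits per clip

model = GestureEmotionNet()
clips = torch.randn(8, 120, N_JOINTS * FEATS_PER_JOINT)  # 8 clips, 120 frames each
logits = model(clips)                  # (8, 7)
print(logits.argmax(dim=1))            # predicted emotional state per clip
```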


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Jinlei Zhang ◽  
Xue Qiu ◽  
Xiang Li ◽  
Zhijie Huang ◽  
Mingqiu Wu ◽  
...  

Emotion recognition is a research hotspot in the field of artificial intelligence. If a human-computer interaction system can sense and express emotion, interaction between robots and humans becomes more natural. In this paper, a multimodal emotion recognition model based on a many-objective optimization algorithm is proposed for the first time. The model integrates voice and facial information and can simultaneously optimize both the accuracy and the uniformity of recognition. We compare this many-objective-optimized model with the single-modal emotion recognition models proposed in this paper and with the ISMS_ALA model from recent related research. The experimental results show that, compared with single-modal emotion recognition, the proposed model improves markedly on every evaluation index. Its recognition accuracy is also 2.88% higher than that of the ISMS_ALA model. These results demonstrate that a many-objective optimization algorithm can effectively improve the performance of a multimodal emotion recognition model.
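The core idea, optimizing recognition accuracy and its uniformity across classes at the same time, can be sketched as follows. This toy uses random search over simulated unimodal scores, with two objectives standing in for the authors' many-objective evolutionary setup; the fusion weight, score tables, and class counts are all assumptions.

```python
# Toy multi-objective search over voice/face fusion weights.
# All scores are simulated; this is not the paper's evolutionary algorithm.
import numpy as np

rng = np.random.default_rng(1)
n_classes, n_samples = 6, 300
labels = rng.integers(0, n_classes, n_samples)
# Simulated per-class scores from two unimodal models (voice, face).
voice = rng.normal(size=(n_samples, n_classes))
voice[np.arange(n_samples), labels] += 1.0
face = rng.normal(size=(n_samples, n_classes))
face[np.arange(n_samples), labels] += 0.8

def objectives(w):
    """Return (accuracy, uniformity) for fusion weight w on the voice modality."""
    fused = w * voice + (1 - w) * face
    pred = fused.argmax(axis=1)
    per_class = np.array([np.mean(pred[labels == c] == c) for c in range(n_classes)])
    return per_class.mean(), -per_class.std()   # higher is better for both

candidates = rng.uniform(0, 1, size=64)
scored = [(w, *objectives(w)) for w in candidates]
# Keep the non-dominated (Pareto-optimal) candidates on both objectives.
pareto = [s for s in scored
          if not any(o[1] >= s[1] and o[2] >= s[2] and o != s for o in scored)]
print(f"{len(pareto)} Pareto-optimal fusion weights out of {len(candidates)}")
```

A many-objective method generalizes this beyond two objectives (for example, one accuracy objective per emotion class) while searching with an evolutionary algorithm instead of random sampling.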


2021 ◽  
Vol 25 (4) ◽  
pp. 1031-1045
Author(s):  
Helang Lai ◽  
Keke Wu ◽  
Lingli Li

Emotion recognition in conversations is crucial, as there is an urgent need to improve the overall experience of human-computer interactions. A promising direction in this field is to develop a model that can effectively extract adequate context for a test utterance. We introduce a novel model, termed hierarchical memory networks (HMN), to address the problem of recognizing utterance-level emotions. HMN divides the contexts into different aspects and employs different step lengths to represent the weights of these aspects. To model self-dependencies, HMN uses independent local memory networks for each aspect. To capture interpersonal dependencies, HMN employs global memory networks that integrate the local outputs into global storages. These storages generate contextual summaries and help find the emotionally dependent utterance that is most relevant to the test utterance. With an attention-based multi-hop scheme, the storages are then merged with the test utterance by an addition operation at each iteration. Experiments on the IEMOCAP dataset show that our model outperforms the compared methods in accuracy.
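Below is a minimal sketch of the attention-based multi-hop memory read with additive merging described above. Context utterance vectors serve as the memory and the test utterance as the query; HMN's split into local (self-dependency) and global (interpersonal) memories is collapsed into a single memory here for brevity, and all dimensions are illustrative.

```python
# Sketch: multi-hop attention over conversational context with additive merging.
# Single shared memory and hop count are simplifying assumptions.
import torch
import torch.nn.functional as F

def memory_hops(query, memory, n_hops=3):
    """query: (B, D) test utterance; memory: (B, T, D) context utterances."""
    for _ in range(n_hops):
        scores = torch.bmm(memory, query.unsqueeze(-1)).squeeze(-1)   # (B, T)
        alpha = F.softmax(scores, dim=-1)                             # attention over context
        summary = torch.bmm(alpha.unsqueeze(1), memory).squeeze(1)    # (B, D)
        query = query + summary     # merge by addition, as in the paper
    return query

B, T, D = 2, 10, 64
context = torch.randn(B, T, D)      # ten prior utterances in the conversation
test_utt = torch.randn(B, D)        # the utterance whose emotion we classify
enriched = memory_hops(test_utt, context)
logits = torch.nn.Linear(D, 6)(enriched)   # e.g. six IEMOCAP emotion classes
```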


2020 ◽  
Vol 79 (37-38) ◽  
pp. 27057-27074 ◽  
Author(s):  
Qiang Gao ◽  
Chu-han Wang ◽  
Zhe Wang ◽  
Xiao-lin Song ◽  
En-zeng Dong ◽  
...  
