Multimodal Sentiment Analysis
Recently Published Documents


TOTAL DOCUMENTS: 102 (FIVE YEARS: 70)
H-INDEX: 13 (FIVE YEARS: 5)

2022, Vol. 467, pp. 130-137
Author(s): Bo Yang, Bo Shao, Lijun Wu, Xiaola Lin

Sensors, 2021, Vol. 22 (1), pp. 74
Author(s): Sun Zhang, Bo Li, Chunyong Yin

The rising use of online media has changed the social habits of the public: users have become accustomed to sharing daily experiences and publishing personal opinions on social networks. Social data carrying emotion and attitude provide significant decision support for numerous tasks in sentiment analysis. Conventional methods for sentiment classification consider only the textual modality and therefore struggle in multimodal scenarios, while common multimodal approaches focus only on the interactive relationships among modalities and ignore the unique intra-modal information. This paper proposes a hybrid fusion network that captures both inter-modal and intra-modal features. First, in the representation-fusion stage, a multi-head visual attention mechanism extracts semantic and sentimental information from the textual content under the guidance of visual features. Then, in the decision-fusion stage, multiple base classifiers are trained to learn independent and diverse discriminative information from the different modal representations, and the final decision is obtained by fusing the decision supports of these base classifiers. To improve the generalization of the hybrid fusion network, a similarity loss is employed to inject decision diversity into the whole model. Empirical results on five multimodal datasets demonstrate that the proposed model achieves higher accuracy and better generalization for multimodal sentiment analysis.
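The sketch below illustrates, in rough PyTorch, one way the two fusion stages described in this abstract could be wired together; it is not the authors' code. The class name HybridFusionSketch, the feature dimensions, and the specific design choices (visual features acting as queries over textual tokens, logit averaging for decision fusion, and a pairwise cosine penalty as the similarity loss) are assumptions made for illustration.

```python
# A minimal sketch of a hybrid fusion network with representation fusion
# (visually guided multi-head attention) and decision fusion (multiple base
# classifiers plus a diversity-encouraging similarity loss). Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridFusionSketch(nn.Module):
    def __init__(self, text_dim=768, vis_dim=512, hidden=256, n_classes=3, n_heads=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.vis_proj = nn.Linear(vis_dim, hidden)
        # Representation fusion: multi-head attention guided by visual features.
        self.visual_attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        # Decision fusion: independent base classifiers per representation.
        self.text_clf = nn.Linear(hidden, n_classes)
        self.vis_clf = nn.Linear(hidden, n_classes)
        self.fused_clf = nn.Linear(hidden, n_classes)

    def forward(self, text_tokens, vis_feats):
        # text_tokens: (B, T, text_dim); vis_feats: (B, V, vis_dim)
        t = self.text_proj(text_tokens)
        v = self.vis_proj(vis_feats)
        # Assumption: visual features query the textual tokens, so the
        # attention output is text information selected under visual guidance.
        fused, _ = self.visual_attn(query=v, key=t, value=t)
        logits = [
            self.text_clf(t.mean(dim=1)),   # intra-modal (text) decision
            self.vis_clf(v.mean(dim=1)),    # intra-modal (visual) decision
            self.fused_clf(fused.mean(dim=1)),  # inter-modal decision
        ]
        # Decision fusion: average the base classifiers' decision supports.
        final = torch.stack(logits).mean(dim=0)
        # Similarity loss: penalize agreement between base classifiers so they
        # keep diverse decision boundaries (one possible reading of the paper).
        probs = [F.softmax(l, dim=-1) for l in logits]
        sim_loss = sum(
            F.cosine_similarity(probs[i], probs[j], dim=-1).mean()
            for i in range(len(probs)) for j in range(i + 1, len(probs))
        )
        return final, sim_loss
```

In this reading, the similarity loss would be added to the usual classification loss with a small weight, so the base classifiers are pushed apart without hurting accuracy.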


2021
Author(s): Wei Han, Hui Chen, Alexander Gelbukh, Amir Zadeh, Louis-Philippe Morency, ...

2021
Author(s): Kang Zhang, Yushui Geng, Jing Zhao, Wenxiao Li, Jianxin Liu

Information, 2021, Vol. 12 (9), pp. 342
Author(s): Qingfu Qi, Liyuan Lin, Rui Zhang

Multimodal sentiment analysis and emotion recognition represent a major research direction in natural language processing (NLP). With the rapid development of online media, people often express their emotions on a topic in video form, and the signals a video transmits are multimodal, spanning language, visual, and audio channels. Traditional unimodal sentiment analysis is therefore no longer sufficient, and a fusion model of multimodal information is required for sentiment understanding. In previous studies, multimodal data were fused by concatenating the feature vectors of each modality at every time step in an intermediate layer. This treats all modalities as equally important, failing to distinguish strong modal signals from weak ones, and it ignores the embedding characteristics of multimodal signals across the time dimension. In response to these problems, this paper proposes a new method and model for processing multimodal signals that accounts for the delay and hysteresis of multimodal signals across the time dimension, with the goal of obtaining a fused multimodal feature representation for sentiment analysis. We evaluate our method on the multimodal sentiment analysis benchmark dataset CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI), compare it with state-of-the-art models, and obtain excellent results.
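As a rough illustration of the idea in this abstract, the sketch below contrasts per-time-step concatenation with cross-modal attention over the whole time dimension, which lets a delayed acoustic or facial cue be aligned with the word it accompanies; it is not the paper's model. The class name TimeAwareFusionSketch, the feature dimensions (loosely based on common CMU-MOSEI feature sizes), and the single regression head are assumptions.

```python
# A minimal sketch of time-aware cross-modal fusion: text queries audio and
# visual features across all time steps instead of being concatenated with
# them step by step, so no strict 1:1 temporal alignment is assumed.
import torch
import torch.nn as nn


class TimeAwareFusionSketch(nn.Module):
    def __init__(self, text_dim=300, audio_dim=74, vis_dim=35, hidden=128, n_heads=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.vis_proj = nn.Linear(vis_dim, hidden)
        # Text attends to audio/visual over *all* time steps, so a lagging
        # (hysteretic) cue in another modality can still reach the right word.
        self.text_audio = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.text_vis = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.regressor = nn.Linear(3 * hidden, 1)  # e.g. CMU-MOSEI sentiment score

    def forward(self, text, audio, vis):
        # text: (B, Tt, text_dim); audio: (B, Ta, audio_dim); vis: (B, Tv, vis_dim)
        t = self.text_proj(text)
        a = self.audio_proj(audio)
        v = self.vis_proj(vis)
        ta, _ = self.text_audio(query=t, key=a, value=a)
        tv, _ = self.text_vis(query=t, key=v, value=v)
        # Unlike per-time-step concatenation, the sequences may have different
        # lengths and misaligned timing; attention handles the re-alignment.
        fused = torch.cat([t, ta, tv], dim=-1).mean(dim=1)
        return self.regressor(fused)
```

The contrast with the criticized baseline is that simple concatenation would require equal-length, frame-aligned sequences and would weight every modality identically at every step, whereas the attention weights here can emphasize the stronger modality at each moment.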

