Cross-media residual correlation learning

2017 · Vol 4 (1)
Author(s): Mingkuan Yuan, Xin Huang, Yuxin Peng
2018 · Vol 77 (17) · pp. 22455-22473
Author(s): Hong Zhang, Gang Dai, Du Tang, Xin Xu

Author(s): Jinwei Qi, Yuxin Peng, Yuxin Yuan

With the rapid growth of multimedia data such as images and text, effectively correlating and retrieving data across different media types remains a highly challenging problem. Naturally, when correlating an image with a textual description, people attend not only to the alignment between discriminative image regions and key words, but also to the relations within the visual and textual context. Relation understanding is therefore essential for cross-media correlation learning, yet prior cross-media retrieval work has largely ignored it. To address this issue, we propose the Cross-media Relation Attention Network (CRAN) with multi-level alignment. First, we propose a visual-language relation attention model to explore both the fine-grained patches of each media type and the relations among them. The aim is not only to exploit fine-grained local cross-media information, but also to capture intrinsic relation information, which provides complementary hints for correlation learning. Second, we propose cross-media multi-level alignment to explore global, local, and relation alignments across media types; these levels mutually reinforce one another to learn more precise cross-media correlation. We conduct experiments on 2 cross-media datasets and compare with 10 state-of-the-art methods to verify the effectiveness of the proposed approach.
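No implementation accompanies the abstract, so the following is a minimal, hypothetical PyTorch sketch of the two ideas it describes: self-attention over fine-grained patches to obtain relation-enhanced representations, and a multi-level (global, local, relation) alignment objective. All names (RelationAttention, multi_level_loss), dimensions, and the bidirectional triplet-ranking formulation are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAttention(nn.Module):
    """Self-attention over local patches (image regions or words), so each
    patch representation also encodes its relations to the other patches.
    A stand-in for the paper's visual-language relation attention model."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=heads, batch_first=True)

    def forward(self, patches):                 # (batch, n_patches, dim)
        related, _ = self.attn(patches, patches, patches)
        return related                          # relation-enhanced patches

def ranking(a, b, margin=0.2):
    """Hinge-based bidirectional ranking loss: matched image-text pairs
    (the diagonal) should score higher than mismatched pairs by a margin."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    sim = a @ b.t()                             # (batch, batch) cosine similarities
    pos = sim.diag().unsqueeze(1)               # matched-pair similarities
    cost = (margin + sim - pos).clamp(min=0)    # violations by mismatched pairs
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    return cost.masked_fill(mask, 0.0).mean()   # ignore the matched diagonal

def multi_level_loss(img_regions, txt_words, rel_img, rel_txt):
    """Sum of global, local, and relation alignment terms (one assumed
    pooling choice per level; the paper may aggregate differently)."""
    glo = ranking(img_regions.mean(1), txt_words.mean(1))               # global
    loc = ranking(img_regions.max(1).values, txt_words.max(1).values)   # local
    rel = ranking(rel_img.mean(1), rel_txt.mean(1))                     # relation
    return glo + loc + rel

# Toy usage with random features standing in for CNN / text-encoder outputs.
imgs = torch.randn(8, 36, 256)   # 36 region features per image
txts = torch.randn(8, 20, 256)   # 20 word features per sentence
rel = RelationAttention(256)
loss = multi_level_loss(imgs, txts, rel(imgs), rel(txts))
print(loss.item())
```

In this sketch each alignment level contributes its own ranking term to a joint objective, which is one plausible way for the global, local, and relation alignments to reinforce one another during training.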

