Dynamic Co-attention Network for Visual Question Answering
Author(s): Doaa B. Ebaid, Magda M. Madbouly, Adel A. El-Zoghabi

Electronics, 2020, Vol. 9 (11), p. 1882
Author(s): Cheng Yang, Weijia Wu, Yuxing Wang, Hong Zhou

Visual question answering (VQA) requires a high-level understanding of both the question and the image, along with visual reasoning, to predict the correct answer. It is therefore important to design an effective attention model that associates key regions in an image with key words in a question. To date, most attention-based approaches model only the relationships between individual image regions and individual question words. This is not sufficient for predicting the correct answer, because humans reason over global information, not only local information. In this paper, we propose a novel multi-modality global fusion attention network (MGFAN) consisting of stacked global fusion attention (GFA) blocks, which capture information from a global perspective. The proposed method computes co-attention and self-attention jointly, rather than computing them separately. We validate the proposed method on the most widely used benchmark, the VQA-v2 dataset. Experimental results show that the proposed method outperforms the previous state of the art. Our best single model achieves 70.67% accuracy on the test-dev set of VQA-v2.
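The abstract describes the key idea only at a high level: each GFA block attends over both modalities at once, so self-attention (within the image regions or within the question words) and co-attention (across modalities) emerge from a single attention pass. The sketch below illustrates one way such a block could be realized; it is not the authors' implementation, and all names, dimensions, and layer choices (concatenating the two feature sequences and applying standard multi-head attention over the joint sequence) are illustrative assumptions.

```python
# Minimal sketch of a "global fusion attention"-style block (illustrative, not the authors' code):
# image-region features and question-word features are concatenated into one sequence, and a
# single multi-head attention pass over that joint sequence yields intra-modality (self) and
# cross-modality (co) attention at the same time.

import torch
import torch.nn as nn


class GlobalFusionAttentionBlock(nn.Module):
    """Hypothetical GFA-style block: joint self-/co-attention over fused modalities."""

    def __init__(self, dim: int = 512, num_heads: int = 8) -> None:
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, img_feats: torch.Tensor, ques_feats: torch.Tensor):
        # img_feats:  (batch, num_regions, dim), e.g. detector-based region features
        # ques_feats: (batch, num_words,   dim), e.g. encoded question-word features
        fused = torch.cat([img_feats, ques_feats], dim=1)  # joint sequence of both modalities
        attended, _ = self.attn(fused, fused, fused)       # self- and co-attention in one pass
        fused = self.norm1(fused + attended)               # residual connection + layer norm
        fused = self.norm2(fused + self.ffn(fused))        # position-wise feed-forward
        n_regions = img_feats.size(1)
        return fused[:, :n_regions], fused[:, n_regions:]  # split back into the two modalities


if __name__ == "__main__":
    img = torch.randn(2, 36, 512)   # 36 detected regions per image (illustrative)
    ques = torch.randn(2, 14, 512)  # 14 question tokens (illustrative)
    block = GlobalFusionAttentionBlock()
    img_out, ques_out = block(img, ques)
    print(img_out.shape, ques_out.shape)  # torch.Size([2, 36, 512]) torch.Size([2, 14, 512])
```

Stacking several such blocks and pooling the fused outputs before an answer classifier would mirror the stacked-block design mentioned in the abstract.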


IEEE Access, 2019, Vol. 7, pp. 40771-40781
Author(s): Chao Yang, Mengqi Jiang, Bin Jiang, Weixin Zhou, Keqin Li

2018, Vol. 78 (3), pp. 3843-3858
Author(s): Liang Peng, Yang Yang, Yi Bin, Ning Xie, Fumin Shen, ...

2019, Vol. 189, p. 102829
Author(s): Nelson Ruwa, Qirong Mao, Heping Song, Hongjie Jia, Ming Dong
