Generating and Evaluating Explanations of Attended and Error-Inducing Input Regions for VQA Models

Author(s): Arijit Ray, Michael Cogswell, Xiao Lin, Kamran Alipour, Ajay Divakaran, ...

Attention maps, a popular heatmap-based explanation method for Visual Question Answering (VQA), are meant to help users understand a model by highlighting the portions of the image/question the model uses to infer answers. However, users are often misled by current attention-map visualizations, which can point to relevant regions even when the model produces an incorrect answer. We therefore propose Error Maps, which clarify such errors by highlighting image regions where the model is prone to err. Error Maps can indicate when a correctly attended region may nevertheless be processed incorrectly, leading to a wrong answer, and hence improve users' understanding of those cases. To evaluate the new explanations, we further introduce a metric that simulates users' interpretation of explanations in order to estimate their helpfulness for judging model correctness. Finally, we conduct user studies showing that our new explanations help users understand model correctness better than baselines by an expected 30%, and that our proxy helpfulness metrics correlate strongly (ρ > 0.97) with how well users can predict model correctness.
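The abstract describes the proxy metric only at a high level: it simulates how users read the explanations to predict whether the model answered correctly. A minimal sketch of one such simulation, assuming NumPy heatmaps and a thresholded attention/error overlap as the decision rule (the rule, names, and threshold are illustrative assumptions, not the authors' actual metric):

import numpy as np

def simulated_user_prediction(attn: np.ndarray, err: np.ndarray,
                              thresh: float = 0.5) -> bool:
    # Predict "model is correct" when the attended regions barely
    # overlap high-error regions (illustrative decision rule).
    overlap = float((attn * err).sum()) / (float(attn.sum()) + 1e-8)
    return overlap < thresh

def proxy_helpfulness(attns, errs, correct) -> float:
    # Fraction of examples where the simulated user's prediction
    # matches the model's actual correctness; higher means the
    # explanations are more helpful for judging correctness.
    preds = [simulated_user_prediction(a, e) for a, e in zip(attns, errs)]
    return float(np.mean([p == c for p, c in zip(preds, correct)]))

A strong correlation between such a simulated score and real users' accuracy is what the reported ρ > 0.97 would measure.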

2021 ◽ Vol 7 ◽ pp. e353
Author(s): Zhiyang Ma, Wenfeng Zheng, Xiaobing Chen, Lirong Yin

Existing joint-embedding Visual Question Answering models use different combinations of image characterization, text characterization, and feature fusion methods, but all of them rely on static word vectors for text characterization. In real language use, however, the same word may carry different meanings in different contexts and serve as different grammatical components; static word vectors cannot express these differences, which can introduce semantic and grammatical deviations. To solve this problem, our article constructs a joint-embedding model based on dynamic word vectors, the None KB-Specific Network (N-KBSN), which differs from the commonly used VQA models based on static word vectors. The N-KBSN model consists of three main parts: a question-text and image feature extraction module, a self-attention and guided-attention module, and a feature fusion and classifier module. Its key components are image characterization based on Faster R-CNN, text characterization based on ELMo, and feature enhancement based on a multi-head attention mechanism. The experimental results show that the N-KBSN constructed in our experiment outperforms both the 2017 winner (GloVe) and the 2019 winner (GloVe) models: introducing dynamic word vectors improves the overall accuracy.
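The abstract names the building blocks (Faster R-CNN region features, ELMo token embeddings, self-attention and guided attention, multi-head attention) but not the exact wiring. A minimal PyTorch sketch of one self-/guided-attention layer in that spirit, with all dimensions, names, and layer ordering assumed rather than taken from the paper:

import torch
import torch.nn as nn

class SelfGuidedAttentionLayer(nn.Module):
    # One layer: self-attention over image regions, then guided
    # attention in which regions attend to question tokens
    # (a sketch, not the authors' exact architecture).
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.guided_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, img: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        # img: (B, regions, dim), e.g. projected Faster R-CNN features
        # txt: (B, tokens, dim), e.g. projected ELMo embeddings
        x, _ = self.self_attn(img, img, img)    # regions attend to regions
        img = self.norm1(img + x)
        x, _ = self.guided_attn(img, txt, txt)  # question-guided attention
        return self.norm2(img + x)

Stacking a few such layers and then fusing the pooled image and question streams for classification would mirror the MCAN-style design of the 2019 winner that the paper compares against.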


2021
Author(s): Dezhi Han, Shuli Zhou, Kuan Ching Li, Rodrigo Fernandes de Mello

2021 ◽ Vol 1828 (1) ◽ pp. 012145
Author(s): Ye Qin, Zhiping Zhou, Chen Biao, Li Wenjie
