Ask Your Neurons: A Deep Learning Approach to Visual Question Answering

2017
Vol 125 (1-3)
pp. 110-135
Author(s):  
Mateusz Malinowski ◽  
Marcus Rohrbach ◽  
Mario Fritz

BLOCK: Bilinear Superdiagonal Fusion for Visual Question Answering and Visual Relationship Detection

Author(s):
Hedi Ben-younes ◽  
Remi Cadene ◽  
Nicolas Thome ◽  
Matthieu Cord

Multimodal representation learning is gaining more and more interest within the deep learning community. While bilinear models provide an interesting framework for finding subtle combinations of modalities, their number of parameters grows quadratically with the input dimensions, making their practical implementation within classical deep learning pipelines challenging. In this paper, we introduce BLOCK, a new multimodal fusion framework based on the block-superdiagonal tensor decomposition. It leverages the notion of block-term ranks, which generalizes both the rank and the mode ranks of tensors, already used for multimodal fusion. This makes it possible to define new ways of optimizing the trade-off between the expressiveness and the complexity of the fusion model, and to represent very fine interactions between modalities while maintaining powerful mono-modal representations. We demonstrate the practical interest of our fusion model by using BLOCK for two challenging tasks: Visual Question Answering (VQA) and Visual Relationship Detection (VRD), for which we design end-to-end learnable architectures that represent the relevant interactions between modalities. Through extensive experiments, we show that BLOCK compares favorably with state-of-the-art multimodal fusion models on both the VQA and VRD tasks. Our code is available at https://github.com/Cadene/block.bootstrap.pytorch.
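To make the block-superdiagonal idea concrete, here is a minimal PyTorch sketch of block-term bilinear fusion: each modality is projected, split into chunks, and fused chunk by chunk with small full bilinear maps, so the global interaction tensor is block-superdiagonal rather than dense and the parameter count grows linearly with the number of chunks. All dimensions, chunk counts, and layer names are illustrative assumptions, not the authors' released implementation (see the repository above for that).

```python
# Hedged sketch of block-superdiagonal bilinear fusion; dimensions and names
# are assumptions for illustration, not the BLOCK authors' code.
import torch
import torch.nn as nn

class BlockFusion(nn.Module):
    def __init__(self, dim_q=2048, dim_v=2048, dim_out=3000,
                 n_chunks=15, chunk_dim=80):
        super().__init__()
        hidden = n_chunks * chunk_dim
        # Mono-modal projections keep strong per-modality representations.
        self.proj_q = nn.Linear(dim_q, hidden)
        self.proj_v = nn.Linear(dim_v, hidden)
        # One small full bilinear map per chunk: the interaction tensor is
        # block-superdiagonal, avoiding a dense (hidden x hidden x out) tensor.
        self.bilinears = nn.ModuleList(
            nn.Bilinear(chunk_dim, chunk_dim, chunk_dim) for _ in range(n_chunks)
        )
        self.proj_out = nn.Linear(hidden, dim_out)

    def forward(self, q, v):
        q_chunks = self.proj_q(q).chunk(len(self.bilinears), dim=-1)
        v_chunks = self.proj_v(v).chunk(len(self.bilinears), dim=-1)
        fused = [b(qc, vc) for b, qc, vc in zip(self.bilinears, q_chunks, v_chunks)]
        return self.proj_out(torch.cat(fused, dim=-1))

# Usage: fuse a question embedding with a visual embedding for VQA.
q = torch.randn(4, 2048)        # batch of question features
v = torch.randn(4, 2048)        # batch of visual features
logits = BlockFusion()(q, v)    # (4, 3000) answer scores
```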


2020
Author(s):  
Widodo Budiharto ◽  
Vincent Andreas ◽  
Alexander Agung Santoso Gunawan

Abstract: The development of intelligent humanoid robots that focus on question answering systems able to interact with people is still very rare. In this research, we propose a humanoid robot with self-learning capability that accepts questions from and gives responses to people, based on deep learning and big data from the internet. Such a robot can be used widely in hotels, universities, and public services. The humanoid robot should consider the style of the question and derive the answer through conversation between the robot and the user. In our scenario, the robot detects the user's face and accepts commands from the user to perform an action; the user's question is processed using deep learning, and the result is compared with the knowledge stored in the system. Our deep learning approach is based on GRU/LSTM, CNN, and BiDAF, with the large-scale SQuAD corpus as the training dataset. Our experiments indicate that a GRU/LSTM encoder with BiDAF yields higher Exact Match and F1 scores than CNN with BiDAF.
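As a rough illustration of the encoder/attention pairing compared in the experiments, the following sketch wires a bidirectional GRU encoder into BiDAF-style attention flow (trilinear similarity, context-to-query and query-to-context attention). Layer sizes, the shared encoder, and all names are assumptions for illustration; the paper's exact architecture is not reproduced here.

```python
# Hedged sketch of a GRU encoder feeding BiDAF-style attention flow.
import torch
import torch.nn as nn

class GruBiDAF(nn.Module):
    def __init__(self, vocab=30000, emb=100, hid=100):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        # A single bidirectional GRU encodes both context and question
        # tokens (shared weights, for simplicity of the sketch).
        self.enc = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        d = 2 * hid
        # Trilinear similarity: S[t, j] = w . [h_t; u_j; h_t * u_j]
        self.w = nn.Linear(3 * d, 1, bias=False)

    def forward(self, ctx_ids, qry_ids):
        h, _ = self.enc(self.embed(ctx_ids))   # (B, T, d) context states
        u, _ = self.enc(self.embed(qry_ids))   # (B, J, d) question states
        B, T, d = h.shape
        J = u.size(1)
        hh = h.unsqueeze(2).expand(B, T, J, d)
        uu = u.unsqueeze(1).expand(B, T, J, d)
        S = self.w(torch.cat([hh, uu, hh * uu], dim=-1)).squeeze(-1)  # (B, T, J)
        # Context-to-query: attend over question words per context word.
        c2q = torch.softmax(S, dim=2) @ u                    # (B, T, d)
        # Query-to-context: attend over the most query-relevant context words.
        b = torch.softmax(S.max(dim=2).values, dim=1)        # (B, T)
        q2c = (b.unsqueeze(1) @ h).expand(B, T, d)           # (B, T, d)
        # Query-aware context representation, fed to an answer decoder.
        return torch.cat([h, c2q, h * c2q, h * q2c], dim=-1)
```

Swapping the GRU encoder for a convolutional one (the CNN variant in the comparison) would replace `self.enc` with a stack of 1-D convolutions over the embedded tokens while keeping the attention flow unchanged.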


Author(s):  
Somak Aditya ◽  
Yezhou Yang ◽  
Chitta Baral

Deep learning based data-driven approaches have been successfully applied in various image understanding applications, ranging from object recognition and semantic segmentation to visual question answering. However, the lack of knowledge integration and of higher-level reasoning capabilities in these methods still poses a hindrance. In this work, we present a brief survey of a few representative reasoning mechanisms, knowledge integration methods, and their corresponding image understanding applications developed by various groups of researchers approaching the problem from a variety of angles. Furthermore, we discuss key efforts on integrating external knowledge with neural networks. Taking cues from these efforts, we conclude by discussing potential pathways to improve reasoning capabilities.


MedFuseNet: An attention-based multimodal deep learning model for visual question answering in the medical domain

2021
Vol 11 (1)
Author(s):  
Dhruv Sharma ◽  
Sanjay Purushotham ◽  
Chandan K. Reddy

Abstract: Medical images are difficult to comprehend for a person without expertise. Medical practitioners are scarce across the globe, and those available often face physical and mental fatigue due to the high number of cases, inducing human errors during diagnosis. In such scenarios, an additional opinion can help boost the confidence of the decision maker. Thus, it becomes crucial to have a reliable visual question answering (VQA) system that can provide a ‘second opinion’ on medical cases. However, most VQA systems available today cater to general real-world problems and are not specifically tailored to medical images. Moreover, a VQA system for medical images must cope with the limited amount of training data available in this domain. In this paper, we develop MedFuseNet, an attention-based multimodal deep learning model for VQA on medical images that takes these challenges into account. MedFuseNet aims at maximizing learning with minimal complexity by breaking the problem into simpler tasks before predicting the answer. We tackle two types of answer prediction: categorization and generation. We conducted an extensive set of quantitative and qualitative analyses to evaluate the performance of MedFuseNet. Our experiments demonstrate that MedFuseNet outperforms state-of-the-art VQA methods, and visualizations of the captured attention showcase the interpretability of our model's predictions.
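As an illustration of the general attention-based multimodal design described above, the sketch below scores image regions against the question, pools them into a single attended visual vector, and feeds a categorization head over a fixed answer vocabulary. This is a generic question-guided attention module under assumed names and dimensions, not the authors' MedFuseNet implementation.

```python
# Hedged sketch of question-guided attention over image regions with a
# categorization head; all names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class AttnVQA(nn.Module):
    def __init__(self, d_img=2048, d_q=1024, d=512, n_answers=500):
        super().__init__()
        self.img_proj = nn.Linear(d_img, d)
        self.q_proj = nn.Linear(d_q, d)
        self.attn = nn.Linear(d, 1)
        self.classify = nn.Linear(2 * d, n_answers)

    def forward(self, regions, question):
        # regions: (B, R, d_img) image-region features; question: (B, d_q)
        v = self.img_proj(regions)                          # (B, R, d)
        q = self.q_proj(question)                           # (B, d)
        # Score each region by its affinity with the question.
        scores = self.attn(torch.tanh(v + q.unsqueeze(1)))  # (B, R, 1)
        alpha = torch.softmax(scores, dim=1)                # attention weights
        attended = (alpha * v).sum(dim=1)                   # (B, d)
        # Categorization head: pick an answer from a fixed vocabulary.
        # (A generation head would replace this with a sequence decoder.)
        return self.classify(torch.cat([attended, q], dim=-1))
```

The attention weights `alpha` are what such models visualize to show which image regions drove a prediction, which is the kind of interpretability evidence the abstract refers to.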

