Reframing Explanation as an Interactive Medium: The EQUAS (Explainable QUestion Answering System) Project

Author(s):  
Dhruv Batra ◽  
William Ferguson ◽  
Raymond Mooney ◽  
Devi Parikh ◽  
Antonio Torralba ◽  
...  

This letter provides a retrospective analysis of our team's research performed under the DARPA Explainable Artificial Intelligence (XAI) project. We began by exploring salience maps, English sentences, and lists of feature names for explaining the behavior of deep-learning-based discriminative systems, especially visual question answering systems. We demonstrated limited positive effects from statically presenting explanations along with system answers, for example when teaching people to identify bird species. Many XAI performers were getting better results when users interacted with explanations. This motivated us to evolve the notion of explanation as an interactive medium, usually between humans and AI systems but sometimes within the software system. We realized that interacting via explanations could enable people to task and adapt ML agents. We added affordances for editing explanations and modified the ML system to act in accordance with the edits, producing an interpretable interface to the agent. Through this interface, editing an explanation can adapt a system's performance to new, modified purposes. This deep tasking, wherein the agent knows its objective and the explanation for that objective, will be critical to enabling higher levels of autonomy.


2021 ◽  
Vol 47 (05) ◽  
Author(s):  
NGUYỄN CHÍ HIẾU

In recent years, knowledge graphs have been applied in many fields such as search engines, semantic analysis, and question answering. However, there are many obstacles to building knowledge graphs, including methodologies, data, and tools. This paper introduces a novel methodology for building a knowledge graph from heterogeneous documents. We use natural language processing and deep learning methods to build the graph. The resulting knowledge graph can be used in question answering systems and information retrieval, especially in the computing domain.
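As a concrete illustration of one step such a pipeline typically involves, the sketch below extracts crude (subject, relation, object) triples from raw text using spaCy's dependency parser. The model name and the naive subject-verb-object heuristic are illustrative assumptions, not the paper's actual method:

```python
# Minimal sketch: harvesting candidate knowledge-graph triples from text.
# Assumes the small English spaCy pipeline is installed
# (python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(text):
    """Yield crude (subject, relation, object) triples from each sentence."""
    doc = nlp(text)
    for sent in doc.sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
                for s in subjects:
                    for o in objects:
                        yield (s.text, token.lemma_, o.text)

triples = list(extract_triples("Neural networks approximate nonlinear functions."))
print(triples)  # e.g. [('networks', 'approximate', 'functions')]
```

A real system would add entity linking and deep-learning-based relation classification on top of such candidates before inserting them into the graph.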


Author(s):  
Tianyong Hao ◽  
Feifei Xu ◽  
Jingsheng Lei ◽  
Liu Wenyin ◽  
Qing Li

This paper proposes a strategy for automatic answer retrieval for repeated or similar questions in user-interactive systems by employing semantic question patterns. A semantic question pattern is a generalized representation of a group of questions with similar structure and relevant semantics. Specifically, it consists of semantic annotations (or constraints) on the variable components in the pattern, which enriches the semantic representation and greatly reduces the ambiguity of a question instance asked by a user via such a pattern. The proposed method consists of four major steps: structure processing; similar pattern matching and filtering; automatic pattern generation; and question similarity evaluation with answer retrieval. Preliminary experiments in a real question answering system show that the method achieves a precision of more than 90%.
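The core idea, a pattern whose variable components carry semantic constraints, can be sketched in a few lines. The bracketed placeholder syntax, the toy type lexicon, and the helper names below are illustrative assumptions, not the paper's actual formalism:

```python
# Minimal sketch: matching a question against a semantic question pattern.
import re

# Each variable component carries a semantic constraint, e.g. [City].
TYPE_LEXICON = {
    "City": {"paris", "london", "hanoi"},
}

def compile_pattern(pattern):
    """Convert 'what is the population of [City]' into (regex, slot types)."""
    slot_types = re.findall(r"\[(\w+)\]", pattern)
    regex = re.escape(pattern)
    # Turn each escaped [Type] placeholder into a capture group.
    regex = re.sub(r"\\\[\w+\\\]", r"(\\w+)", regex)
    return re.compile("^" + regex + "$", re.IGNORECASE), slot_types

def match_question(question, pattern):
    regex, slot_types = compile_pattern(pattern)
    m = regex.match(question)
    if not m:
        return None
    bindings = dict(zip(slot_types, m.groups()))
    # Enforce the semantic constraint attached to each variable component.
    for slot_type, value in bindings.items():
        if value.lower() not in TYPE_LEXICON.get(slot_type, set()):
            return None
    return bindings

print(match_question("what is the population of Hanoi",
                     "what is the population of [City]"))  # {'City': 'Hanoi'}
```

The semantic check is what distinguishes this from plain template matching: "what is the population of democracy" would fail the [City] constraint and be rejected instead of retrieving a wrong answer.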


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 94341-94356
Author(s):  
Zhen Huang ◽  
Shiyi Xu ◽  
Minghao Hu ◽  
Xinyi Wang ◽  
Jinyan Qiu ◽  
...  

2020 ◽  
Vol 38 (02) ◽  
Author(s):  
TẠ DUY CÔNG CHIẾN

In recent years, question answering systems have been applied to many different fields, such as education, business, and surveys. The purpose of these systems is to automatically answer users' questions or queries about certain problems. This paper introduces a question answering system built on a domain-specific ontology. This ontology, which contains the data and vocabulary related to the computing domain, is built from text documents of the ACM Digital Library. Consequently, the system only answers questions pertaining to information technology domains such as databases, networks, machine learning, etc. We use natural language processing methodologies and the domain ontology to build this system. To increase performance, we store the computing ontology in a graph database and use a NoSQL database for querying its data.
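A minimal sketch of how an answer might be retrieved from an ontology stored in a graph database is shown below, using the Neo4j Python driver. The node label, relationship type, URI, and credentials are illustrative assumptions about the schema, not the system's actual storage layout:

```python
# Minimal sketch: answering a domain question by traversing the ontology
# graph. Assumes a Neo4j server is running with Concept nodes connected
# by RELATED_TO relationships.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def related_concepts(term):
    """Return concepts linked to `term` in the computing ontology."""
    query = (
        "MATCH (c:Concept {name: $name})-[:RELATED_TO]->(other:Concept) "
        "RETURN other.name AS name"
    )
    with driver.session() as session:
        return [record["name"] for record in session.run(query, name=term)]

print(related_concepts("machine learning"))
```

Storing the ontology as a graph makes such neighborhood queries a single traversal rather than a chain of relational joins, which is the performance motivation the abstract alludes to.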


2020 ◽  
Author(s):  
Widodo Budiharto ◽  
Vincent Andreas ◽  
Alexander Agung Santoso Gunawan

Development of intelligent humanoid robots that use question answering systems to interact with people is very rare. In this research, we propose a humanoid robot with self-learning capability for accepting questions from people and responding to them, based on deep learning and big data from the internet. This kind of robot can be used widely in hotels, universities, and public services. The humanoid robot should consider the style of the question and derive the answer through conversation between robot and user. In our scenario, the robot detects the user's face and accepts commands from the user to perform an action; the user's question is processed using deep learning, and the result is compared with knowledge on the system. We propose a deep learning approach based on GRU/LSTM, CNN, and BiDAF, with the large SQuAD corpus as the training dataset. Our experiments indicate that using a GRU/LSTM encoder with BiDAF gives higher Exact Match and F1 scores than a CNN encoder with the BiDAF model.
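To make the encoder comparison concrete, the sketch below shows the core of BiDAF in PyTorch: contextual encodings from a bidirectional LSTM (the encoder variant the abstract favors), a context-question similarity matrix, and context-to-query attention. Dimensions and layer sizes are illustrative assumptions, and the dot-product similarity stands in for BiDAF's trilinear form:

```python
# Minimal sketch: BiDAF-style context-to-query attention over LSTM encodings.
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden = 100
encoder = nn.LSTM(input_size=300, hidden_size=hidden,
                  batch_first=True, bidirectional=True)

context = torch.randn(1, 120, 300)   # (batch, context_len, embedding_dim)
question = torch.randn(1, 15, 300)   # (batch, question_len, embedding_dim)

H, _ = encoder(context)              # (1, 120, 2*hidden)
U, _ = encoder(question)             # (1, 15, 2*hidden)

# Similarity S[t, j] between context word t and question word j.
S = torch.bmm(H, U.transpose(1, 2))  # (1, 120, 15)
a = F.softmax(S, dim=2)              # attention over question words per context word
U_tilde = torch.bmm(a, U)            # query-aware context vectors, (1, 120, 2*hidden)

# Fused representation fed to the modeling layer that predicts answer spans.
G = torch.cat([H, U_tilde, H * U_tilde], dim=2)
print(G.shape)                       # torch.Size([1, 120, 600])
```

Swapping the `nn.LSTM` encoder for a convolutional one is the comparison the experiment reports; the attention machinery stays the same.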


Author(s):  
Evren Dağlarli

Explainable artificial intelligence (xAI) is one of the interesting issues that has emerged recently. Many researchers are approaching the subject from different dimensions, and interesting results have already appeared. However, we are still at the beginning of the road to understanding these types of models. The coming years are expected to be years in which the explainability of deep learning models is widely discussed. In classical artificial intelligence approaches, we frequently encounter the deep learning methods available today. These deep learning methods can yield highly effective results depending on the dataset size, dataset quality, the methods used for feature extraction, the hyperparameter set used in deep learning models, the activation functions, and the optimization algorithms. However, current deep learning models have important shortcomings. These artificial neural network-based models are black box models that generalize from the data transmitted to them and learn from that data. Therefore, the relational link between input and output is not observable. This is an important open problem in artificial neural networks and deep learning models. For these reasons, serious effort is needed on the explainability and interpretability of black box models.
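One widely used post-hoc technique for probing the input-output link the abstract describes is a gradient saliency map. The sketch below is a minimal illustration with a toy classifier (an assumption for demonstration); the technique applies to any differentiable model:

```python
# Minimal sketch: gradient saliency for a black box differentiable model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 10, requires_grad=True)  # one input example
score = model(x)[0, 1]                      # logit of the class to explain
score.backward()                            # backpropagate to the input

saliency = x.grad.abs().squeeze()           # |d score / d input_i| per feature
print(saliency)  # larger values = features the prediction is more sensitive to
```

Such maps do not open the black box, but they at least expose which input dimensions the prediction depends on, which is the first step toward the interpretability the author calls for.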


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. 5555-5555
Author(s):  
Okyaz Eminaga ◽  
Andreas Loening ◽  
Andrew Lu ◽  
James D Brooks ◽  
Daniel Rubin

Background: Variation in human perception has limited the potential of multi-parametric magnetic resonance imaging (mpMRI) of the prostate for detecting prostate cancer and identifying significant prostate cancer. The current study aims to overcome this limitation by utilizing explainable artificial intelligence to leverage the diagnostic potential of mpMRI in detecting prostate cancer (PCa) and determining its significance. Methods: A total of 6,020 MR images from 1,498 cases were considered (1,785 T2 images, 2,719 DWI images, and 1,516 ADC maps). Treatment determined the significance of PCa: cases that received radical prostatectomy were considered significant, whereas cases on active surveillance followed for at least two years were considered insignificant. The negative biopsy cases had either a single biopsy setting or multiple biopsy settings with PCa excluded. The images were randomly divided into development (80%) and test (20%) sets after stratifying by case within each image type. The development set was then divided into a training set (90%) and a validation set (10%). We developed deep learning models for PCa detection and for determination of significant PCa based on the PlexusNet architecture, which supports explainable deep learning and volumetric input data. The input data for PCa detection were T2-weighted images, whereas the input data for determining significant PCa included all image types. Performance for PCa detection and determination of significant PCa was measured using the area under the receiver operating characteristic curve (AUROC) and compared to the maximum PI-RADS score (version 2) at the case level. Bootstrap resampling with 10,000 iterations was applied to estimate the 95% confidence interval (CI) of the AUROC. Results: The AUROC for PCa detection was 0.833 (95% CI: 0.788-0.879), compared to 0.75 (0.718-0.764) for the PI-RADS score. The DL models detecting significant PCa using the ADC map or DWI images achieved the highest AUROC [ADC: 0.945 (95% CI: 0.913-0.982); DWI: 0.912 (95% CI: 0.871-0.954)] compared to a DL model using T2-weighted images (0.850; 95% CI: 0.791-0.908) or PI-RADS scores (0.604; 95% CI: 0.544-0.663). Finally, the attention map of PlexusNet from mpMRI with PCa correctly highlighted areas containing PCa after matching with the corresponding prostatectomy slice. Conclusions: We found that explainable deep learning is feasible on mpMRI and achieves high accuracy in determining cases with PCa and identifying cases with significant PCa.
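The evaluation protocol described here, AUROC with a 95% CI from 10,000 bootstrap resamples, can be sketched as follows. The synthetic labels and scores are illustrative stand-ins for a model's test-set output, not the study's data:

```python
# Minimal sketch: AUROC with a 95% bootstrap confidence interval.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)              # ground-truth labels (toy)
y_score = y_true * 0.5 + rng.random(500) * 0.8     # imperfect model scores (toy)

aucs = []
for _ in range(10_000):
    idx = rng.integers(0, len(y_true), size=len(y_true))  # resample with replacement
    if len(np.unique(y_true[idx])) < 2:                   # AUROC needs both classes
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUROC {roc_auc_score(y_true, y_score):.3f} (95% CI: {lo:.3f}-{hi:.3f})")
```

Taking the 2.5th and 97.5th percentiles of the resampled AUROC distribution yields the nonparametric 95% CI reported in the Results.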

