explanation generation
Recently Published Documents


TOTAL DOCUMENTS: 47 (FIVE YEARS: 18)

H-INDEX: 7 (FIVE YEARS: 1)

2021 ◽  
Vol 8 (04) ◽  
Author(s):  
Aniket Joshi ◽  
Jayanthi Sivaswamy ◽  
Gopal Datt Joshi

Author(s):  
Sharmi Dev Gupta ◽  
Begum Genc ◽  
Barry O'Sullivan

Much of the work on explanation in the field of artificial intelligence has focused on machine learning methods and, in particular, on concepts produced by advanced methods such as neural networks and deep learning. However, there has been a long history of explanation generation in the general field of constraint satisfaction, one of AI's most ubiquitous subfields. In this paper we survey the major seminal papers on explanation and constraints, as well as some more recent works. The survey sets out to unify many disparate lines of work in areas such as model-based diagnosis, constraint programming, Boolean satisfiability, truth maintenance systems, quantified logics, and related areas.
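One recurring notion in this literature is the minimal conflict (or minimal unsatisfiable subset): a subset-minimal set of constraints that cannot all hold, which serves as an explanation of failure. The snippet below is only an illustrative sketch of that idea, not code from the survey; the toy CSP, the constraint names, and the brute-force satisfiability check are assumptions made for the example, with the standard deletion-based shrinking loop on top.

```python
from itertools import product

# Toy CSP over small finite domains; constraints are named predicates.
domains = {"x": [1, 2, 3], "y": [1, 2, 3], "z": [1, 2, 3]}
constraints = {
    "x_lt_y": lambda a: a["x"] < a["y"],
    "y_lt_z": lambda a: a["y"] < a["z"],
    "z_lt_x": lambda a: a["z"] < a["x"],   # with the two above: an unsatisfiable cycle
    "y_le_3": lambda a: a["y"] <= 3,       # always satisfiable, irrelevant to the failure
}

def satisfiable(subset):
    """Brute-force check: does any assignment satisfy every constraint in `subset`?"""
    names = list(domains)
    for values in product(*(domains[v] for v in names)):
        assignment = dict(zip(names, values))
        if all(constraints[c](assignment) for c in subset):
            return True
    return False

def minimal_conflict(conflict):
    """Deletion-based shrinking: drop a constraint whenever the rest stays unsatisfiable."""
    core = list(conflict)
    for c in list(core):
        rest = [d for d in core if d != c]
        if not satisfiable(rest):
            core = rest
    return core

if not satisfiable(constraints):
    print("Minimal conflict (explanation):", minimal_conflict(constraints))
    # -> ['x_lt_y', 'y_lt_z', 'z_lt_x']: the cyclic ordering constraints explain the failure.
```

The returned core is the kind of explanation many of the surveyed approaches compute far more efficiently with dedicated solvers; the brute-force check here is only to keep the sketch self-contained.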


2021 ◽  
Vol 10 (3) ◽  
pp. 1-31
Author(s):  
Zhao Han ◽  
Daniel Giger ◽  
Jordan Allspaw ◽  
Michael S. Lee ◽  
Henny Admoni ◽  
...  

As autonomous robots continue to be deployed near people, robots need to be able to explain their actions. In this article, we focus on organizing and representing complex tasks in a way that makes them readily explainable. Many actions consist of sub-actions, each of which may have several sub-actions of its own, and the robot must be able to represent these complex actions before it can explain them. To generate explanations for robot behavior, we propose using Behavior Trees (BTs), which are a powerful and rich tool for robot task specification and execution. However, for BTs to be used for robot explanations, their free-form, static structure must be adapted. In this work, we add structure to previously free-form BTs by framing them as a set of semantic sets {goal, subgoals, steps, actions} and subsequently build explanation-generation algorithms that answer questions seeking causal information about robot behavior. We make BTs less static with an algorithm that inserts a subgoal that satisfies all dependencies. We evaluate our BTs for robot explanation generation in two domains: a kitting task to assemble a gearbox, and a taxi simulation. Code for the behavior trees (in XML) and all the algorithms is available at github.com/uml-robotics/robot-explanation-BTs.
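The repository above contains the authors' XML behavior trees and algorithms; the sketch below only illustrates the general idea of framing a task as nested {goal, subgoals, steps, actions} and answering a "Why did you do X?" question by walking up that hierarchy. The Node class, its fields, and the gearbox labels are illustrative assumptions, not the paper's API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """One level of the assumed {goal, subgoals, steps, actions} framing."""
    label: str                     # human-readable name, e.g. "pick up the small gear"
    kind: str                      # "goal" | "subgoal" | "step" | "action"
    children: List["Node"] = field(default_factory=list)
    parent: Optional["Node"] = None

    def add(self, child: "Node") -> "Node":
        child.parent = self
        self.children.append(child)
        return child

def why(node: Node) -> str:
    """Answer 'Why did you do <node>?' by citing the enclosing step, subgoal, and goal."""
    chain = []
    cur = node.parent
    while cur is not None:
        chain.append(f"to {cur.label} ({cur.kind})")
        cur = cur.parent
    return f"I did '{node.label}' " + ", ".join(chain) + "."

# Example: a tiny slice of a gearbox kitting task (labels are made up).
goal = Node("assemble the gearbox kit", "goal")
sub = goal.add(Node("prepare the gear tray", "subgoal"))
step = sub.add(Node("pick up the small gear", "step"))
act = step.add(Node("close the gripper", "action"))

print(why(act))
# I did 'close the gripper' to pick up the small gear (step),
# to prepare the gear tray (subgoal), to assemble the gearbox kit (goal).
```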


2021 ◽  
pp. 1-14
Author(s):  
Tatsuya Sakai ◽  
Kazuki Miyazawa ◽  
Takato Horii ◽  
Takayuki Nagai

Robotics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 51
Author(s):  
Misbah Javaid ◽  
Vladimir Estivill-Castro

Typically, humans interact with a humanoid robot with apprehension. This lack of trust can seriously affect the effectiveness of a team of robots and humans. We can create effective interactions that generate trust by augmenting robots with an explanation capability. The explanations provide justification and transparency for the robot's decisions. To demonstrate such effective interaction, we evaluated our approach in an interactive, game-playing environment with partial information that requires team collaboration, using a game called Spanish Domino. We partner a robot with a human to form a pair, and this team opposes a team of two humans. We performed a user study with sixty-three human participants in different settings, investigating the effect of the robot's explanations on the humans' trust and perception of the robot's behaviour. Our explanation-generation mechanism produces natural-language sentences that translate the decision taken by the robot into human-understandable terms. We video-recorded all interactions to analyse factors such as the participants' relational behaviours with the robot, and we also used questionnaires to measure the participants' explicit trust in the robot. Overall, our main results demonstrate that explanations improved the participants' understanding of the robot's decisions, and we observed a significant increase in the participants' level of trust in their robotic partner. These results suggest that explanations, stating the reason(s) for a decision, combined with the transparency of the decision-making process, facilitate collaborative human–humanoid interactions.
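The study's own generation code is not reproduced here, so the following is only a rough, assumed sketch of how a template-based mechanism of this kind might turn a game decision plus its reasons into a human-readable sentence; the function name, reason codes, and wording are all hypothetical.

```python
def explain_move(tile, reasons):
    """Render a chosen domino tile and the (assumed) reasons behind it as one sentence."""
    reason_text = {
        "only_legal": "it was my only legal move",
        "blocks_opponents": "it limits the numbers our opponents can play",
        "keeps_strong_suit": f"it keeps more {tile[0]}s in my hand for later",
        "high_pip_dump": "it gets rid of a high-scoring tile early",
    }
    parts = [reason_text[r] for r in reasons if r in reason_text]
    return f"I played the {tile[0]}-{tile[1]} because " + " and ".join(parts) + "."

print(explain_move((6, 4), ["blocks_opponents", "high_pip_dump"]))
# I played the 6-4 because it limits the numbers our opponents can play
# and it gets rid of a high-scoring tile early.
```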


2021 ◽  
Vol 9 ◽  
pp. 790-806
Author(s):  
Matthew Lamm ◽  
Jennimaria Palomaki ◽  
Chris Alberti ◽  
Daniel Andor ◽  
Eunsol Choi ◽  
...  

A question answering system that in addition to providing an answer provides an explanation of the reasoning that leads to that answer has potential advantages in terms of debuggability, extensibility, and trust. To this end, we propose QED, a linguistically informed, extensible framework for explanations in question answering. A QED explanation specifies the relationship between a question and answer according to formal semantic notions such as referential equality, sentencehood, and entailment. We describe and publicly release an expert-annotated dataset of QED explanations built upon a subset of the Google Natural Questions dataset, and report baseline models on two tasks: post-hoc explanation generation given an answer, and joint question answering and explanation generation. In the joint setting, a promising result suggests that training on a relatively small amount of QED data can improve question answering. In addition to describing the formal, language-theoretic motivations for the QED approach, we describe a large user study showing that the presence of QED explanations significantly improves the ability of untrained raters to spot errors made by a strong neural QA baseline.
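The released dataset defines the actual annotation schema; the dataclasses below are only an assumed approximation of what a QED-style explanation record might hold (a selected sentence that entails the answer, referential-equality links between question and passage mentions, and the answer span), with all field names invented for illustration.

```python
from dataclasses import dataclass
from typing import List, Tuple

Span = Tuple[int, int]   # character offsets into the question or passage

@dataclass
class ReferentialEquality:
    question_span: Span      # mention in the question
    passage_span: Span       # coreferent mention in the selected sentence

@dataclass
class QEDExplanation:
    selected_sentence: Span                           # passage sentence taken to entail the answer
    referential_equalities: List[ReferentialEquality] # question <-> passage mention links
    answer_span: Span                                 # the answer within the selected sentence

# Hypothetical record with made-up offsets, just to show the shape of the structure.
example = QEDExplanation(
    selected_sentence=(120, 210),
    referential_equalities=[ReferentialEquality((4, 19), (125, 140))],
    answer_span=(180, 195),
)
print(example)
```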

