The Influence of Causal Information on Treatment Choice

2007
Author(s): Jennelle E. Yopchick, Nancy S. Kim

2020, Vol 17 (4), pp. 405-413
Author(s): Laura D. Wiedeman, Susan M. Hannan, Kelly P. Maieritsch, Cendrine Robinson, Gregory Bartoszek

2018, Vol 0 (3), pp. 38-42
Author(s): O. M. Kovalenko, A. O. Kovalenko

Explanations are very important to us in many contexts: in science, mathematics, philosophy, and also in everyday and juridical contexts. But what is an explanation? In the philosophical study of explanation, there is a long-standing, influential tradition that links explanation intimately to causation: we often explain by providing accurate information about the causes of the phenomenon to be explained. Such causal accounts have been the received view of the nature of explanation, particularly in philosophy of science, since the 1980s. However, philosophers have recently begun to break with this causal tradition by shifting their focus to kinds of explanation that do not turn on causal information. The increasing recognition of the importance of such non-causal explanations in the sciences and elsewhere raises pressing questions for philosophers of explanation. What is the nature of non-causal explanations, and which theory best captures it? How do non-causal explanations relate to causal ones? How are non-causal explanations in the sciences related to those in mathematics and metaphysics? This volume of new essays explores answers to these and other questions at the heart of contemporary philosophy of explanation. The essays address these questions from a variety of perspectives, including general accounts of non-causal and causal explanation, as well as a wide range of detailed case studies of non-causal explanations from the sciences, mathematics, and metaphysics.


2021, Vol 10 (3), pp. 1-31
Author(s): Zhao Han, Daniel Giger, Jordan Allspaw, Michael S. Lee, Henny Admoni, ...

As autonomous robots continue to be deployed near people, robots need to be able to explain their actions. In this article, we focus on organizing and representing complex tasks in a way that makes them readily explainable. Many actions consist of sub-actions, each of which may have sub-actions of its own, and the robot must be able to represent these complex actions before it can explain them. To generate explanations for robot behavior, we propose using Behavior Trees (BTs), a powerful and rich tool for robot task specification and execution. However, for BTs to be used for robot explanations, their free-form, static structure must be adapted. In this work, we add structure to previously free-form BTs by framing them as a set of semantic sets {goal, subgoals, steps, actions}, and we build explanation generation algorithms that answer questions seeking causal information about robot behavior. We also make BTs less static with an algorithm that inserts a subgoal satisfying all dependencies. We evaluate our BTs for robot explanation generation in two domains: a kitting task to assemble a gearbox, and a taxi simulation. Code for the behavior trees (in XML) and all the algorithms is available at github.com/uml-robotics/robot-explanation-BTs.
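The abstract frames causal question answering as a traversal of a behavior tree whose nodes are labeled with the semantic sets {goal, subgoals, steps, actions}: a "Why did you do X?" question is answered by walking up from the action to its enclosing subgoal or goal. The following minimal Python sketch illustrates that idea only; the node structure and the explain_why helper are assumptions for illustration, not the authors' actual XML format or API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BTNode:
    """A behavior-tree node annotated with one of the semantic roles
    {goal, subgoal, step, action}. Illustrative only; not the paper's code."""
    label: str                                   # human-readable description
    role: str                                    # "goal", "subgoal", "step", or "action"
    children: List["BTNode"] = field(default_factory=list)
    parent: Optional["BTNode"] = None

    def add(self, child: "BTNode") -> "BTNode":
        child.parent = self
        self.children.append(child)
        return child

def explain_why(node: BTNode) -> str:
    """Answer a causal 'Why did you do X?' question by walking up the tree
    to the nearest enclosing subgoal or goal, i.e. the reason for the action."""
    ancestor = node.parent
    while ancestor is not None and ancestor.role not in ("subgoal", "goal"):
        ancestor = ancestor.parent
    if ancestor is None:
        return f"I did '{node.label}' as a top-level goal."
    return f"I did '{node.label}' in order to {ancestor.label}."

# Toy example loosely mirroring the paper's gearbox-kitting domain.
goal = BTNode("assemble the gearbox kit", "goal")
subgoal = goal.add(BTNode("collect all gears", "subgoal"))
step = subgoal.add(BTNode("pick up the small gear", "step"))
action = step.add(BTNode("close the gripper", "action"))

print(explain_why(action))
# -> I did 'close the gripper' in order to collect all gears.
```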

