Changing Minds ― Epistemic Interventions in Causal Reasoning

2021 ◽  
Author(s):  
Lara Kirfel ◽  
David Lagnado

Did Tom's use of nuts in the dish cause Billy's allergic reaction? According to counterfactual theories of causation, an agent is judged a cause to the extent that their action made a difference to the outcome (Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2020; Gerstenberg, Halpern, & Tenenbaum, 2015; Halpern, 2016; Hitchcock & Knobe, 2009). In this paper, we argue for the integration of epistemic states into current counterfactual accounts of causation. In the case of ignorant causal agents, we demonstrate that people's counterfactual reasoning primarily targets the agent's epistemic state (what the agent doesn't know) and their epistemic actions (what they could have done to know) rather than the agent's actual causal action. In four experiments, we show that people's causal judgments, as well as their reasoning about alternatives, are sensitive to the epistemic conditions of a causal agent: knowledge vs. ignorance (Experiment 1), self-caused vs. externally caused ignorance (Experiment 2), the number of epistemic actions (Experiment 3), and the epistemic context (Experiment 4). We see two advantages in integrating epistemic states into causal models and counterfactual frameworks. First, assuming an intervention on indirect, epistemic causes might allow us to explain why people attribute decreased causality to ignorant vs. knowing causal agents. Second, causal agents' epistemic states pick out those factors that can be controlled or manipulated in order to achieve desirable future outcomes, reflecting the forward-looking dimension of causality. We discuss our findings in the broader context of moral and causal cognition.
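
As a rough illustration of intervening on an epistemic cause, the sketch below encodes a chain from knowledge to action to outcome in a few lines of Python; the variable names and the structure are assumptions made for exposition, not the authors' model.

```python
# Minimal structural-model sketch (illustrative only; `knows`, `uses_nuts`
# and `reaction` are hypothetical names, not the paper's model) of treating
# an agent's epistemic state as an indirect cause, so that counterfactuals
# can target what the agent knew rather than only what they did.

def uses_nuts(knows):
    # A knowing agent avoids the allergen; an ignorant agent uses it.
    return not knows

def reaction(uses_nuts_value):
    # The outcome depends only on the actual causal action.
    return uses_nuts_value

def simulate(knows, do=None):
    """Evaluate the model, optionally intervening on a variable (a do-operation)."""
    do = do or {}
    k = do.get("knows", knows)
    action = do.get("uses_nuts", uses_nuts(k))
    return reaction(action)

actual = simulate(knows=False)                               # True: Billy reacts
had_tom_known = simulate(knows=False, do={"knows": True})    # False: epistemic counterfactual
had_no_nuts = simulate(knows=False, do={"uses_nuts": False}) # False: action counterfactual
print(actual, had_tom_known, had_no_nuts)
```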

2020 ◽  
Author(s):  
Lara Kirfel ◽  
David Lagnado

A prominent finding in causal cognition research is people's tendency to attribute increased causality to atypical actions. If two agents jointly cause an outcome ("conjunctive causation"), but differ in how frequently they have performed the causal action before, people judge the atypically acting agent to have caused the outcome to a greater extent than the normally acting agent. In this paper, we argue that it is the epistemic state of an abnormally acting agent, rather than the abnormality of their action, that drives people's causal judgments. Given the predictability of the normally acting agent's behaviour, the abnormal agent is in a better position to foresee the consequences of their action. We put this hypothesis to the test in four experiments. In Experiment 1, we show that people judge the atypical agent as more causal than the normally acting agent, but also perceive an epistemic advantage of the abnormal agent. In Experiment 2, we find that people do not judge a causal difference if there is no epistemic asymmetry between the agents. In Experiment 3, we replicate these findings for a scenario in which the abnormal agent's epistemic advantage generalises to a novel context. In Experiment 4, we extend these findings to mental states more broadly construed. We develop a Bayesian Network model that predicts the degree of mental states based on action normality and epistemic states, and find that people infer mental states like desires and intentions to a greater extent from abnormal behaviour. We discuss these results in light of current theories and research on people's preference for atypical causes.
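
To make the direction of inference concrete, the following toy sketch inverts a made-up likelihood of abnormal action given foresight via Bayes' rule; the numbers and variable names are invented for illustration and are not the paper's fitted Bayesian Network model.

```python
# Toy sketch of inferring an agent's epistemic state from the normality of their
# action (illustrative structure and made-up numbers, not the paper's model).

# Prior over whether the agent could foresee the outcome.
P_FORESIGHT = 0.5

# Assumed likelihoods: an agent who foresees consequences is taken to be more
# likely to deviate from the usual course of action knowingly.
P_ABNORMAL_GIVEN_FORESIGHT = 0.7
P_ABNORMAL_GIVEN_NO_FORESIGHT = 0.3

def posterior_foresight(acted_abnormally):
    """P(foresight | observed action normality) via Bayes' rule."""
    if acted_abnormally:
        like_f, like_nf = P_ABNORMAL_GIVEN_FORESIGHT, P_ABNORMAL_GIVEN_NO_FORESIGHT
    else:
        like_f, like_nf = 1 - P_ABNORMAL_GIVEN_FORESIGHT, 1 - P_ABNORMAL_GIVEN_NO_FORESIGHT
    numerator = like_f * P_FORESIGHT
    return numerator / (numerator + like_nf * (1 - P_FORESIGHT))

print(posterior_foresight(True))   # 0.7 -> abnormal action raises inferred foresight
print(posterior_foresight(False))  # 0.3 -> normal action lowers it
```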


2019 ◽  
Vol 30 (3) ◽  
pp. 418-430
Author(s):  
Marko-Luka Zubcic

Which epistemic value is the standard according to which we ought to compare, assess and design institutional arrangements in terms of their epistemic properties? Two main options are agent development (in terms of individual epistemic virtues or capabilities) and attainment of truth. The options are presented through two authoritative contemporary accounts: agent development via Robert Talisse's understanding in Democracy and Moral Conflict (2009), and attainment of truth via David Estlund's treatment, most prominently in Democratic Authority: A Philosophical Framework (2008). Both options are shown to be unsatisfactory because they are subject to a problematic risk of suboptimal epistemic state lock-in. The ability of the social epistemic system to revise suboptimal epistemic states is argued to be the best option for a comparative standard in institutional epistemology.


2017 ◽  
Vol 58 ◽  
pp. 731-775
Author(s):  
Kim Bauters ◽  
Kevin McAreavey ◽  
Weiru Liu ◽  
Jun Hong ◽  
Lluís Godo ◽  
...  

The Belief-Desire-Intention (BDI) architecture is a practical approach for modelling large-scale intelligent systems. In the BDI setting, a complex system is represented as a network of interacting agents, or components, each one modelled based on its beliefs, desires and intentions. However, current BDI implementations are not well suited for modelling more realistic intelligent systems which operate in environments pervaded by different types of uncertainty. Furthermore, existing approaches for dealing with uncertainty typically do not offer syntactical or tractable ways of reasoning about uncertainty. This complicates their integration with BDI implementations, which rely heavily on fast and reactive decisions. In this paper, we advance the state of the art in handling different types of uncertainty in BDI agents. The contributions of this paper are, first, a new way of modelling the beliefs of an agent as a set of epistemic states. Each epistemic state can use a distinct underlying uncertainty theory and revision strategy, and commensurability between epistemic states is achieved through a stratification approach. Second, we present a novel syntactic approach to revising beliefs given unreliable input. We prove that this syntactic approach agrees with the semantic definition, and we identify expressive fragments that are particularly useful for resource-bounded agents. Third, we introduce full operational semantics that extend CAN, a popular semantics for BDI, to establish how reasoning about uncertainty can be tightly integrated into the BDI framework. Fourth, we provide comprehensive experimental results to highlight the usefulness and feasibility of our approach, and explain how the generic epistemic state can be instantiated into various representations.
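
The sketch below illustrates, in a very reduced form, what a belief base made of heterogeneous epistemic states might look like; the class names, the mixing-based revision and the ranking-based revision are assumptions chosen for exposition and omit the paper's stratification, commensurability results and CAN semantics.

```python
# Illustrative sketch of an agent whose beliefs are a set of epistemic states,
# each with its own uncertainty representation and revision strategy.
# Simplified for intuition; not the paper's formal framework.

from abc import ABC, abstractmethod

class EpistemicState(ABC):
    @abstractmethod
    def revise(self, proposition, weight): ...
    @abstractmethod
    def degree(self, proposition): ...

class ProbabilisticState(EpistemicState):
    """Beliefs as probabilities; revision mixes the old value toward certainty."""
    def __init__(self):
        self.p = {}
    def revise(self, proposition, weight):
        old = self.p.get(proposition, 0.5)
        self.p[proposition] = weight * 1.0 + (1 - weight) * old
    def degree(self, proposition):
        return self.p.get(proposition, 0.5)

class RankingState(EpistemicState):
    """Beliefs as ranks (lower = more plausible); revision caps the input's rank."""
    def __init__(self):
        self.rank = {}
    def revise(self, proposition, weight):
        # Higher input weight -> lower (better) rank for the proposition.
        self.rank[proposition] = min(self.rank.get(proposition, 10), int((1 - weight) * 10))
    def degree(self, proposition):
        return 1 - self.rank.get(proposition, 10) / 10

# A global belief base: one epistemic state per source, each revised in its own way.
beliefs = {"vision": ProbabilisticState(), "rumour": RankingState()}
beliefs["vision"].revise("door_open", 0.9)
beliefs["rumour"].revise("door_open", 0.4)
print(beliefs["vision"].degree("door_open"), beliefs["rumour"].degree("door_open"))
```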


Author(s):  
Barbara A. Spellman ◽  
Elizabeth A. Gilbert ◽  
Elizabeth R. Tenney ◽  
Christopher R. Holland

2021 ◽  
Author(s):  
Ariel Zylberberg

From cooking a meal to finding a route to a destination, many real-life decisions can be decomposed into a hierarchy of sub-decisions. In a hierarchy, choosing which decision to think about requires planning over a potentially vast space of possible decision sequences. To gain insight into how people decide what to decide on, we studied a novel task that combines perceptual decision making, active sensing, and hierarchical and counterfactual reasoning. Human participants had to find a target hidden at the lowest level of a decision tree. They could solicit information from the different nodes of the decision tree to gather noisy evidence about the target's location. Feedback was given only after errors at the leaf nodes and provided ambiguous evidence about the cause of the error. Despite the complexity of the task (with $10^7$ latent states), participants were able to plan efficiently. A computational model of this process identified a small number of heuristics of low computational complexity that accounted for human behavior. These heuristics include making categorical decisions at the branching points of the decision tree rather than carrying forward entire probability distributions, discarding sensory evidence deemed unreliable to make a choice, and using choice confidence to infer the cause of the error after an initial plan failed. Plans based on probabilistic inference or myopic sampling norms could not capture participants' behavior. Our results show that it is possible to identify hallmarks of heuristic planning with sensing in human behavior and that the use of tasks of intermediate complexity helps identify the rules underlying the human ability to reason over decision hierarchies.
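
The following sketch illustrates only the first of these heuristics, categorical commitment at branch points; the reliability parameter and the tree encoding are invented for illustration and do not reproduce the paper's model.

```python
import random

# Toy sketch of the categorical-commitment heuristic mentioned above: at each
# branch point the agent samples noisy evidence, commits to the cued branch and
# discards the rest of the probability distribution instead of propagating it.

random.seed(0)

def noisy_cue(correct_is_left, reliability=0.8):
    """Return a possibly wrong cue about whether the correct branch is the left one."""
    return correct_is_left if random.random() < reliability else not correct_is_left

def descend(target_path):
    """Follow categorical decisions down the tree; return the chosen path."""
    path = []
    for correct_is_left in target_path:
        path.append(noisy_cue(correct_is_left))   # commit, forget the uncertainty
    return path

target = [True, False, True]    # hidden target: left, right, left
choice = descend(target)
print(choice, "hit" if choice == target else "miss: use confidence to decide what to revise")
```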


Author(s):  
David Danks

Causal beliefs and reasoning are deeply embedded in many parts of our cognition. We are clearly ‘causal cognizers’, as we easily and automatically (try to) learn the causal structure of the world, use causal knowledge to make decisions and predictions, generate explanations using our beliefs about the causal structure of the world, and use causal knowledge in many other ways. Because causal cognition is so ubiquitous, psychological research into it is itself an enormous topic, and literally hundreds of people have devoted entire careers to the study of it. Causal cognition can be divided into two rough categories: causal learning and causal reasoning. The former encompasses the processes by which we learn about causal relations in the world at both the type and token levels; the latter refers to the ways in which we use those causal beliefs to make further inferences, decisions, predictions, and so on.


2019 ◽  
pp. 62-83
Author(s):  
Anna-Maria A. Eder ◽  
Peter Brössel

In everyday life and in science we acquire evidence of evidence and, based on this new evidence, we often change our epistemic states. An assumption underlying such practice is that the following EEE Slogan is correct: 'evidence of evidence is evidence'. We suggest that evidence of evidence is best understood as higher-order evidence about the epistemic state of agents. In order to model evidence of evidence, the chapter introduces a new powerful framework for modelling epistemic states, Dyadic Bayesianism. Based on this framework, it then discusses characterizations of evidence of evidence and argues for one of them. Finally, the chapter shows that whether the EEE Slogan holds depends on the specific kind of evidence of evidence.
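
For orientation, one common probabilistic first pass at the slogan can be written as below; this is a simplification that does not capture the chapter's Dyadic Bayesianism, and the notation (P_A, P_B, E_B) is introduced here only for illustration.

```latex
% A first-pass probabilistic reading of the EEE Slogan (a simplification;
% it omits the chapter's higher-order, Dyadic-Bayesian machinery).
% Let E_B be the higher-order proposition "agent B possesses evidence e for H",
% i.e. B's evidence satisfies P_B(H \mid e) > P_B(H). The slogan then claims
% that for another agent A,
\[
  P_A\!\left(H \mid E_B\right) \;>\; P_A(H),
\]
% i.e. learning that someone else has evidence for H should itself raise A's
% credence in H; whether this holds depends on the kind of evidence of evidence.
```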


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 843
Author(s):  
Peter Gärdenfors

The aim of this article is to provide an evolutionarily grounded explanation of central aspects of the structure of language. First, it gives an account of the evolution of human causal reasoning. A comparison between humans and non-human primates suggests that human causal cognition is based on reasoning about the underlying forces that are involved in events, while other primates hardly understand external forces. This is illustrated by an analysis of the causal cognition required for early hominin tool use. Second, this force-based thinking about causation is used to motivate a model of human event cognition. A mental representation of an event contains two vectors, representing a cause and a result, as well as entities such as agents, patients, instruments and locations. The fundamental connection between event representations and language is that declarative sentences express events (or states). The event structure also explains why sentences are constituted of noun phrases and verb phrases. Finally, the components of the event representation show up in language, where causes and effects are expressed by verbs, agents and patients by nouns (modified by adjectives), locations by prepositions, etc. Thus, the evolution of the complexity of mental event representations also provides insight into the evolution of the structure of language.
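
A minimal sketch of how such an event representation might be laid out, and how its components line up with parts of speech, is given below; the field names and the example sentence are assumptions for illustration, not the article's formal model.

```python
from dataclasses import dataclass

# Toy sketch of the two-vector event representation described above and of how
# its components could map onto parts of speech.

@dataclass
class Event:
    agent: str               # -> noun phrase (subject)
    patient: str             # -> noun phrase (object)
    force: tuple             # cause vector exerted by the agent    -> verb
    result: tuple            # result vector: change in the patient -> verb
    instrument: str = ""     # -> prepositional phrase ("with ...")
    location: str = ""       # -> prepositional phrase ("in ...")

def to_sentence(event, verb):
    """Express the event as a declarative sentence."""
    parts = [event.agent, verb, event.patient]
    if event.instrument:
        parts.append(f"with {event.instrument}")
    if event.location:
        parts.append(f"in {event.location}")
    return " ".join(parts) + "."

push = Event(agent="the child", patient="the door",
             force=(1.0, 0.0), result=(0.3, 0.0), location="the hallway")
print(to_sentence(push, "pushes"))   # the child pushes the door in the hallway.
```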


2019 ◽  
Author(s):  
Tobias Gerstenberg ◽  
Thomas Icard

When several causes contribute to an outcome, people often single out one as "the" cause. What explains this selection? Previous work has argued that people select abnormal events as causes, though recent work has shown that sometimes normal events are preferred over abnormal ones. Existing studies have relied on vignettes that commonly feature agents committing immoral acts. An important challenge to the thesis that norms permeate causal reasoning is that people's responses may merely reflect pragmatic or social reasoning rather than arising from causal cognition per se. We tested this hypothesis by asking whether the previously observed patterns of causal selection emerge in tasks that recruit participants' causal reasoning about physical systems. Strikingly, we found that the same patterns observed in vignette studies with intentional agents arise in visual animations of physical interactions. Our results demonstrate how deeply normative expectations affect causal cognition.


2021 ◽  
Vol 14 ◽  
pp. 329-341
Author(s):  
Fernando Tohmé ◽  
Gianluca Caterina ◽  
Rocco Gangle ◽  
...  

We present here a novel approach to the analysis of common knowledge based on category theory. In particular, we model the global epistemic state for a given set of agents through a hierarchy of beliefs represented by a presheaf construction. Then, by employing the properties of a categorical monad, we prove the existence of a state, obtained in an iterative fashion, in which all agents acquire common knowledge of some underlying statement. In order to guarantee the existence of a fixed point under suitable conditions, we make use of the properties entailed by Sergeyev's numeral system called grossone, which allows finer control over the relevant structure of the infinitely nested epistemic states.
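
For intuition only, the sketch below computes common knowledge as the fixed point of iterating an "everybody knows" operator over a small finite set of worlds; the agents, partitions and statement are invented, and none of the paper's categorical machinery (presheaves, the monad, grossone) appears here.

```python
# Finite-state sketch of common knowledge as the fixed point of iterating the
# "everybody knows" operator. Worlds and partitions are invented for illustration.

worlds = {1, 2, 3, 4}
partitions = {
    "alice": [{1, 2}, {3}, {4}],   # cells = worlds the agent cannot distinguish
    "bob":   [{1}, {2}, {3, 4}],
}

def knows(agent, event):
    """Worlds at which `agent` knows `event`: their cell is contained in the event."""
    return {w for cell in partitions[agent] if cell <= event for w in cell}

def everybody_knows(event):
    result = event
    for agent in partitions:
        result = result & knows(agent, event)
    return result

def common_knowledge(event):
    """Iterate 'everybody knows' until a fixed point is reached."""
    current = event
    while True:
        nxt = everybody_knows(current)
        if nxt == current:
            return current
        current = nxt

statement = {1, 2, 3}                # the underlying statement holds at worlds 1-3
print(common_knowledge(statement))   # {1, 2}: common knowledge at worlds 1 and 2
```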

