Changing Beliefs about Domain Dynamics in the Situation Calculus

Author(s): Toryn Q. Klassen, Sheila A. McIlraith, Hector J. Levesque

Agents change their beliefs about the plausibility of various aspects of domain dynamics -- effects of physical actions, results of sensing, and action preconditions -- as a consequence of their interactions with the world. In this paper we propose a way to conveniently represent domain dynamics in the situation calculus to support such belief change. Furthermore, we suggest patterns to follow when writing the axioms that describe the effects of actions, and prove how these patterns can control the extent to which observations change the agent's beliefs about action effects. We also discuss the relation of our work to the AGM postulates for belief revision. Finally, we show how beliefs about domain dynamics can be incorporated into a form of regression rewriting to support reasoning.

2015, Vol 53, pp. 779-824
Author(s): Aaron Hunter, James Delgrande

We consider the iterated belief change that occurs following an alternating sequence of actions and observations. At each instant, an agent has beliefs about the actions that have occurred as well as beliefs about the resulting state of the world. We represent such problems by a sequence of ranking functions, so an agent assigns a quantitative plausibility value to every action and every state at each point in time. The resulting formalism is able to represent fallible belief, erroneous perception, exogenous actions, and failed actions. We illustrate that our framework is a generalization of several existing approaches to belief change, and it appropriately captures the non-elementary interaction between belief update and belief revision.
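The ranking functions described above can be made concrete with a small sketch. The encoding below is a hypothetical illustration (Spohn-style conditioning over a finite set of states), not the authors' formalism: each state gets a non-negative plausibility rank, 0 being most plausible, and revising by an observation shifts the observed states so their minimum rank becomes 0 while unobserved states become implausible.

```python
# Minimal sketch of a ranking function: states mapped to plausibility
# ranks (0 = most plausible). Revision conditions the ranking on an
# observation, a set of states deemed possible.

INF = float("inf")

def beliefs(rank):
    """States the agent believes possible: those of minimal rank."""
    m = min(rank.values())
    return {s for s, r in rank.items() if r == m}

def revise(rank, observation):
    """Condition the ranking on an observation (a set of states):
    shift observed states so their minimum rank is 0; rule out the rest."""
    shift = min(rank[s] for s in observation if s in rank)
    return {s: (rank[s] - shift if s in observation else INF)
            for s in rank}

# Example: the agent initially finds 'dry' most plausible.
rank = {"dry": 0, "wet": 1}
assert beliefs(rank) == {"dry"}
rank = revise(rank, {"wet"})   # observe that the floor is wet
assert beliefs(rank) == {"wet"}
```

A sequence of such conditioning steps, one per action or observation, gives the alternating-sequence picture the abstract describes, with quantitative plausibility available at every instant.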


Author(s): Laurent Perrussel, Jean-Marc Thévenin

This paper focuses on the features of belief change in a multi-agent context where agents hold both beliefs and disbeliefs. Disbeliefs represent explicit ignorance and are useful to prevent agents from drawing conclusions merely out of ignorance. Agents receive messages holding information from other agents and change their belief states accordingly. An agent may refuse to adopt incoming information if it prefers its own (dis)beliefs. To this end, each agent maintains a preference relation over its own beliefs and disbeliefs in order to decide whether to accept or reject incoming information whenever inconsistencies occur. This preference relation may be built from several criteria, such as the reliability of the sender of statements or temporal aspects. This process leads to non-prioritized belief revision. In this context we first present the * and − operators, which allow an agent to revise (respectively, contract) its belief state in a non-prioritized way when it receives an incoming belief (respectively, disbelief). We show that these operators behave properly. Building on this, we then illustrate how the receiver and the sender may argue when an incoming (dis)belief is refused. We describe pieces of dialogue in which (i) the sender tries to convince the receiver by sending arguments in favor of the original (dis)belief and (ii) the receiver justifies its refusal by sending arguments against the original (dis)belief. We show that the notion of acceptability of these arguments can be represented in a simple way using the non-prioritized change operators * and −. The advantage of argumentation dialogues is twofold. First, whenever arguments are acceptable, the sender or the receiver reconsiders its belief state; the main result is an improvement of the reconsidered belief state. Second, the sender may not be aware of some sets of rules which act as constraints on reaching a specific conclusion, and it may discover them through argumentation dialogues.
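The refusal mechanism described in this abstract can be sketched in a few lines. The sketch below is hypothetical and far simpler than the paper's * and − operators: the agent holds beliefs and disbeliefs, each carrying a preference weight, and rejects incoming information whenever an at-least-as-preferred contradicting item is already held.

```python
# Minimal sketch of non-prioritized change with beliefs and disbeliefs.
# Weights stand in for the paper's preference relation; names and the
# numeric encoding are illustrative assumptions, not the paper's own.

class Agent:
    def __init__(self):
        self.beliefs = {}     # proposition -> preference weight
        self.disbeliefs = {}  # proposition -> preference weight

    def receive_belief(self, prop, weight):
        """Adopt `prop` as a belief unless a preferred disbelief blocks it."""
        if self.disbeliefs.get(prop, -1) >= weight:
            return False                     # refused: the sender may argue
        self.disbeliefs.pop(prop, None)      # contract the old disbelief
        self.beliefs[prop] = weight          # revise by the new belief
        return True

agent = Agent()
agent.disbeliefs["rains"] = 5                # explicit ignorance about rain
assert agent.receive_belief("rains", 3) is False  # weaker info is refused
assert agent.receive_belief("rains", 7) is True   # stronger info is adopted
assert "rains" in agent.beliefs and "rains" not in agent.disbeliefs
```

A refused message is exactly where the argumentation dialogues begin: the sender can try to raise the standing of its information (here, its weight) by supplying acceptable arguments.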


1999, Vol 10, pp. 117-167
Author(s): N. Friedman, J. Y. Halpern

The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. In a companion paper (Friedman & Halpern, 1997), we introduce a new framework to model belief change. This framework combines temporal and epistemic modalities with a notion of plausibility, allowing us to examine the change of beliefs over time. In this paper, we show how belief revision and belief update can be captured in our framework. This allows us to compare the assumptions made by each method, and to better understand the principles underlying them. In particular, it shows that Katsuno and Mendelzon's notion of belief update (Katsuno & Mendelzon, 1991a) depends on several strong assumptions that may limit its applicability in artificial intelligence. Finally, our analysis allows us to identify a notion of minimal change that underlies a broad range of belief change operations, including revision and update.


2003, Vol 19, pp. 279-314
Author(s): F. Lin

We describe a system for specifying the effects of actions. Unlike those commonly used in AI planning, our system uses an action description language that allows one to specify the effects of actions using domain rules, which are state constraints that can entail new action effects from old ones. Declaratively, an action domain in our language corresponds to a nonmonotonic causal theory in the situation calculus. Procedurally, such an action domain is compiled into a set of logical theories, one for each action in the domain, from which fully instantiated successor state-like axioms and STRIPS-like systems are then generated. We expect the system to be a useful tool for knowledge engineers writing action specifications for classical AI planning systems, GOLOG systems, and other systems where formal specifications of actions are needed.


2020, Vol 30 (7), pp. 1357-1376
Author(s): Theofanis Aravanis

Rational belief-change policies are encoded in the so-called AGM revision functions, defined in the prominent work of Alchourrón, Gärdenfors and Makinson. The present article studies an interesting class of well-behaved AGM revision functions, called herein uniform-revision operators (or UR operators, for short). Each UR operator is uniquely defined by means of a single total preorder over all possible worlds, a fact that in turn entails a significantly lower representational cost, relative to an arbitrary AGM revision function, as well as an embedded solution to the iterated-revision problem at no extra representational cost. Herein, we first demonstrate how weaker, more expressive (yet more representationally expensive) types of uniform revision can be defined. Furthermore, we prove that UR operators essentially generalize a significant type of belief change, namely, parametrized-difference revision. Lastly, we show that they are (to some extent) relevance-sensitive, and that they respect the so-called principle of kinetic consistency.
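The defining feature of uniform revision, a single fixed total preorder over worlds driving every revision step, can be sketched concretely. The encoding below is a hypothetical illustration, not Aravanis's definitions: worlds are truth-value tuples, the preorder is encoded as a rank, and revising by a sentence selects its minimal-rank worlds, independently of the current belief set.

```python
# Minimal sketch of uniform revision: one fixed total preorder over all
# worlds (encoded as a rank) determines every revision. The ranking rule
# below is an illustrative assumption.

from itertools import product

ATOMS = ("p", "q")
WORLDS = list(product([True, False], repeat=len(ATOMS)))

def rank(world):
    """The single fixed preorder: lower = more plausible. For
    illustration, worlds with fewer false atoms are more plausible."""
    return world.count(False)

def revise(sentence):
    """Return the minimal worlds (under the fixed preorder) satisfying
    `sentence`, a predicate on worlds. The current belief set plays no
    role here, which is what makes the operator 'uniform'."""
    sat = [w for w in WORLDS if sentence(w)]
    best = min(rank(w) for w in sat)
    return [w for w in sat if rank(w) == best]

# Revising by ¬p yields the single most plausible ¬p-world: p false, q true.
assert revise(lambda w: not w[0]) == [(False, True)]
```

Because iterated revision just reapplies the same preorder, the iterated-revision problem is answered by the operator's definition itself, at no extra representational cost, which mirrors the abstract's central claim.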


2017, Vol 251, pp. 62-97
Author(s): Christoph Schwering, Gerhard Lakemeyer, Maurice Pagnucco

2016, Vol 20 (4), pp. 399-411
Author(s): Matthew Tyler Boden, Howard Berenbaum, James J. Gross

Why do people believe what they do? Scholars and laypeople alike tend to answer this question by focusing on the representational functions of beliefs (i.e., representing the world accurately). However, a growing body of theory and research indicates that beliefs can also serve important hedonic functions (i.e., decreasing negative and/or increasing positive emotional states). In this article, we describe: (a) the features of belief; (b) the functions served by beliefs, with a focus on the hedonic function; (c) an integrative framework highlighting the hedonic function and contrasting it with the representational function; and (d) the implications of our framework, along with related future research directions, for individual differences in belief, belief change, and the ways in which beliefs contribute to adaptive versus maladaptive psychological functioning.


Studia Logica, 2021
Author(s): Sena Bozdag

I propose a novel hyperintensional semantics for belief revision and a corresponding system of dynamic doxastic logic. The main goal of the framework is to reduce some of the idealisations that are common in the belief revision literature and in dynamic epistemic logic. The models of the new framework are primarily based on potentially incomplete or inconsistent collections of information, represented by situations in a situation space. I propose that by shifting the representational focus of doxastic models from belief sets to collections of information, and by defining changes of beliefs as artifacts of changes of information, we can achieve a more realistic account of belief representation and belief change. The proposed dynamic operation suggests a non-classical way of changing beliefs: belief revision occurs in non-explosive environments which allow for a non-monotonic and hyperintensional belief dynamics. A logic that is sound with respect to the semantics is also provided.

