(DIS)BELIEF CHANGE AND ARGUED FEED-BACK DIALOG

Author(s):  
LAURENT PERRUSSEL ◽  
JEAN-MARC THÉVENIN

This paper focuses on the features of belief change in a multi-agent context where agents consider both beliefs and disbeliefs. Disbeliefs represent explicit ignorance and are useful to prevent agents from drawing conclusions out of their ignorance. Agents receive messages holding information from other agents and change their belief state accordingly. An agent may refuse to adopt incoming information if it prefers its own (dis)beliefs. To this end, each agent maintains a preference relation over its own beliefs and disbeliefs, which it uses to decide whether to accept or reject incoming information whenever an inconsistency occurs. This preference relation may be built by considering several criteria, such as the reliability of the sender of statements or temporal aspects. This process leads to non-prioritized belief revision. In this context we first present the * and − operators, which allow an agent to revise, respectively contract, its belief state in a non-prioritized way when it receives an incoming belief, respectively disbelief. We show that these operators behave properly. Based on this, we then illustrate how the receiver and the sender may argue when the incoming (dis)belief is refused. We describe pieces of dialog where (i) the sender tries to convince the receiver by sending arguments in favor of the original (dis)belief and (ii) the receiver justifies its refusal by sending arguments against the original (dis)belief. We show that the notion of acceptability of these arguments can be represented in a simple way by using the non-prioritized change operators * and −. The advantage of argumentation dialogs is twofold. First, whenever arguments are acceptable, the sender or the receiver reconsiders its belief state; the main result is an improvement of the reconsidered belief state. Second, the sender may not be aware of some sets of rules which act as constraints for reaching a specific conclusion, and may discover them through argumentation dialogs.
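The flavour of the non-prioritized change described above can be sketched with a toy program: an agent keeps ranked beliefs and disbeliefs, and adopts an incoming statement only if it is preferred to every conflicting one. The Python classes, the integer ranks standing in for the preference relation, and the atom-level conflict test below are illustrative assumptions, not the * and − operators defined in the paper.

```python
# A minimal sketch of non-prioritized (dis)belief change driven by a
# preference relation. Class names and ranks are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Statement:
    atom: str        # propositional atom, e.g. "p"
    positive: bool   # True = belief that atom holds, False = disbelief (explicit ignorance)
    rank: int        # preference, e.g. sender reliability; higher = preferred

@dataclass
class Agent:
    state: set = field(default_factory=set)   # current beliefs and disbeliefs

    def conflicts(self, incoming):
        """Statements about the same atom with opposite polarity conflict."""
        return {s for s in self.state
                if s.atom == incoming.atom and s.positive != incoming.positive}

    def revise(self, incoming):
        """Adopt the incoming statement only if it is preferred to every
        conflicting statement; otherwise refuse it."""
        clash = self.conflicts(incoming)
        if any(s.rank >= incoming.rank for s in clash):
            return False                       # receiver keeps its own (dis)beliefs
        self.state -= clash                    # contract the defeated statements
        self.state.add(incoming)
        return True

agent = Agent({Statement("p", True, rank=3)})
print(agent.revise(Statement("p", False, rank=1)))   # False: incoming disbelief refused
print(agent.revise(Statement("p", False, rank=5)))   # True: preferred disbelief adopted
```

In this reading, a refusal (the `False` case) is exactly the situation in which the argued feedback dialog between sender and receiver would start.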

Author(s):  
Marlo Souza ◽  
Álvaro Moreira ◽  
Renata Vieira

AGM belief revision is one of the main paradigms in the study of belief change operations. In this context, belief bases (prioritised bases) have been widely used to specify the agent’s belief state, whether as a representation of the agent’s ‘explicit beliefs’ or as a computational model for her belief state. While the connection between iterated AGM-like operations and their encoding in dynamic epistemic logics has been studied before, few works have considered how well-known postulates from iterated belief revision theory can be characterised by means of belief bases and their counterparts in dynamic epistemic logic. This work investigates how priority graphs, a syntactic representation of preference relations deeply connected to prioritised bases, can be used to characterise belief change operators, focusing on well-known postulates of Iterated Belief Change. We provide syntactic representations of belief change operators in a dynamic context, as well as new negative results regarding the possibility of representing an iterated belief revision operation using transformations on priority graphs.
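To give a concrete picture of how a syntactic priority structure induces beliefs, here is a minimal sketch in which a prioritised base (a linearly ordered list of formulas, most important first) induces a plausibility order over propositional worlds, and prepending a formula acts as a syntactic change step. The lexicographic comparison and the formulas-as-predicates encoding are illustrative assumptions, not the priority-graph constructions studied in the paper.

```python
# A minimal sketch: a linearly ordered base of formulas induces a
# lexicographic plausibility order over worlds (frozensets of true atoms).
from itertools import combinations

def at_least_as_plausible(w, v, base):
    """The first formula (in priority order) that distinguishes the two
    worlds decides which one is more plausible."""
    for phi in base:
        sw, sv = phi(w), phi(v)
        if sw != sv:
            return sw
    return True   # indistinguishable by the base

def most_plausible(worlds, base):
    """The agent believes exactly what holds in all maximally plausible worlds."""
    return [w for w in worlds
            if all(at_least_as_plausible(w, v, base) for v in worlds)]

atoms = ["p", "q"]
worlds = [frozenset(c) for r in range(len(atoms) + 1) for c in combinations(atoms, r)]

p = lambda w: "p" in w
q = lambda w: "q" in w
not_p = lambda w: "p" not in w

base = [p, q]                  # prioritised base: p before q
print(most_plausible(worlds, base))   # the single world where both p and q hold

base = [not_p] + base          # prepend ¬p: it now outranks everything else
print(most_plausible(worlds, base))   # the single world where only q holds
```

Prepending a formula is one example of a purely syntactic transformation on the base; the paper asks which iterated-revision postulates such graph transformations can, and cannot, capture.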


2020 ◽  
Author(s):  
Elise Perrotin ◽  
Fernando R Velázquez-Quesada

Belief revision is concerned with belief change triggered by incoming information. Despite the variety of frameworks representing it, most revision policies share one crucial feature: incoming information outweighs current information and hence, in case of conflict, incoming information will prevail. However, if one is interested in representing the way actual humans revise their beliefs, one might not always want the agent to blindly believe everything it is told. This manuscript presents a semantic approach to non-prioritized belief revision. It uses plausibility models for depicting an agent’s beliefs, and model operations for displaying the way beliefs change. The first proposal, semantically-based screened revision, compares the current model with the one the revision would yield, accepting or rejecting the incoming information depending on whether the ‘differences’ between these models go beyond a given threshold. The second proposal, semantically-based gradual revision, turns the binary decision of acceptance or rejection into a more general setting in which a revision always occurs, with the threshold used rather to choose ‘the right revision’ for the given input and model.
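A toy rendering of the screened variant may help fix intuitions: compute the model a revision would yield, measure its distance from the current model, and accept it only if the distance stays within a threshold. The rank-based models, the "promote the incoming formula to rank 0" revision, and the count of reversed world pairs below are illustrative assumptions, not the paper's semantics.

```python
# A minimal sketch of screened revision over rank-based plausibility models.
# Worlds are frozensets of true atoms; lower rank means more plausible.

def revise(ranks, phi):
    """Candidate revision: worlds satisfying phi become most plausible (rank 0);
    the remaining worlds keep their relative order, shifted up by one."""
    return {w: (0 if phi(w) else r + 1) for w, r in ranks.items()}

def difference(old, new):
    """Count the world pairs whose strict plausibility order got reversed."""
    worlds = list(old)
    return sum(1 for w in worlds for v in worlds
               if old[w] < old[v] and new[w] > new[v])

def screened_revision(ranks, phi, threshold):
    candidate = revise(ranks, phi)
    return candidate if difference(ranks, candidate) <= threshold else ranks

ranks = {frozenset({"p"}): 0, frozenset({"p", "q"}): 1, frozenset(): 2}
phi = lambda w: "q" in w                             # incoming information: q

print(screened_revision(ranks, phi, threshold=3))    # accepted: the q-worlds are promoted
print(screened_revision(ranks, phi, threshold=0))    # rejected: the current model is kept
```

The gradual variant described in the abstract would instead use the threshold to pick among several candidate revisions rather than to accept or reject a single one.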


2010 ◽  
Vol 3 (2) ◽  
pp. 228-246 ◽  
Author(s):  
KRISTER SEGERBERG

The success of the AGM paradigm—the theory of belief change initiated by Alchourrón, Gärdenfors, and Makinson—is remarkable, as even a quick look at the literature it has generated will testify. But it is also remarkable, at least in hindsight, how limited was the original effort. For example, the theory concerns the beliefs of just one agent; all incoming information is accepted; belief change is uniquely determined by the new information; there is no provision for nested beliefs. And perhaps most surprising: there is no analysis of iterated change. In this paper it is that last restriction that is at issue. Our medium of study is dynamic doxastic logic (DDL). The particular contribution of the paper is detailed completeness proofs for three dynamic doxastic logics of iterated belief revision. The problem of extending the AGM paradigm to include iterated change has been discussed for years, but systematic discussions have appeared only recently (see Segerberg, 2007 and forthcoming, but also van Benthem, 2007; Rott, 2006; Zvesper, 2007).


1999 ◽  
Vol 10 ◽  
pp. 117-167 ◽  
Author(s):  
N. Friedman ◽  
J. Y. Halpern

The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. In a companion paper (Friedman & Halpern, 1997), we introduce a new framework to model belief change. This framework combines temporal and epistemic modalities with a notion of plausibility, allowing us to examine the change of beliefs over time. In this paper, we show how belief revision and belief update can be captured in our framework. This allows us to compare the assumptions made by each method, and to better understand the principles underlying them. In particular, it shows that Katsuno and Mendelzon's notion of belief update (Katsuno & Mendelzon, 1991a) depends on several strong assumptions that may limit its applicability in artificial intelligence. Finally, our analysis allows us to identify a notion of minimal change that underlies a broad range of belief change operations including revision and update.
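The revision/update contrast the paper analyses can be illustrated on propositional models: revision minimizes distance to the belief set as a whole, while update minimizes distance from each belief model separately. The Hamming (Dalal-style) distance and the tiny example below are illustrative assumptions, not the plausibility-based framework of the companion paper.

```python
# A minimal sketch contrasting belief revision and belief update on
# propositional models (worlds are frozensets of true atoms).

def hamming(w, v, atoms):
    return sum(1 for a in atoms if (a in w) != (a in v))

def revise(belief_models, new_models, atoms):
    """Revision: keep the new-information models globally closest to the belief set."""
    dist = {v: min(hamming(w, v, atoms) for w in belief_models) for v in new_models}
    best = min(dist.values())
    return {v for v, d in dist.items() if d == best}

def update(belief_models, new_models, atoms):
    """Update: for each old model separately, keep its closest new-information models."""
    result = set()
    for w in belief_models:
        best = min(hamming(w, v, atoms) for v in new_models)
        result |= {v for v in new_models if hamming(w, v, atoms) == best}
    return result

atoms = ["p", "q"]
belief = {frozenset(), frozenset({"p", "q"})}       # believe: p if and only if q
new = {frozenset({"p"}), frozenset({"p", "q"})}     # learn: p

print(revise(belief, new, atoms))   # only the p,q-world: closest to the belief set overall
print(update(belief, new, atoms))   # both new models survive: each old world is updated on its own
```

The divergence in the two outputs is the familiar symptom of the different assumptions behind the two operations that the paper makes explicit within a single framework.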


2016 ◽  
Vol 78 (3-4) ◽  
pp. 177-179 ◽  
Author(s):  
Jürgen Dix ◽  
Sven Ove Hansson ◽  
Gabriele Kern-Isberner ◽  
Guillermo R. Simari

Author(s):  
Xiao Huang ◽  
Biqing Fang ◽  
Hai Wan ◽  
Yongmei Liu

In recent years, multi-agent epistemic planning has received attention from both the dynamic logic and planning communities. Existing implementations of multi-agent epistemic planning are based on compilation into classical planning and suffer from various limitations, such as generating only linear plans, restriction to public actions, and inability to handle disjunctive beliefs. In this paper, we propose a general representation language for multi-agent epistemic planning where the initial KB, the goal, and the preconditions and effects of actions can be arbitrary multi-agent epistemic formulas, and the solution is an action tree branching on sensing results. To support efficient reasoning in the multi-agent KD45 logic, we make use of a normal form called alternative cover disjunctive formula (ACDF). We propose basic revision and update algorithms for ACDF formulas. We also handle static propositional common knowledge, which we call constraints. Based on our reasoning, revision, and update algorithms, and adapting the PrAO algorithm for contingent planning from the literature, we implemented a multi-agent epistemic planner called MAEP. Our experimental results show the viability of our approach.
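To illustrate the shape of the solutions described above, here is a minimal sketch of an action tree that branches on sensing results. The node classes, the executor, and the tiny domain are illustrative assumptions; they do not reproduce MAEP, ACDF formulas, or the PrAO adaptation.

```python
# A minimal sketch of an action tree branching on sensing results.
from dataclasses import dataclass
from typing import Optional, Dict

@dataclass
class PlanNode:
    action: str                                          # ontic or sensing action name
    branches: Optional[Dict[bool, "PlanNode"]] = None    # sensing result -> subplan
    next: Optional["PlanNode"] = None                    # successor for ontic actions

def execute(node, world, sense):
    """Walk the action tree: follow 'next' for ontic actions, and pick the
    branch matching the sensing result for sensing actions."""
    trace = []
    while node is not None:
        trace.append(node.action)
        if node.branches is not None:
            node = node.branches[sense(node.action, world)]
        else:
            node = node.next
    return trace

# Agent a senses whether the box is open, then either peeks directly or opens it first.
plan = PlanNode("sense_open(a)", branches={
    True:  PlanNode("peek(a)"),
    False: PlanNode("open(a)", next=PlanNode("peek(a)")),
})

world = {"open": False}
print(execute(plan, world, lambda act, w: w["open"]))
# ['sense_open(a)', 'open(a)', 'peek(a)']
```

A linear plan is the special case with no branching nodes, which is one of the limitations of compilation-based approaches that the tree-shaped solutions avoid.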

