Handling Sequences of Belief Change in a Multi-agent Context

Author(s):
Laurent Perrussel
Jean-Marc Thévenin

This paper focuses on the features of belief change in a multi-agent context where agents consider beliefs and disbeliefs. Disbeliefs represent explicit ignorance and are useful to prevent agents from drawing conclusions that rest only on their ignorance. Agents receive messages holding information from other agents and change their belief state accordingly. An agent may refuse to adopt incoming information if it prefers its own (dis)beliefs. To this end, each agent maintains a preference relation over its own beliefs and disbeliefs in order to decide whether it accepts or rejects incoming information whenever inconsistencies occur. This preference relation may be built from several criteria, such as the reliability of the sender of statements or temporal aspects. This process leads to non-prioritized belief revision. In this context we first present the * and − operators, which allow an agent to revise (respectively, contract) its belief state in a non-prioritized way when it receives an incoming belief (respectively, disbelief). We show that these operators behave properly. Based on this, we then illustrate how the receiver and the sender may argue when the incoming (dis)belief is refused. We describe pieces of dialog where (i) the sender tries to convince the receiver by sending arguments in favor of the original (dis)belief and (ii) the receiver justifies its refusal by sending arguments against the original (dis)belief. We show that the notion of acceptability of these arguments can be represented in a simple way by using the non-prioritized change operators * and −. The advantage of argumentation dialogs is twofold. First, whenever arguments are acceptable, the sender or the receiver reconsiders its belief state; the main result is an improvement of the reconsidered belief state. Second, the sender may not be aware of some sets of rules which act as constraints on reaching a specific conclusion, and can discover them through argumentation dialogs.
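As a rough, illustrative sketch of the non-prioritized acceptance test described above (not the paper's actual * and − operators), incoming information could be checked against a preference ordering as follows; the literal-based belief representation, the numeric preference scores, and all names are assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Statement:
    """A belief or disbelief about a propositional literal.

    disbelief=True marks explicit ignorance: the agent refuses to hold
    `literal`, rather than holding its negation.
    """
    literal: str
    negated: bool = False
    disbelief: bool = False
    preference: int = 0  # e.g. derived from the sender's reliability

def conflicts(a: Statement, b: Statement) -> bool:
    """Detect a clash between two statements about the same literal."""
    if a.literal != b.literal:
        return False
    if a.disbelief != b.disbelief:
        # a belief in l clashes with an explicit disbelief in the same l
        return a.negated == b.negated
    if not a.disbelief:
        return a.negated != b.negated  # p versus not-p
    return False  # two disbeliefs never clash

def non_prioritized_revise(state: set[Statement], incoming: Statement) -> set[Statement]:
    """Accept `incoming` only if every clashing statement already held is
    strictly less preferred; otherwise refuse it and keep the state."""
    clashing = {s for s in state if conflicts(s, incoming)}
    if any(s.preference >= incoming.preference for s in clashing):
        return state  # incoming information is refused
    return (state - clashing) | {incoming}

# A strongly preferred own belief blocks a weaker contradictory message.
state = {Statement("p", preference=5)}
state = non_prioritized_revise(state, Statement("p", negated=True, preference=2))
assert Statement("p", preference=5) in state  # the revision was refused
```

A contraction in the same spirit would remove the clashing statements without adding the incoming one, loosely mirroring the role a − operator plays for incoming disbeliefs.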


2016
Vol 78 (3-4)
pp. 177-179
Author(s):
Jürgen Dix
Sven Ove Hansson
Gabriele Kern-Isberner
Guillermo R. Simari

Author(s):  
Xiao Huang
Biqing Fang
Hai Wan
Yongmei Liu

In recent years, multi-agent epistemic planning has received attention from both the dynamic logic and planning communities. Existing implementations of multi-agent epistemic planning are based on compilation into classical planning and suffer from various limitations, such as generating only linear plans, restriction to public actions, and inability to handle disjunctive beliefs. In this paper, we propose a general representation language for multi-agent epistemic planning in which the initial KB, the goal, and the preconditions and effects of actions can be arbitrary multi-agent epistemic formulas, and the solution is an action tree branching on sensing results. To support efficient reasoning in the multi-agent KD45 logic, we make use of a normal form called alternative cover disjunctive formula (ACDF). We propose basic revision and update algorithms for ACDF formulas. We also handle static propositional common knowledge, which we call constraints. Based on our reasoning, revision, and update algorithms, and adapting the PrAO algorithm for contingent planning from the literature, we implemented a multi-agent epistemic planner called MAEP. Our experimental results show the viability of our approach.
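The solution shape mentioned above, an action tree branching on sensing results, can be pictured with the small sketch below; the class and function names are invented for illustration and are not taken from the MAEP implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ActionNode:
    """One node of a conditional plan: execute `action`, then follow the
    branch matching the observed sensing result (if any)."""
    action: str
    # Maps a sensing outcome (e.g. "on", "off") to the subsequent subtree;
    # an ontic (non-sensing) action has at most the single key None.
    branches: dict[str | None, "ActionNode | None"] = field(default_factory=dict)

def linearize(node: "ActionNode | None", observe) -> list[str]:
    """Walk the tree, asking `observe(action)` for the sensing outcome at
    each branching node, and return the linear run that results."""
    run = []
    while node is not None:
        run.append(node.action)
        if not node.branches:
            break
        outcome = observe(node.action) if len(node.branches) > 1 else None
        node = node.branches.get(outcome)
    return run

# Example: sense a switch, then act differently on each outcome.
plan = ActionNode("sense_switch", {
    "on":  ActionNode("open_door"),
    "off": ActionNode("flip_switch", {None: ActionNode("open_door")}),
})
print(linearize(plan, observe=lambda a: "off"))
# ['sense_switch', 'flip_switch', 'open_door']
```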


2011 ◽  
Vol 4 (4) ◽  
pp. 536-559 ◽  
Author(s):  
Barteld Kooi
Bryan Renne

We present Arrow Update Logic, a theory of epistemic access elimination that can be used to reason about multi-agent belief change. While the belief-changing “arrow updates” of Arrow Update Logic can be transformed into equivalent belief-changing “action models” from the popular Dynamic Epistemic Logic approach, we prove that arrow updates are sometimes exponentially more succinct than action models. Further, since many examples of belief change are naturally thought of from Arrow Update Logic’s perspective of eliminating access to epistemic possibilities, Arrow Update Logic is a valuable addition to the repertoire of logics of information change. In addition to proving basic results about Arrow Update Logic, we introduce a new notion of common knowledge that generalizes both ordinary common knowledge and the “relativized” common knowledge familiar from the Dynamic Epistemic Logic literature.
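To make the idea of access elimination concrete, the sketch below applies an update given as clauses (source condition, agent, target condition) to an explicitly listed set of arrows: an arrow survives only if some clause licenses it. Pre-evaluated world labels stand in for formula evaluation, a simplification made purely for this example.

```python
def arrow_update(arrows, holds, update):
    """arrows: {agent: {(w, v), ...}}  -- epistemic accessibility arrows
    holds:  {world: set of conditions true at that world}
    update: [(source_cond, agent, target_cond), ...]
    An agent's arrow (w, v) survives iff some clause for that agent has
    its source condition true at w and its target condition true at v."""
    new_arrows = {agent: set() for agent in arrows}
    for agent, edges in arrows.items():
        for (w, v) in edges:
            if any(ag == agent and src in holds[w] and tgt in holds[v]
                   for (src, ag, tgt) in update):
                new_arrows[agent].add((w, v))
    return new_arrows

# "p is announced to agent a only": a keeps arrows into p-worlds,
# b keeps all arrows ("top" is true everywhere).
arrows = {"a": {("w1", "w1"), ("w1", "w2")}, "b": {("w1", "w1"), ("w1", "w2")}}
holds = {"w1": {"p", "top"}, "w2": {"top"}}
print(arrow_update(arrows, holds, update=[("top", "a", "p"), ("top", "b", "top")]))
# agent a loses the arrow into the non-p world w2; agent b keeps both arrows
```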


Author(s):  
Qiang Liu
Yongmei Liu

In the past decade, multi-agent epistemic planning has received much attention from both the dynamic logic and planning communities. Common knowledge is an essential part of multi-agent modal logics and plays an important role in the coordination and interaction of multiple agents. However, existing implementations of multi-agent epistemic planning provide very limited support for common knowledge, essentially only static propositional common knowledge. Our work extends an existing multi-agent epistemic planning framework based on higher-order belief change with the capability to deal with common knowledge. We propose a novel normal form for multi-agent KD45 logic with common knowledge, together with satisfiability solving, revision, and update algorithms for this normal form. Based on these algorithms, we implemented a multi-agent epistemic planner with common knowledge called MEPC. Our planner successfully generated solutions for several domains that demonstrate typical uses of common knowledge.
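The abstract does not detail how common knowledge is evaluated; as general background (independent of MEPC and its normal form), common knowledge of a fact at a world amounts to the fact holding in every world reachable through one or more steps along any agent's accessibility arrows. The sketch below implements that reachability check on an explicit model; the data layout is an assumption for illustration.

```python
from collections import deque

def common_knowledge(arrows, holds, start, fact):
    """True iff `fact` holds at every world reachable from `start` in one
    or more steps along any agent's accessibility arrows."""
    union = {edge for edges in arrows.values() for edge in edges}
    frontier = deque(v for (w, v) in union if w == start)
    seen = set()
    while frontier:
        v = frontier.popleft()
        if v in seen:
            continue
        seen.add(v)
        if fact not in holds[v]:
            return False
        frontier.extend(u for (w, u) in union if w == v)
    return True

# p holds in every world either agent considers possible from w0,
# directly or indirectly, so p is common knowledge at w0.
arrows = {"a": {("w0", "w1")}, "b": {("w0", "w0"), ("w1", "w2")}}
holds = {"w0": {"p"}, "w1": {"p"}, "w2": {"p"}}
print(common_knowledge(arrows, holds, "w0", "p"))  # True
```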

