Argument evaluation in multi-agent justification logics

2019 ◽  
Author(s):  
Alfredo Burrieza ◽  
Antonio Yuste-Ginel

Abstract: Argument evaluation, one of the central problems in argumentation theory, consists in studying what makes an argument a good one. This paper proposes a formal approach to argument evaluation from the perspective of justification logic. We adopt a multi-agent setting, accepting the intuitive idea that arguments are always evaluated by someone. Two general restrictions are imposed on our analysis: non-deductive arguments are left out, and the goal of argument evaluation is fixed as supporting a given proposition. Methodologically, our approach uses several existing tools borrowed from justification logic, awareness logic, doxastic logic and logics for belief dependence. We start by introducing a basic logic for argument evaluation, in which a list of argumentative and doxastic notions can be expressed. Later on, we discuss how to capture the mentioned form of argument evaluation by defining a preference operator in the object language. The intuitive picture behind this definition is that, when assessing a pair of arguments, the agent puts them to a test consisting of several criteria (filters). As a result of this process, the agent establishes a preference relation among the evaluated arguments. After showing that this operator suffers from a special form of logical omniscience, called preferential omniscience, we discuss how to define an explicit version of it that is better suited to non-ideal agents. The present work exploits the formal notion of awareness in order to model several informal phenomena: awareness of sentences, availability of arguments, and communication between agents and external sources (advisers). We discuss several extensions of the basic logic and offer completeness and decidability results for all of them.
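To make the filter-based picture concrete, here is a minimal Python sketch (not from the paper; the `Argument` class, the particular filters and the strict-superset comparison are illustrative assumptions): two arguments are run through the same battery of criteria, and one is preferred when it passes strictly more of them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    """A deductive argument: a finite set of premises supporting a conclusion."""
    premises: frozenset
    conclusion: str

def prefers(agent_beliefs, filters, a, b, goal):
    """True if argument `a` is preferred over `b` as support for `goal`:
    `a` passes every filter that `b` passes, plus at least one more.
    Each filter is a predicate over (agent_beliefs, argument, goal)."""
    passed_a = {f.__name__ for f in filters if f(agent_beliefs, a, goal)}
    passed_b = {f.__name__ for f in filters if f(agent_beliefs, b, goal)}
    return passed_b < passed_a  # strict subset: a dominates b on the criteria

# Two illustrative filters (placeholders, not the paper's actual criteria).
def supports_goal(beliefs, arg, goal):
    return arg.conclusion == goal

def premises_believed(beliefs, arg, goal):
    return arg.premises <= beliefs

beliefs = {"p", "q"}
a1 = Argument(frozenset({"p"}), "g")
a2 = Argument(frozenset({"r"}), "g")
print(prefers(beliefs, [supports_goal, premises_believed], a1, a2, "g"))  # True
```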

Author(s):  
Laurent Perrussel ◽  
Jean-Marc Thévenin

This paper focuses on the features of belief change in a multi-agent context where agents consider both beliefs and disbeliefs. Disbeliefs represent explicit ignorance and are useful to prevent agents from drawing conclusions merely out of ignorance. Agents receive messages carrying information from other agents and change their belief state accordingly. An agent may refuse to adopt incoming information if it prefers its own (dis)beliefs. For this, each agent maintains a preference relation over its own beliefs and disbeliefs in order to decide whether it accepts or rejects incoming information whenever inconsistencies occur. This preference relation may be built by considering several criteria, such as the reliability of the sender of statements or temporal aspects. This process leads to non-prioritized belief revision. In this context we first present the * and − operators, which allow an agent to revise or contract its belief state in a non-prioritized way when it receives an incoming belief or disbelief, respectively. We show that these operators behave properly. Based on this, we then illustrate how the receiver and the sender may argue when the incoming (dis)belief is refused. We describe pieces of dialog where (i) the sender tries to convince the receiver by sending arguments in favor of the original (dis)belief and (ii) the receiver justifies its refusal by sending arguments against the original (dis)belief. We show that the notion of acceptability of these arguments can be represented in a simple way by using the non-prioritized change operators * and −. The advantage of argumentation dialogs is twofold. First, whenever arguments are acceptable, the sender or the receiver reconsiders its belief state; the main result is an improvement of the reconsidered belief state. Second, the sender may not be aware of some sets of rules which act as constraints to reach a specific conclusion, and it can discover them through argumentation dialogs.
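The following Python fragment is a minimal sketch of what a non-prioritized revision/contraction step could look like, assuming an atomic representation of statements and a numeric entrenchment ranking; the class, the ranking scheme and the example names are illustrative assumptions, not the paper's formal * and − operators.

```python
class BeliefState:
    """A toy belief state with explicit beliefs and disbeliefs.

    Each statement is an atom; `preference` maps a statement to a rank,
    higher meaning more entrenched.  A belief about p conflicts with a
    disbelief about p, and vice versa."""

    def __init__(self, beliefs, disbeliefs, preference):
        self.beliefs = set(beliefs)
        self.disbeliefs = set(disbeliefs)
        self.preference = dict(preference)  # statement -> rank

    def revise(self, statement, incoming_rank):
        """Non-prioritized revision (*): accept an incoming belief only if it
        is preferred to the conflicting disbelief; otherwise refuse it."""
        if statement in self.disbeliefs:
            if incoming_rank <= self.preference.get(statement, 0):
                return False                    # refuse: own disbelief wins
            self.disbeliefs.discard(statement)  # give up the disbelief
        self.beliefs.add(statement)
        self.preference[statement] = incoming_rank
        return True

    def contract(self, statement, incoming_rank):
        """Non-prioritized contraction (−): adopt an incoming disbelief only
        if it is preferred to the conflicting belief; otherwise refuse it."""
        if statement in self.beliefs:
            if incoming_rank <= self.preference.get(statement, 0):
                return False                    # refuse: own belief wins
            self.beliefs.discard(statement)
        self.disbeliefs.add(statement)
        self.preference[statement] = incoming_rank
        return True

agent = BeliefState({"p"}, {"q"}, {"p": 3, "q": 1})
print(agent.revise("q", 2))    # True: the incoming belief outranks the disbelief
print(agent.contract("p", 2))  # False: the agent keeps believing p
```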


1999 ◽  
Vol 7 (1) ◽  
pp. 177-203 ◽  
Author(s):  
Douglas N. Walton

In this paper, it is shown how formal dialectic can be extended to model multi-agent argumentation in which each participant is an agent. An agent is viewed as a participant in a dialogue who not only has goals and the capability for actions, but who also has stable characteristics that can be relevant to an assessment of some of her arguments used in that dialogue. When agents engage in argumentation in dialogues, each agent has a credibility function that can be adjusted upwards or downwards by certain types of arguments brought forward by the other agent in the dialogue. One type is the argument against the person, or argumentum ad hominem, in which a personal attack on one party's character is used to attack his argument. Another is the appeal to expert opinion, traditionally associated with the informal fallacy called the argumentum ad verecundiam. In any particular case, an agent will begin a dialogue with a given degree of credibility, and what is here called the credibility function will affect the plausibility of the arguments put forward by that agent. In this paper, an agent is shown to have specific character traits that are vital to properly judging how this credibility function should affect the plausibility of her arguments, including veracity, prudence, sincerity and openness to opposed arguments. When one of these traits is a relevant basis for an adjustment in a credibility function, there is a shift to a subdialogue in which the argumentation in the case is re-evaluated. In such a case, it is shown how the outcome can legitimately be a reduction in the credibility rating of the arguer who was attacked. It is then shown how the credibility function should be brought into the argument evaluation in the case, yielding the outcome that the argument is assigned a lower plausibility value.
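The Python fragment below is an illustrative sketch, not Walton's actual scheme: it assumes credibility and plausibility are numbers in [0, 1], that a relevant and successful character attack lowers credibility by a fixed penalty, and that plausibility is scaled multiplicatively by credibility. All names and numbers are made up for illustration.

```python
def adjusted_plausibility(base_plausibility, credibility):
    """Plausibility of an argument as put forward by an agent, scaled by that
    agent's current credibility (both in [0, 1]).  Multiplicative scaling is
    an illustrative choice, not the scheme argued for in the paper."""
    return base_plausibility * credibility

def apply_character_attack(credibility, trait_relevant, penalty=0.3):
    """Outcome of a subdialogue re-evaluating a character attack: the
    credibility rating is lowered only if the attacked trait (veracity,
    prudence, sincerity, openness) was a relevant basis for adjustment."""
    return max(0.0, credibility - penalty) if trait_relevant else credibility

cred = 0.9
print(adjusted_plausibility(0.8, cred))                  # ~0.72 before the attack
cred = apply_character_attack(cred, trait_relevant=True)
print(adjusted_plausibility(0.8, cred))                  # ~0.48 after the attack
```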


Author(s):  
Dianxiang Xu ◽  
Richard A. Volz ◽  
Thomas R. Ioerger ◽  
John Yen

How agents accomplish a goal task in a multi-agent system is usually specified by multi-agent plans built from basic actions (e.g. operators) of which the agents are capable. The plan specification provides the agents with a shared mental model of how they are supposed to collaborate with each other to achieve the common goal. Making sure that the plans are reliable and fit for the purpose for which they are designed is a critical problem with this approach. To address this problem, this paper presents a formal approach to modeling and analyzing multi-agent behaviors using Predicate/Transition (PrT) nets, a high-level formalism of Petri nets. We model a multi-agent problem by representing agent capabilities as transitions in PrT nets. To analyze a multi-agent PrT model, we adapt planning graphs as a compact structure for reachability analysis that is consistent with the concurrent semantics. We also demonstrate that, by analyzing the dependency relations among the transitions in the PrT model, one can check whether the parallel actions specified in multi-agent plans can actually be executed in parallel and whether the plans can achieve the goal.
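A toy Python sketch of the kind of dependency check involved, assuming plain place/transition nets rather than the full Predicate/Transition nets of the paper; the net, the transition names and the token counts are made up for illustration. Two transitions can be executed truly in parallel only if the current marking covers their combined input demand, so neither consumes tokens the other needs.

```python
from collections import Counter

# A toy place/transition net: each transition consumes tokens from its input
# places and produces tokens in its output places.
transitions = {
    "pickup_A": {"pre": Counter({"free_gripper_A": 1, "part_ready": 1}),
                 "post": Counter({"holding_A": 1})},
    "pickup_B": {"pre": Counter({"free_gripper_B": 1, "part_ready": 1}),
                 "post": Counter({"holding_B": 1})},
}

def enabled(marking, t):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in transitions[t]["pre"].items())

def can_fire_in_parallel(marking, t1, t2):
    """Two transitions can be executed in parallel (true concurrency) iff the
    marking supplies their combined input demand."""
    demand = transitions[t1]["pre"] + transitions[t2]["pre"]
    return all(marking[p] >= n for p, n in demand.items())

marking = Counter({"free_gripper_A": 1, "free_gripper_B": 1, "part_ready": 1})
print(enabled(marking, "pickup_A"), enabled(marking, "pickup_B"))  # True True
print(can_fire_in_parallel(marking, "pickup_A", "pickup_B"))       # False: both need part_ready
```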

