Argument & Computation
Latest Publications


TOTAL DOCUMENTS: 159 (five years: 45)
H-INDEX: 16 (five years: 2)
Published by IOS Press
ISSN: 1946-2174, 1946-2166

2021, pp. 1-34
Author(s): Jean-Guy Mailly

Abstract argumentation, as originally defined by Dung, is a model for describing definite information about arguments and the relationships between them: in an abstract argumentation framework (AF), the agent knows for sure whether a given argument or attack exists. This means that the absence of an attack between two arguments can be interpreted as “we know that the first argument does not attack the second one”. But the question of uncertainty in abstract argumentation has received much attention in recent years. In this paper, we survey approaches that make it possible to express information like “There may (or may not) be an attack between these arguments”. We describe the main models that incorporate qualitative uncertainty (or ignorance) in abstract argumentation, as well as some applications of these models. We also highlight some open questions that deserve attention in the future.
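As a rough illustration of the kind of information these models capture (not an implementation from the paper), the sketch below represents an incomplete AF with definite and uncertain attacks and checks whether an argument is accepted in some or in every completion; `grounded_extension`, `completions`, and the example arguments are our own illustrative choices.

```python
from itertools import combinations

def grounded_extension(args, attacks):
    """Grounded extension of a standard AF: iterate the characteristic
    function from the empty set until a fixed point is reached."""
    ext = set()
    while True:
        # an argument is acceptable w.r.t. ext if every attacker is attacked by ext
        acceptable = {a for a in args
                      if all(any((d, b) in attacks for d in ext)
                             for (b, c) in attacks if c == a)}
        if acceptable == ext:
            return ext
        ext = acceptable

def completions(certain_attacks, uncertain_attacks):
    """Every AF obtained by deciding each uncertain attack one way or the other."""
    unc = list(uncertain_attacks)
    for r in range(len(unc) + 1):
        for chosen in combinations(unc, r):
            yield certain_attacks | set(chosen)

# Example: a attacks b for sure; the attack from c to a may or may not exist.
args = {"a", "b", "c"}
certain = {("a", "b")}
uncertain = {("c", "a")}

accepted_in = [("b" in grounded_extension(args, att))
               for att in completions(certain, uncertain)]
print(any(accepted_in), all(accepted_in))  # possible vs. necessary acceptance of b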


2021, pp. 1-39
Author(s): Alison R. Panisson, Peter McBurney, Rafael H. Bordini

There are many benefits to using argumentation-based techniques in multi-agent systems, as clearly shown in the literature. Such benefits come not only from the expressiveness that argumentation-based techniques bring to agent communication, but also from the reasoning and decision-making capabilities under conditions of conflicting and uncertain information that argumentation enables for autonomous agents. When developing multi-agent applications in which argumentation will be used to improve agent communication and reasoning, argumentation schemes (reasoning patterns for argumentation) are useful in addressing the requirements of the application domain with regard to argumentation (e.g., defining the scope in which argumentation will be used by agents in that particular application). In this work, we propose an argumentation framework that takes the particular structure of argumentation schemes into account at its core. This paper formally defines such a framework and experimentally evaluates its implementation for both argumentation-based reasoning and dialogues.
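To make the notion of an argumentation scheme concrete, here is a small, hypothetical sketch of how a scheme with premises, a conclusion, and critical questions might be represented and instantiated in code; it is not the framework proposed in the paper, and the expert-opinion scheme and all identifiers are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentationScheme:
    """A reasoning pattern: premises and a conclusion with placeholders,
    plus the critical questions that can defeat instantiations of it."""
    name: str
    premises: list
    conclusion: str
    critical_questions: list = field(default_factory=list)

    def instantiate(self, **bindings):
        """Fill in the placeholders, producing a concrete argument."""
        return {
            "scheme": self.name,
            "premises": [p.format(**bindings) for p in self.premises],
            "conclusion": self.conclusion.format(**bindings),
            "critical_questions": [q.format(**bindings) for q in self.critical_questions],
        }

expert_opinion = ArgumentationScheme(
    name="argument_from_expert_opinion",
    premises=["{expert} is an expert in {domain}", "{expert} asserts that {claim}"],
    conclusion="{claim}",
    critical_questions=["Is {expert} a credible expert in {domain}?",
                        "Is {claim} consistent with what other experts assert?"],
)

arg = expert_opinion.instantiate(expert="Dr. Silva", domain="cardiology",
                                 claim="treatment T is effective")
print(arg["conclusion"])   # -> treatment T is effective
```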


2021, pp. 1-41
Author(s): Atefeh Keshavarzi Zafarghandi, Rineke Verbrugge, Bart Verheij

Abstract dialectical frameworks (ADFs) have been introduced as a formalism for modeling argumentation that allows general logical acceptance conditions and the corresponding argument evaluation. The different criteria used to settle the acceptance of arguments are called semantics. Semantics of ADFs have so far mainly been defined based on the concept of admissibility. However, the notion of strongly admissible semantics studied for abstract argumentation frameworks (AFs) has not yet been introduced for ADFs. In the current work, we present the concept of strong admissibility of interpretations for ADFs. Further, we show that strongly admissible interpretations of ADFs form a lattice with the grounded interpretation as the maximal element. We also present algorithms to answer the following decision problems: (1) whether a given interpretation is a strongly admissible interpretation of a given ADF, and (2) whether a given argument is strongly acceptable/deniable in a given interpretation of a given ADF. In addition, we show that the strongly admissible semantics of ADFs forms a proper generalization of the strongly admissible semantics of AFs.
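For readers unfamiliar with ADFs, the sketch below (ours, not from the paper) shows the basic machinery the abstract builds on: statements with Boolean acceptance conditions over their parents, and the standard consensus evaluation of such a condition under a three-valued interpretation; the toy ADF and function names are illustrative.

```python
from itertools import product

# A toy ADF: each statement has a list of parents and an acceptance condition,
# a Boolean function of the parents' truth values.
adf = {
    "a": ([], lambda: True),          # a is always accepted
    "b": (["a"], lambda a: a),        # b is accepted iff a is
    "c": (["b"], lambda b: not b),    # c is accepted iff b is not
}

def evaluate(statement, interpretation):
    """Three-valued (consensus) evaluation of a statement's acceptance condition:
    't' if it holds in every two-valued completion of the interpretation,
    'f' if it holds in none, and 'u' otherwise."""
    parents, condition = adf[statement]
    undecided = [p for p in parents if interpretation[p] == "u"]
    results = set()
    # enumerate all two-valued choices for the currently undecided parents
    for choice in product([True, False], repeat=len(undecided)):
        assignment = dict(zip(undecided, choice))
        values = [assignment[p] if p in assignment else interpretation[p] == "t"
                  for p in parents]
        results.add(condition(*values))
    if results == {True}:
        return "t"
    if results == {False}:
        return "f"
    return "u"

v = {"a": "t", "b": "u", "c": "u"}
print({s: evaluate(s, v) for s in adf})   # -> {'a': 't', 'b': 't', 'c': 'u'}
```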


2021, pp. 1-26
Author(s): Ryan Phillip Quandt, John Licato

Argumentation schemes bring artificial intelligence into day-to-day conversation. Interpreting the force of an utterance, be it an assertion, command, or question, remains a necessary task in achieving this goal. But it is not an easy task. An interpretation of force depends on a speaker’s use of words for a hearer at the moment of utterance. Ascribing force relies on grammatical mood, though not in a straightforward or regular way. We face a dilemma: on one hand, deciding force requires an understanding of the speaker’s words; on the other hand, word meaning may shift given the force with which the words are spoken. A precise theory of how mood and force relate would help us handle this dilemma and, in doing so, expand the use of argumentation schemes in language processing. Yet, as our analysis shows, force is an inconstant variable, one that contributes to a scheme’s defeasibility. We propose using critical questions to help decide the force of utterances.


2021, pp. 1-20
Author(s): Nancy L. Green

Argumentation schemes have played a key role in our research projects on computational models of natural argument over the last decade. The catalogue of schemes in Walton, Reed and Macagno’s 2008 book, Argumentation Schemes, served as our starting point for analysis of the naturally occurring arguments in written text, i.e., text in different genres having different types of author, audience, and subject domain (genetics, international relations, environmental science policy, AI ethics), for different argument goals, and for different possible future applications. We would often first attempt to analyze the arguments in our corpora in terms of those schemes, then adapt schemes as needed for the goals of the project, and in some cases implement them for use in computational models. Among computational researchers, the main interest in argumentation schemes has been for use in argument mining by applying machine learning methods to existing argument corpora. In contrast, a primary goal of our research has been to learn more about written arguments themselves in various contemporary fields. Our approach has been to manually analyze semantics, discourse structure, argumentation, and rhetoric in texts. Another goal has been to create sharable digital corpora containing the results of our studies. Here, our approach has been to define argument schemes for use by human corpus annotators or for use in logic programs for argument mining. A third goal has been to design useful computer applications based upon our studies, such as argument diagramming systems that provide argument schemes as building blocks. This paper describes each of the various projects: the methods, the argument schemes that were identified, and how they were used. Then a synthesis of the results is given, with a discussion of open issues.


2021, pp. 1-27
Author(s): Isabel Sassoon, Nadin Kökciyan, Sanjay Modgil, Simon Parsons

This paper demonstrates how argumentation schemes can be used in decision support systems that help clinicians make treatment decisions. The work builds on the use of computational argumentation, a rigorous approach to reasoning with complex data that places strong emphasis on being able to justify and explain the decisions that are recommended. The main contribution of the paper is to present a novel set of specialised argumentation schemes that can be used in the context of a clinical decision support system to assist in reasoning about which treatments to offer. These schemes provide a mechanism for capturing clinical reasoning in such a way that it can be handled by the reasoning mechanisms of formal argumentation. The paper describes how the integration between argumentation schemes and formal argumentation may be carried out, sketches how this is achieved by an implementation that we have created, and illustrates the overall process on a small set of case studies.
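As a purely hypothetical illustration of the idea (not the schemes or implementation described in the paper), the sketch below instantiates a treatment-proposal pattern from patient facts and uses a contraindication check in the role of a critical question; the guideline data, patient record, and identifiers are all made up.

```python
# Hypothetical sketch: a specialised "proposed treatment" scheme applied to
# patient facts; a contraindication acts as a critical question that defeats
# a candidate argument.

guidelines = [
    {"condition": "hypertension", "treatment": "ACE inhibitor", "goal": "reduce blood pressure"},
    {"condition": "hypertension", "treatment": "thiazide diuretic", "goal": "reduce blood pressure"},
]

patient = {
    "conditions": {"hypertension"},
    "contraindications": {"ACE inhibitor"},   # e.g. due to a prior adverse reaction
}

def propose_treatments(patient, guidelines):
    """Instantiate the treatment scheme for every matching guideline, then apply
    the 'is the treatment contraindicated?' critical question."""
    arguments = []
    for g in guidelines:
        if g["condition"] in patient["conditions"]:
            claim = (f"{g['treatment']} achieves the goal '{g['goal']}' "
                     f"for {g['condition']}, so it should be offered")
            defeated = g["treatment"] in patient["contraindications"]
            arguments.append({"argument": claim, "defeated": defeated})
    return [a for a in arguments if not a["defeated"]]

for a in propose_treatments(patient, guidelines):
    print(a["argument"])   # only the thiazide diuretic argument survives
```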


2021, pp. 1-37
Author(s): Phan Minh Thang, Phan Minh Dung, Jiraporn Pooksook

We study the semantics of dialectical proof procedures. As dialectical proof procedures are in general sound but not complete with respect to admissibility semantics, a natural question is whether we can give a more precise semantic characterization of what they compute. Based on a new notion of infinite arguments representing (possibly infinite) loops, we introduce a stricter notion of admissibility, referred to as strict admissibility, and show that dialectical proof procedures are in general sound and complete with respect to strict admissibility.
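For context, ordinary Dung admissibility, against which soundness and completeness of such procedures are usually measured, can be checked as in the sketch below (our illustration); the strict admissibility introduced in the paper is a stronger notion based on infinite arguments and is not captured by this code.

```python
def is_conflict_free(S, attacks):
    """No member of S attacks another member of S."""
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a, attacks):
    """Every attacker of a is counter-attacked by some member of S."""
    return all(any((c, b) in attacks for c in S)
               for (b, x) in attacks if x == a)

def is_admissible(S, attacks):
    """Dung admissibility: conflict-free and self-defending."""
    return is_conflict_free(S, attacks) and all(defends(S, a, attacks) for a in S)

attacks = {("a", "b"), ("b", "a"), ("b", "c")}
print(is_admissible({"a"}, attacks))   # True: a defends itself against b
print(is_admissible({"c"}, attacks))   # False: nothing defends c against b
```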


2021, pp. 1-14
Author(s): Bruno Yun, Srdjan Vesic, Nir Oren

In this paper we describe an argumentation-based representation of normal form games, and demonstrate how argumentation can be used to compute pure strategy Nash equilibria. Our approach builds on Modgil’s Extended Argumentation Frameworks. We demonstrate its correctness, prove several theoretical properties it satisfies, and outline how it can be used to explain why certain strategies are Nash equilibria to a non-expert human user.
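The paper computes equilibria via an argumentation encoding; for comparison, the defining property of a pure strategy Nash equilibrium can be checked directly by brute force, as in this sketch (our illustration, using a prisoner's dilemma payoff table as the example).

```python
from itertools import product

def pure_nash_equilibria(payoffs, strategies):
    """Brute-force check of the defining property: no player can gain by
    unilaterally deviating. payoffs[profile] is a tuple of utilities."""
    equilibria = []
    for profile in product(*strategies):
        stable = True
        for player, options in enumerate(strategies):
            for alternative in options:
                deviation = profile[:player] + (alternative,) + profile[player + 1:]
                if payoffs[deviation][player] > payoffs[profile][player]:
                    stable = False
        if stable:
            equilibria.append(profile)
    return equilibria

# Prisoner's dilemma payoffs: (row player's utility, column player's utility)
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
print(pure_nash_equilibria(payoffs, [["C", "D"], ["C", "D"]]))   # [('D', 'D')]
```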


2021, pp. 1-36
Author(s): Henry Prakken, Rosa Ratsma

This paper proposes a formal top-level model for explaining the outputs of machine-learning-based decision-making applications and evaluates it experimentally with three data sets. The model draws on AI & law research on argumentation with cases, which models how lawyers draw analogies to past cases and discuss their relevant similarities and differences in terms of relevant factors and dimensions in the problem domain. A case-based approach is natural since the input data of machine-learning applications can be seen as cases. While the approach is motivated by legal decision making, it also applies to other kinds of decision making, such as commercial decisions about loan applications or employee hiring, as long as the outcome is binary and the input conforms to this paper’s factor or dimension format. The model is top-level in that it can be extended with more refined accounts of similarities and differences between cases. It is shown to overcome several limitations of similar argumentation-based explanation models, which only handle binary features and do not represent the tendency of features towards particular outcomes. The results of the experimental evaluation studies indicate that the model may be feasible in practice, but that further development and experimentation are needed to confirm its usefulness as an explanation model. The main challenges here are selecting from a large number of possible explanations, reducing the number of features in the explanations, and adding more meaningful information to them. It also remains to be investigated how suitable our approach is for explaining non-linear models.
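As a rough illustration of the factor-based intuition (not the paper’s top-level model), the sketch below treats cases as sets of factors with outcome tendencies and explains a predicted outcome by citing a precedent’s shared and distinguishing factors; all names and data are made up.

```python
# Illustrative only: cases are sets of factors, each factor tending towards one
# outcome. An explanation cites a precedent with the predicted outcome and
# reports shared and distinguishing factors.

factor_tendency = {"f1": "pro", "f2": "pro", "f3": "con", "f4": "con"}

precedents = [
    {"name": "case_A", "factors": {"f1", "f3"}, "outcome": "pro"},
    {"name": "case_B", "factors": {"f3", "f4"}, "outcome": "con"},
]

def explain(focus_factors, prediction, precedents):
    """Explain a predicted outcome for the focus case by comparison with a
    precedent that has the same outcome."""
    for p in precedents:
        if p["outcome"] != prediction:
            continue
        shared = focus_factors & p["factors"]
        supporting = {f for f in shared if factor_tendency[f] == prediction}
        distinctions = focus_factors ^ p["factors"]   # factors not shared
        return (f"Outcome '{prediction}', as in {p['name']}: "
                f"shared factors {sorted(shared)} "
                f"(of which {sorted(supporting)} favour '{prediction}'); "
                f"distinctions {sorted(distinctions)}")
    return "no precedent with this outcome"

print(explain({"f1", "f2", "f3"}, "pro", precedents))
```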

