An Efficient Argumentation Framework for Negotiating Autonomous Agents

Author(s):  
Michael Schroeder


2021 ◽  
pp. 1-39
Author(s):  
Alison R. Panisson ◽  
Peter McBurney ◽  
Rafael H. Bordini

There are many benefits to using argumentation-based techniques in multi-agent systems, as the literature clearly shows. Such benefits come not only from the expressiveness that argumentation-based techniques bring to agent communication but also from the reasoning and decision-making capabilities that argumentation enables for autonomous agents under conditions of conflicting and uncertain information. When developing multi-agent applications in which argumentation is used to improve agent communication and reasoning, argumentation schemes (reasoning patterns for argumentation) help address the application domain's requirements with regard to argumentation (e.g., defining the scope in which agents will use argumentation in that particular application). In this work, we propose an argumentation framework that takes the particular structure of argumentation schemes into account at its core. This paper formally defines such a framework and experimentally evaluates its implementation for both argumentation-based reasoning and dialogues.
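To make the idea concrete, here is a minimal sketch (in Python, purely illustrative: the class, the scheme, and the bindings below are hypothetical simplifications, not the authors' formal framework) of how an argumentation scheme can be represented as a parameterised reasoning pattern whose instantiation yields a concrete argument together with the critical questions that may attack it.

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentationScheme:
    """A reasoning pattern: premises, a conclusion, and the critical
    questions that can challenge arguments built from the scheme."""
    name: str
    premises: list
    conclusion: str
    critical_questions: list = field(default_factory=list)

    def instantiate(self, **bindings):
        """Bind the scheme's variables to produce a concrete argument."""
        fill = lambda template: template.format(**bindings)
        return {
            "scheme": self.name,
            "premises": [fill(p) for p in self.premises],
            "conclusion": fill(self.conclusion),
            "critical_questions": [fill(q) for q in self.critical_questions],
        }

# Walton's classic "argument from expert opinion" scheme as an example.
expert_opinion = ArgumentationScheme(
    name="expert_opinion",
    premises=["{e} is an expert in domain {d}",
              "{e} asserts that {p}"],
    conclusion="presumably, {p}",
    critical_questions=["Is {e} a credible expert in {d}?",
                        "Is {p} consistent with what other experts assert?"],
)

argument = expert_opinion.instantiate(
    e="agent_meteorologist", d="weather", p="it will rain tomorrow")
print(argument["conclusion"])  # presumably, it will rain tomorrow
```

Instantiating the same scheme with different bindings yields structurally similar arguments, which is what allows a framework to treat the scheme itself, rather than individual arguments, as its core unit.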


2017 ◽  
Author(s):  
Eugenia Isabel Gorlin ◽  
Michael W. Otto

To live well in the present, we take direction from the past. Yet individuals may engage in a variety of behaviors that distort their past and current circumstances, reducing the likelihood of adaptive problem solving and decision making. In this article, we attend to self-deception as one such class of behaviors. Drawing upon research showing both the maladaptive consequences and the self-perpetuating nature of self-deception, we propose that self-deception is an understudied risk and maintaining factor for psychopathology, and we introduce a "cognitive-integrity"-based approach that may hold promise for increasing the reach and effectiveness of our existing therapeutic interventions. Pending empirical validation of this theoretically informed approach, we posit that patients may become more informed and autonomous agents in their own therapeutic growth by becoming more honest with themselves.


1999 ◽  
pp. 38-65 ◽  
Author(s):  
Margaret Morrison

2010 ◽  
Vol 83 (10) ◽  
pp. 1838-1850 ◽  
Author(s):  
Wenpin Jiao ◽  
Yanchun Sun ◽  
Hong Mei

2021 ◽  
Vol 10 (2) ◽  
pp. 27
Author(s):  
Roberto Casadei ◽  
Gianluca Aguzzi ◽  
Mirko Viroli

Research and technology developments on autonomous agents and autonomic computing promote a vision of artificial systems that are able to resiliently manage themselves and autonomously deal with issues at runtime in dynamic environments. Indeed, autonomy can be leveraged to unburden humans from mundane tasks (cf. driving and autonomous vehicles) and from the risk of operating in unknown or perilous environments (cf. rescue scenarios), or to support timely decision-making in complex settings (cf. data-centre operations). Beyond what individual autonomous agents can accomplish, a further opportunity lies in the collaboration of multiple agents or robots. Emerging macro-paradigms provide an approach to programming whole collectives towards global goals. Aggregate computing is one such paradigm, formally grounded in a calculus of computational fields that enables functional composition of collective behaviours which can be proved, under certain technical conditions, to be self-stabilising. In this work, we address the concept of collective autonomy, i.e., the form of autonomy that applies at the level of a group of individuals. As a contribution, we define an agent control architecture for aggregate multi-agent systems, discuss how the aggregate computing framework relates to both individual and collective autonomy, and show how it can be used to program collective autonomous behaviour. We exemplify the concepts through a simulated case study, and outline a research roadmap towards reliable aggregate autonomy.
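To give a flavour of the self-stabilisation property mentioned above, here is a minimal hypothetical sketch (plain Python, not the actual aggregate computing toolchain; field-calculus languages such as Protelis or ScaFi express this far more compactly) of the canonical gradient field: each device repeatedly takes the minimum of its neighbours' distance estimates plus the hop cost.

```python
# Round-based simulation of the canonical "gradient" field: every node
# estimates its distance to the nearest source by repeatedly taking the
# minimum over neighbours' estimates plus the edge weight.
import math

def gradient_round(values, neighbours, sources, weight=1.0):
    """One synchronous round of the self-stabilising gradient."""
    new_values = {}
    for node, nbrs in neighbours.items():
        if node in sources:
            new_values[node] = 0.0
        else:
            candidates = [values[n] + weight for n in nbrs]
            new_values[node] = min(candidates, default=math.inf)
    return new_values

# A small line topology 0 - 1 - 2 - 3, with node 0 as the source.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
values = {n: math.inf for n in neighbours}

for _ in range(10):  # iterate until the field stabilises
    values = gradient_round(values, neighbours, sources={0})

print(values)  # {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}
```

Whatever the initial estimates, the field converges to the true distances after enough rounds, which is the sense in which such composed collective behaviours are self-stabilising.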


Author(s):  
László Bernáth

It is an increasingly popular view among philosophers that moral responsibility can, in principle, be attributed to unconscious autonomous agents. This trend is already remarkable in itself, but it is even more interesting that most proponents of this view offer more or less the same argument to support their position. I argue that, as it stands, the Extension Argument, as I call it, is not sufficient to establish the thesis that unconscious autonomous agents can be morally responsible. I attempt to show that the Extension Argument would have to overcome especially strong ethical considerations; moreover, its epistemological grounds are not particularly solid, partly because the justifications of its premises conflict with one another.


Author(s):  
Sam Hepenstal ◽  
Leishi Zhang ◽  
Neesha Kodogoda ◽  
B.L. William Wong

Criminal investigations are guided by repetitive and time-consuming information retrieval tasks, often with high risk and high consequence. If artificial intelligence (AI) systems can automate lines of inquiry, they could reduce the burden on analysts and allow them to focus their efforts on analysis. However, there is a critical need for algorithmic transparency to address ethical concerns. In this paper, we use data gathered from Cognitive Task Analysis (CTA) interviews of criminal intelligence analysts and apply a novel analysis method to elicit question networks. We show how these networks form an event tree, where events are consolidated by capturing analyst intentions. The event tree is simplified with a Dynamic Chain Event Graph (DCEG) that provides a foundation for transparent autonomous investigations.
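The consolidation idea can be pictured with a small hypothetical sketch (Python; the functions and toy data are illustrative assumptions, not the authors' method or the full DCEG machinery): analysts' question sequences are laid out as an event tree, and tree positions whose next questions serve the same intentions are merged into stages.

```python
# Hypothetical sketch: build an event tree from analysts' question
# sequences, then merge positions sharing an intention signature, the
# kind of consolidation a Dynamic Chain Event Graph performs via stages.
from collections import defaultdict

def build_event_tree(sequences):
    """Each sequence is a list of (question, intention) steps."""
    tree = defaultdict(list)  # path prefix -> observed next steps
    for seq in sequences:
        for i in range(len(seq)):
            prefix = tuple(step[0] for step in seq[:i])
            if seq[i] not in tree[prefix]:
                tree[prefix].append(seq[i])
    return tree

def merge_by_intention(tree):
    """Group tree positions whose next steps serve the same intentions."""
    stages = defaultdict(list)
    for prefix, steps in tree.items():
        signature = tuple(sorted(intent for _, intent in steps))
        stages[signature].append(prefix)
    return stages

# Two toy lines of inquiry, as might be recorded in CTA interviews.
sequences = [
    [("Who owns the vehicle?", "identify_suspect"),
     ("Where was it seen?", "locate")],
    [("Who rented the flat?", "identify_suspect"),
     ("Where was it seen?", "locate")],
]
stages = merge_by_intention(build_event_tree(sequences))
for signature, positions in stages.items():
    print(signature, "->", positions)
```

In the toy output, the two positions that both lead to a "locate" question fall into one stage, mirroring how a DCEG compresses the event tree while keeping the structure of the inquiry transparent.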

