Architecture for Belief Revision in Multi-Agent Intelligent Systems

Author(s): Stanislav Ustymenko, Daniel G. Schwartz
2021, Vol. 17 (3), pp. 88-99
Author(s): Roderic A. Girle

Three foundational principles are introduced: intelligent systems, such as those that would pass the Turing test, should display multi-agent or interactional intelligence; multi-agent systems should be based on conceptual structures common to all interacting agents, machine and human; and multi-agent systems should have an underlying interactional logic, such as dialogue logic. In particular, a multi-agent rather than an orthodox analysis of the key concepts of knowledge and belief is discussed. The contrast that matters lies in the different questions and answers about the support for claims to know and claims to believe. A simple multi-agent system based on dialogue theory that provides for this difference is set out.


Author(s): Laurent Perrussel, Jean-Marc Thévenin

This paper focuses on the features of belief change in a multi-agent context where agents consider both beliefs and disbeliefs. Disbeliefs represent explicit ignorance and are useful to prevent agents from drawing conclusions out of ignorance. Agents receive messages holding information from other agents and change their belief states accordingly. An agent may refuse to adopt incoming information if it prefers its own (dis)beliefs. To this end, each agent maintains a preference relation over its own beliefs and disbeliefs in order to decide whether to accept or reject incoming information whenever inconsistencies occur. This preference relation may be built from several criteria, such as the reliability of the sender of statements or temporal aspects. This process leads to non-prioritized belief revision. In this context we first present the * and − operators, which allow an agent to revise or, respectively, contract its belief state in a non-prioritized way when it receives an incoming belief or, respectively, disbelief. We show that these operators behave properly. Building on this, we then illustrate how the receiver and the sender may argue when the incoming (dis)belief is refused. We describe pieces of dialogue in which (i) the sender tries to convince the receiver by sending arguments in favor of the original (dis)belief and (ii) the receiver justifies its refusal by sending arguments against the original (dis)belief. We show that the notion of acceptability of these arguments can be represented in a simple way using the non-prioritized change operators * and −. The advantage of argumentation dialogues is twofold. First, whenever arguments are acceptable, the sender or the receiver reconsiders its belief state; the main result is an improvement of the reconsidered belief state. Second, the sender may not be aware of some sets of rules which act as constraints on reaching a specific conclusion, and can discover them through argumentation dialogues.
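The core acceptance test can be caricatured in a few lines. This is a hedged sketch only: the names, the numeric preference scale, and the literal encoding (`p` for a belief, `~p` for a disbelief) are illustrative assumptions, not the paper's formal * operator; it shows only the non-prioritized idea that incoming information is refused when a conflicting (dis)belief is at least as preferred.

```python
# Illustrative sketch of non-prioritized revision (names and the numeric
# preference scale are hypothetical, not the paper's formalization).
# A literal "p" encodes a belief; "~p" encodes the matching disbelief.

def revise(state, incoming, preference):
    """Accept `incoming` only if no conflicting (dis)belief already in
    `state` is at least as preferred; otherwise refuse it."""
    conflict = incoming[1:] if incoming.startswith("~") else "~" + incoming
    if conflict in state and preference[conflict] >= preference[incoming]:
        return set(state)  # refusal: the agent prefers its own (dis)belief
    revised = {s for s in state if s != conflict}  # drop the conflict
    revised.add(incoming)
    return revised

# The agent explicitly disbelieves p, and ranks that disbelief above the
# incoming belief p (e.g., because its own source is more reliable).
state = {"~p"}
prefs = {"p": 1, "~p": 2}
```

Under these assumed preferences `revise(state, "p", prefs)` leaves the state unchanged, which is exactly the point where the paper's argumentation dialogues would begin.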


2021, Vol. 2094 (3), pp. 032033
Author(s): I. A. Kirikov, S. V. Listopad, A. S. Luchko

Abstract The paper proposes a model for negotiating intelligent agents’ ontologies in cohesive hybrid intelligent multi-agent systems. In this study, an intelligent agent is a relatively autonomous software entity with developed domain models and goal-setting mechanisms. When such agents have to work together within a single hybrid intelligent multi-agent system to solve some problem, the working process “goes wild” if there are significant differences between the agents’ “points of view” on the domain, the goals, and the rules of joint work. In this regard, in order to reduce the labor costs of integrating intelligent agents into a single system, the concept of cohesive hybrid intelligent multi-agent systems was proposed; such systems implement mechanisms for negotiating goals and domain models and for building a protocol for solving the problems posed. The presence of these mechanisms is especially important when building intelligent systems from intelligent agents created by various independent development teams.
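As a purely illustrative sketch (the paper's negotiation model is much richer, and these function and concept names are hypothetical), the first step of ontology negotiation can be thought of as separating the concept definitions two agents already share from those that need explicit negotiation:

```python
# Hypothetical sketch: two agents reconcile their domain ontologies by
# keeping identical definitions and flagging conflicts for negotiation.

def negotiate_ontologies(onto_a, onto_b):
    """Return (agreed, disputed): concepts defined identically by both
    agents versus concepts whose definitions conflict."""
    agreed, disputed = {}, {}
    for concept in onto_a.keys() | onto_b.keys():
        if concept in onto_a and concept in onto_b:
            if onto_a[concept] == onto_b[concept]:
                agreed[concept] = onto_a[concept]
            else:
                disputed[concept] = (onto_a[concept], onto_b[concept])
        else:
            # a concept known to only one agent can simply be adopted
            agreed[concept] = onto_a.get(concept, onto_b.get(concept))
    return agreed, disputed

# Agents built by independent teams agree on "route" but not on "cost".
a = {"route": "sequence of waypoints", "cost": "time in seconds"}
b = {"route": "sequence of waypoints", "cost": "distance in meters"}
agreed, disputed = negotiate_ontologies(a, b)
```

The `disputed` set is where the paper's negotiation mechanisms would have to do their actual work.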


Author(s): Nadjib Mesbahi, Okba Kazar, Saber Benharzallah, Merouane Zoubeidi, Djamil Rezki

Multi-agent systems (MAS) are a powerful technology for the design and implementation of autonomous intelligent systems that can handle distributed problem solving in a complex environment. This technology has played an important role in the development of data mining systems in the last decade, the purpose of which is to promote the extraction of information and knowledge from large databases and to make these systems more scalable. In this chapter, the authors present a clustering system based on cooperative agents working over a centralized, common ERP database to improve decision support in ERP systems. To achieve this, they use the multi-agent system paradigm to distribute the complexity of the k-means algorithm over several autonomous entities called agents, whose goal is to group records or observations into classes of similar objects. This helps business decision makers make good decisions, and the multi-agent system provides very good response times. To implement the proposed architecture, the JADE platform is convenient, since it provides a complete set of services and agents that comply with the FIPA specifications.
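The way k-means decomposes over agents can be sketched as follows. This is an illustration of the general idea, not the authors' JADE implementation: real JADE agents would exchange FIPA ACL messages, whereas here each "agent" is just a function over its own data partition, reporting partial sums to a coordinator.

```python
# Illustrative sketch of agent-distributed k-means (not the chapter's
# JADE/FIPA implementation): each mining agent summarizes its partition;
# a coordinator merges the summaries into one k-means iteration.
from math import dist  # Euclidean distance, Python 3.8+

def agent_partial_sums(points, centroids):
    """One mining agent: assign its local points to the nearest centroid
    and report per-cluster coordinate sums and counts."""
    k = len(centroids)
    sums = [[0.0] * len(centroids[0]) for _ in range(k)]
    counts = [0] * k
    for p in points:
        j = min(range(k), key=lambda c: dist(p, centroids[c]))
        counts[j] += 1
        sums[j] = [s + x for s, x in zip(sums[j], p)]
    return sums, counts

def coordinator_step(partitions, centroids):
    """Coordinator agent: merge all agents' partial results into new
    centroids, i.e., one distributed k-means iteration."""
    k = len(centroids)
    total_sums = [[0.0] * len(centroids[0]) for _ in range(k)]
    total_counts = [0] * k
    for partition in partitions:
        sums, counts = agent_partial_sums(partition, centroids)
        for j in range(k):
            total_counts[j] += counts[j]
            total_sums[j] = [t + s for t, s in zip(total_sums[j], sums[j])]
    return [
        [s / total_counts[j] for s in total_sums[j]] if total_counts[j]
        else list(centroids[j])  # keep an empty cluster's old centroid
        for j in range(k)
    ]

# Two agents, each holding part of the ERP records (here, 2-D points).
partitions = [[(0.0, 0.0), (0.2, 0.1)], [(5.0, 5.0), (5.1, 4.9)]]
centroids = [(0.0, 0.0), (5.0, 5.0)]
new_centroids = coordinator_step(partitions, centroids)
```

Only the (sum, count) summaries cross agent boundaries, which is what makes this decomposition attractive for a message-passing platform such as JADE.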


2008, pp. 1360-1367
Author(s): Cesar Analide, Paulo Novais, José Machado, José Neves

The work done by some authors in the fields of computer science, artificial intelligence, and multi-agent systems foresees an approximation of these disciplines to the social sciences, namely, in the areas of anthropology, sociology, and psychology. Much of this work has been done in terms of humanizing the behavior of virtual entities by expressing human-like feelings and emotions. Some authors (e.g., Ortony, Clore & Collins, 1988; Picard, 1997) suggest lines of action considering ways to assign emotions to machines. Attitudes like cooperation, competition, socialization, and trust are explored in many different areas (Arthur, 1994; Challet & Zhang, 1998; Novais et al., 2004). Other authors (e.g., Bazzan et al., 2000; Castelfranchi, Rosis & Falcone, 1997) recognize the importance of modeling virtual entities' mental states in an anthropopathic way. Indeed, an important motivation for the development of this project comes from the authors' work with artificial intelligence in the area of knowledge representation and reasoning, in terms of an extension to the language of logic programming, namely Extended Logic Programming (Alferes, Pereira & Przymusinski, 1998; Neves, 1984). On the other hand, the use of null values to deal with imperfect knowledge (Gelfond, 1994; Traylor & Gelfond, 1993) and the enforcement of exceptions to characterize the behavior of intelligent systems (Analide, 2004) are further justification for the adoption of these formalisms in this knowledge arena. Knowledge representation, as a way to describe the real world by mechanical, logical, or other means, will always be a function of a system's ability to describe the existing knowledge and its associated reasoning mechanisms. Indeed, in the conception of a knowledge representation system, different instances of knowledge must be taken into account.


2020, Vol. 35 (1)
Author(s): Roberta Calegari, Giovanni Ciatto, Viviana Mascardi, Andrea Omicini

Abstract Precisely when the success of sub-symbolic artificial intelligence (AI) techniques leads many non-computer-scientists and the non-technical media to identify them with the whole of AI, symbolic approaches are getting more and more attention as those that could make AI amenable to human understanding. Given the recurring cycles in AI history, we expect a revamp of technologies often tagged as “classical AI”—in particular, logic-based ones—to take place in the next few years. On the other hand, agents and multi-agent systems (MAS) have been at the core of the design of intelligent systems since their very beginning, and their long-term connection with logic-based technologies, which characterised their early days, might open new ways to engineer explainable intelligent systems. This is why understanding the current status of logic-based technologies for MAS is nowadays of paramount importance. Accordingly, this paper aims at providing a comprehensive view of those technologies by making them the subject of a systematic literature review (SLR). The resulting technologies are discussed and evaluated from two different perspectives: the MAS one and the logic-based one.


Robotics, 2019, Vol. 8 (2), pp. 25
Author(s): Arturs Ardavs, Mara Pudane, Egons Lavendelis, Agris Nikitenko

This paper proposes ViaBots, a long-term adaptive distributed intelligent systems model that combines organization theory and the multi-agent paradigm. The need for adaptivity in autonomous intelligent systems has become crucial due to the increase in the complexity and diversity of the tasks that autonomous robots are employed for. To deal with the design complexity of such systems, within the ViaBots model each part of the modeled system is designed as an autonomous agent and the entire model as a multi-agent system. Based on the viable system model, which is widely used to ensure viability (i.e., the long-term autonomy of organizations), the ViaBots model defines the roles a system must fulfill to be capable of adapting both to changes in its environment (such as changes in the task) and to changes within the system itself (such as the availability of a particular robot). Along with static role assignments, ViaBots proposes a mechanism for role transition from one agent to another as one of the key elements of long-term adaptivity. The model has been validated in a simulated environment using the example of a conveyor system. The simulated model enabled the multi-robot system to adapt to the quantity and characteristics of the available robots, as well as to changes in the parts to be processed by the system.
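The role-transition mechanism can be caricatured in a few lines. This is a hedged sketch under assumed names (`Agent`, `assign_role`, capability strings); the actual ViaBots model derives its roles from the viable system model and runs on simulated conveyor robots. The sketch shows only the core rule: a role stays with its current holder while that agent is available, and otherwise transfers to a capable peer.

```python
# Hypothetical sketch of role transition between agents; names are
# illustrative and do not come from the ViaBots implementation.

class Agent:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)
        self.available = True

def assign_role(role, agents, current=None):
    """Keep the role with its current holder while it is available and
    capable; otherwise transfer it to any available capable agent."""
    if current is not None and current.available and role in current.capabilities:
        return current
    for agent in agents:
        if agent.available and role in agent.capabilities:
            return agent
    return None  # the system cannot fill this role right now

# A welding robot drops out; the role transitions to a capable peer.
welder = Agent("r1", {"weld"})
backup = Agent("r2", {"weld", "paint"})
holder = assign_role("weld", [welder, backup])
welder.available = False
holder = assign_role("weld", [welder, backup], current=holder)
```

The same rule covers both kinds of change the paper mentions: environmental change (a new role appears) and internal change (an agent becomes unavailable).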

