Multi-Agents Collaboration in Open System

Author(s):  
Zina Houhamdi ◽  
Belkacem Athamena

Sharing constrained resources, accomplishing complex tasks, and achieving shared or individual goals are examples of activities that require collaboration between agents in multi-agent systems. Collaboration necessitates an effective team composed of a set of agents that do not have conflicting goals and express their willingness to cooperate. In such a team, the complex task is split into simple tasks, and each agent performs its assigned task to contribute to the fulfilment of the complex task. Nevertheless, team formation is challenging, especially in an open system consisting of self-interested agents that perform tasks to achieve several simultaneous, usually clashing, goals while sharing constrained resources. Clashing goals obstruct the collaboration's success, since a self-interested agent prefers its individual goals to the team's shared goal. In open systems, the collaborative team construction process is affected by the Multi-Agent System (MAS) model, the collaboration's target, and the dependencies between agents' goals. This study investigates how to allow agents to build collaborative teams that realize a set of goals concurrently in open systems with constrained resources. The paper proposes a fully distributed approach, the Collaborative Team Construction Model (CTCM). CTCM modifies the social reasoning model so that agents can achieve their individual and shared goals concurrently by sharing resources in an open MAS through the construction of collaborative teams. Each agent shares only partial information (to preserve privacy) and models the relationships among its goals. The proposed team construction approach supports a distributed decision-making process: in CTCM, each agent adapts its self-interest level and adjusts its willingness in order to form an effective collaborative team.
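As a purely illustrative sketch, and not the CTCM equations themselves, the following Java snippet shows the kind of distributed join decision such an agent might make: it combines an adjustable self-interest level with the compatibility between the team's shared goals and its own, and exposes only a partial view of its goals to preserve privacy. The class and member names (CollaborativeAgent, selfInterest, the 0.5 threshold) are assumptions made for illustration.

```java
import java.util.Set;

/**
 * Illustrative willingness-based join decision, inspired by the idea of CTCM
 * but NOT the authors' model: the scoring rule, the threshold, and all names
 * below are assumptions made for illustration only.
 */
public class CollaborativeAgent {

    private double selfInterest;            // in [0, 1]; 1 = purely self-interested
    private final Set<String> ownGoals;     // the agent's individual goals
    private final Set<String> publicGoals;  // partial information the agent is willing to share

    public CollaborativeAgent(double selfInterest, Set<String> ownGoals, Set<String> publicGoals) {
        this.selfInterest = selfInterest;
        this.ownGoals = ownGoals;
        this.publicGoals = publicGoals;
    }

    /** Partial information exposed to other agents (privacy-preserving view). */
    public Set<String> sharedView() {
        return publicGoals;
    }

    /** Fraction of the team's shared goals that do not clash with the agent's own goals. */
    private double compatibility(Set<String> teamGoals, Set<String> clashingGoals) {
        if (teamGoals.isEmpty()) return 0.0;
        long compatible = teamGoals.stream().filter(g -> !clashingGoals.contains(g)).count();
        return (double) compatible / teamGoals.size();
    }

    /**
     * Willingness = (1 - selfInterest) * compatibility; the agent joins the team
     * when willingness exceeds a fixed threshold (0.5 here, an arbitrary choice).
     */
    public boolean decideToJoin(Set<String> teamGoals, Set<String> clashingGoals) {
        double willingness = (1.0 - selfInterest) * compatibility(teamGoals, clashingGoals);
        return willingness > 0.5;
    }

    /** Lower self-interest (i.e., raise willingness) when past collaborations paid off. */
    public void adaptSelfInterest(double collaborationPayoff, double individualPayoff) {
        if (collaborationPayoff > individualPayoff) {
            selfInterest = Math.max(0.0, selfInterest - 0.1);
        } else {
            selfInterest = Math.min(1.0, selfInterest + 0.1);
        }
    }
}
```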

Games ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 8
Author(s):  
Gustavo Chica-Pedraza ◽  
Eduardo Mojica-Nava ◽  
Ernesto Cadena-Muñoz

Multi-Agent Systems (MAS) have been used to solve several optimization problems in control systems. MAS make it possible to understand the interactions between agents and the complexity of the system, thus generating functional models that are closer to reality. However, these approaches assume that information between agents is always available, which amounts to a full-information model. Approaches for scenarios where information constraints are a relevant issue have been growing in importance. In this sense, game-theoretic approaches appear as a useful technique that uses the concept of strategy to analyze the interactions of the agents and maximize agent outcomes. In this paper, we propose a distributed learning-based control method that allows analyzing the effect of exploration in MAS. The dynamics obtained use Q-learning from reinforcement learning as a way to include the concept of exploration in the classic, exploration-less replicator dynamics equation. Then, the Boltzmann distribution is used to introduce the Boltzmann-Based Distributed Replicator Dynamics as a tool for controlling agent behaviors. This distributed approach can be used in several engineering applications where communication constraints between agents must be considered. The behavior of the proposed method is analyzed using a smart grid application for validation purposes. Results show that, despite the lack of full information about the system, by controlling some parameters of the method it behaves similarly to traditional centralized approaches.
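The following Java sketch illustrates the general idea of adding Boltzmann-style exploration to replicator dynamics. It discretizes the commonly used Q-learning-derived replicator equation, a selection term plus an entropy-driven exploration term weighted by a temperature, which is a standard formulation and not necessarily the authors' exact one; the payoff values, step size, and temperature are illustrative assumptions.

```java
import java.util.Arrays;
import java.util.function.Function;

/**
 * Minimal sketch of replicator dynamics with Boltzmann-style exploration.
 * Discretizes the commonly used Q-learning-derived replicator equation
 *   dx_i/dt = (x_i / tau) * (f_i - fbar) + x_i * (sum_j x_j ln x_j - ln x_i),
 * one standard way to combine Boltzmann exploration with the classic
 * exploration-less replicator dynamics; not necessarily the paper's equations.
 */
public class BoltzmannReplicator {

    public static double[] step(double[] x, double[] payoff, double tau, double dt) {
        int n = x.length;
        double avgPayoff = 0.0, avgLog = 0.0;
        for (int i = 0; i < n; i++) {
            avgPayoff += x[i] * payoff[i];
            avgLog += x[i] * Math.log(x[i]);
        }
        double[] next = new double[n];
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            double selection = (x[i] / tau) * (payoff[i] - avgPayoff); // exploitation term
            double exploration = x[i] * (avgLog - Math.log(x[i]));     // Boltzmann/entropy term
            next[i] = Math.max(1e-9, x[i] + dt * (selection + exploration));
            sum += next[i];
        }
        for (int i = 0; i < n; i++) next[i] /= sum;                    // renormalize to a distribution
        return next;
    }

    public static void main(String[] args) {
        // Toy example: two strategies with fixed payoffs (stand-ins for, e.g.,
        // two allocation levels in a smart-grid resource-sharing problem).
        double[] x = {0.5, 0.5};
        Function<double[], double[]> payoff = state -> new double[]{1.0, 0.6};
        for (int k = 0; k < 200; k++) {
            x = step(x, payoff.apply(x), 0.1, 0.01);
        }
        System.out.println(Arrays.toString(x)); // mass shifts toward the higher-payoff strategy
    }
}
```

At the fixed point of these dynamics the population share over strategies approaches a Boltzmann (softmax) distribution over payoffs, so the temperature tau directly controls how much exploration survives at equilibrium.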


2013 ◽  
Vol 2013 ◽  
pp. 1-11
Author(s):  
Zhengxin Wang ◽  
Yang Cao

This paper studies the consensus problem for high-order multi-agent systems with and without delays. Consensus protocols that depend only on an agent's own partial information and on partial relative information from its neighbors are proposed for consensus and quasi-consensus, respectively. Firstly, some lemmas are presented; then a necessary and sufficient condition guaranteeing consensus is established under the delay-free consensus protocol. Furthermore, communication delays are considered, and some necessary and sufficient conditions for solving the quasi-consensus problem with delays are obtained. Finally, some simulations are given to verify the theoretical results.
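As a simple point of reference (first-order rather than the paper's high-order dynamics), the sketch below shows a discrete-time consensus update in which each agent uses only delayed relative information from its neighbors; the graph, step size, and delay are arbitrary illustrative choices rather than values from the paper.

```java
import java.util.Arrays;

/**
 * Illustrative first-order discrete-time consensus with a communication delay.
 * Each agent updates using only delayed relative information from its neighbors:
 *   x_i(k+1) = x_i(k) + eps * sum_j a_ij * (x_j(k-d) - x_i(k-d)).
 * This is a standard textbook protocol, not the high-order protocol of the paper.
 */
public class DelayedConsensus {

    public static void main(String[] args) {
        double[][] a = {            // undirected ring over 4 agents (adjacency matrix)
            {0, 1, 0, 1},
            {1, 0, 1, 0},
            {0, 1, 0, 1},
            {1, 0, 1, 0}
        };
        int n = a.length, delay = 2, steps = 300;
        double eps = 0.1;

        double[][] history = new double[steps + 1][n];
        history[0] = new double[]{1.0, -2.0, 3.0, 0.5};  // initial states

        for (int k = 0; k < steps; k++) {
            int kd = Math.max(0, k - delay);             // index of the delayed state
            for (int i = 0; i < n; i++) {
                double u = 0.0;
                for (int j = 0; j < n; j++) {
                    u += a[i][j] * (history[kd][j] - history[kd][i]);
                }
                history[k + 1][i] = history[k][i] + eps * u;
            }
        }
        System.out.println(Arrays.toString(history[steps])); // states close to the average 0.625
    }
}
```

For an undirected graph and a common delay, this update preserves the state average, so the agents converge to the average of the initial conditions whenever the step size and delay satisfy the usual stability bounds.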


2013 ◽  
Vol 457-458 ◽  
pp. 1069-1073
Author(s):  
Lei Ding

This paper investigates the consensus problem of multi-agent systems with partial information transmission under an undirected topology. A distributed consensus protocol is proposed that uses local velocity feedback and the position information received from neighbors. The consensus problem is converted into a stabilization problem by transforming the original system into a reduced-order state system. Then, by using graph theory and Jury's stability test, a necessary and sufficient condition for consensus of the multi-agent systems is derived. An example is given to illustrate the effectiveness of the presented results.
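The protocol structure described above (local velocity damping plus relative position information from neighbors) can be illustrated with a minimal double-integrator simulation; the gains and sampling period below are illustrative guesses, not values derived from the paper's Jury-test condition.

```java
import java.util.Arrays;

/**
 * Illustrative second-order (double-integrator) consensus protocol:
 *   u_i = -kv * v_i + kp * sum_j a_ij * (p_j - p_i)
 * i.e., each agent damps its own velocity (local feedback) and uses only the
 * relative positions of its neighbors. Gains and step size are illustrative;
 * stability in the sampled-data setting depends on them, which is what a
 * Jury-test-style condition characterizes.
 */
public class SecondOrderConsensus {

    public static void main(String[] args) {
        double[][] a = {            // undirected path graph over 3 agents
            {0, 1, 0},
            {1, 0, 1},
            {0, 1, 0}
        };
        double kp = 1.0, kv = 2.0, h = 0.05;   // position gain, velocity gain, step
        double[] p = {0.0, 4.0, -1.0};         // initial positions
        double[] v = {0.5, 0.0, -0.5};         // initial velocities

        for (int k = 0; k < 2000; k++) {
            double[] u = new double[p.length];
            for (int i = 0; i < p.length; i++) {
                double rel = 0.0;
                for (int j = 0; j < p.length; j++) {
                    rel += a[i][j] * (p[j] - p[i]);
                }
                u[i] = -kv * v[i] + kp * rel;
            }
            for (int i = 0; i < p.length; i++) {  // forward-Euler discretization
                p[i] += h * v[i];
                v[i] += h * u[i];
            }
        }
        System.out.println(Arrays.toString(p));   // positions agree, velocities decay to 0
        System.out.println(Arrays.toString(v));
    }
}
```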


2018 ◽  
Vol 18 (2) ◽  
pp. 123-132 ◽  
Author(s):  
Reem Abdalla ◽  
Alok Mishra

This paper carries out a comparative analysis to determine the advantages and the stages of two agent-based methodologies. Multi-agent Systems Engineering (MaSE) is designed specifically as an agent-based, complete-lifecycle approach and is appropriate for understanding and developing complex open systems. The Agent Systems Engineering Methodology (ASEME) suggests a modular Multi-Agent System (MAS) development approach and uses the concept of intra-agent control. We also examine the strengths and weaknesses of these methodologies and the dependencies between their models and their processes. Both methodologies are applied to develop the Guardian Angel: Patient-Centered Health Information System (GA: PCHIS), an example of agent-based applications used to improve health care information systems.


2012 ◽  
Vol 433-440 ◽  
pp. 7357-7361 ◽  
Author(s):  
Wei Hong Yu

Open systems consist of autonomous and heterogeneous agents that interact in order to exchange or negotiate information, knowledge, and services. The principal challenge in agent communication research is to enable flexible and efficient communication among agents. Jade follows the FIPA standards, so that ideally Jade agents can interact with agents written in other languages and running on other platforms. From the point of view of message content, there are three ways to implement communication between agents. This paper investigates how to implement communication between Jade agents using serialized Java objects. Serialization is Java's built-in mechanism for transforming an object graph into a series of bytes, which can then be sent over the network or stored in a file. The research was carried out within the project supported by the Innovative Research Team in University of LiaoNing Province: a maritime search and rescue decision support system based on multi-agent technology. The results show that in distributed network programming, particularly for transmitting complex objects in multi-agent systems, object serialization is very flexible. However, it is also limited: it must store and retrieve the entire object graph at once, which makes it unsuitable for dealing with large amounts of data.
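A minimal sketch of the approach discussed here, assuming a standard JADE installation: a Serializable content class is placed into a message with ACLMessage.setContentObject() and recovered on the receiving side with getContentObject(). The agent names and the RescueTask content class are invented for illustration.

```java
import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.core.behaviours.OneShotBehaviour;
import jade.lang.acl.ACLMessage;
import jade.lang.acl.UnreadableException;

import java.io.IOException;
import java.io.Serializable;

/** Invented content class: every object reachable from it must also be Serializable. */
class RescueTask implements Serializable {
    String vesselId;
    double latitude, longitude;
    RescueTask(String vesselId, double latitude, double longitude) {
        this.vesselId = vesselId;
        this.latitude = latitude;
        this.longitude = longitude;
    }
}

/** Sender agent: serializes a Java object into the message content. */
class SenderAgent extends Agent {
    @Override
    protected void setup() {
        addBehaviour(new OneShotBehaviour() {
            @Override
            public void action() {
                ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
                msg.addReceiver(new AID("receiver", AID.ISLOCALNAME));
                try {
                    msg.setContentObject(new RescueTask("MV-107", 38.9, 121.6));
                } catch (IOException e) {
                    e.printStackTrace();   // content object could not be serialized
                }
                myAgent.send(msg);
            }
        });
    }
}

/** Receiver agent: deserializes the whole object graph from the message content. */
class ReceiverAgent extends Agent {
    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour() {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();
                if (msg != null) {
                    try {
                        RescueTask task = (RescueTask) msg.getContentObject();
                        System.out.println("Task for vessel " + task.vesselId);
                    } catch (UnreadableException e) {
                        e.printStackTrace();   // content could not be deserialized
                    }
                } else {
                    block();   // suspend the behaviour until the next message arrives
                }
            }
        });
    }
}
```

Note that getContentObject() rebuilds the entire object graph in memory, which is exactly the limitation the abstract points out for large amounts of data.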


2011 ◽  
Vol 26 (1) ◽  
pp. 53-59 ◽  
Author(s):  
Andrea Omicini ◽  
Mirko Viroli

Starting from the pioneering work on Linda and Gamma, coordination models and languages have gone through an amazing evolution process over the years. From closed to open systems, from parallel computing to multi-agent systems, and from database integration to knowledge-intensive environments, coordination abstractions and technologies have gained in relevance and power in those scenarios where complexity has become a key factor. In this paper, we outline and motivate 25 years of evolution of coordination models and languages, and discuss their potential perspectives in the future of artificial systems.

