Multi-agent Coordination for Data Gathering with Periodic Requests and Deliveries

Author(s):  
Yaroslav Marchukov ◽  
Luis Montano
2020 ◽  
Vol 16 (3) ◽  
pp. 255-269
Author(s):  
Enrico Bozzo ◽  
Paolo Vidoni ◽  
Massimo Franceschet

Abstract: We study the stability of a time-aware version of the popular Massey method, previously introduced in Franceschet, M., E. Bozzo, and P. Vidoni. 2017. “The Temporalized Massey’s Method.” Journal of Quantitative Analysis in Sports 13: 37–48, for rating teams in sport competitions. To this end, we embed the temporal Massey method in the theory of time-varying averaging algorithms, which are dynamic systems mainly used in control theory for multi-agent coordination. We also introduce a parametric family of Massey-type methods and show that the original and time-aware Massey versions are, in some sense, particular instances of it. Finally, we discuss the key features of this general family of rating procedures, focusing on inferential and predictive issues and on sensitivity to upsets and modifications of the schedule.
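As context for the time-aware variant, the classic (static) Massey system that the abstract builds on can be sketched in a few lines: solve M r = p, where M carries game counts and p carries cumulative point differentials, with the last row replaced by ones so ratings sum to zero. The function names and the three-team toy schedule below are illustrative assumptions, not taken from the paper:

```python
# Classic Massey ratings: M r = p, with each team's game count on the
# diagonal, -(games between i and j) off the diagonal, and p holding
# cumulative point differentials. The last row is replaced by ones
# (with p = 0) so the ratings sum to zero and the system is solvable.

def massey_ratings(n_teams, games):
    # games: list of (winner, loser, point_diff) index triples
    M = [[0.0] * n_teams for _ in range(n_teams)]
    p = [0.0] * n_teams
    for w, l, d in games:
        M[w][w] += 1; M[l][l] += 1
        M[w][l] -= 1; M[l][w] -= 1
        p[w] += d; p[l] -= d
    M[-1] = [1.0] * n_teams   # anchor: ratings sum to zero
    p[-1] = 0.0
    return gauss_solve(M, p)

def gauss_solve(A, b):
    # plain Gaussian elimination with partial pivoting
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

# Toy schedule, three teams: 0 beats 1 by 2, 1 beats 2 by 1, 0 beats 2 by 2.
ratings = massey_ratings(3, [(0, 1, 2), (1, 2, 1), (0, 2, 2)])
```

The temporalized version studied in the paper additionally discounts older games over time; viewed as a time-varying averaging process, that discounting is what the stability analysis concerns.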


2017 ◽  
Vol 13 (1) ◽  
pp. 155014771668484 ◽  
Author(s):  
Huthiafa Q Qadori ◽  
Zuriati A Zulkarnain ◽  
Zurina Mohd Hanapi ◽  
Shamala Subramaniam

Recently, wireless sensor networks have employed the mobile agent concept to reduce energy consumption and achieve effective data gathering. In mobile-agent-based data gathering, finding the optimal itinerary for the mobile agent is an essential step. However, single-agent itinerary planning suffers from two primary disadvantages as the network scale grows: task delay and a large mobile agent size. Multi-agent itinerary planning overcomes these drawbacks, yet finding the optimal number of distributed mobile agents, the grouping of source nodes, and the optimal itinerary of each mobile agent for simultaneous data gathering are still regarded as critical issues in wireless sensor networks. Therefore, in this article, the existing algorithms that have been identified in the literature to address these issues are reviewed. The review shows that most of the algorithms used a single parameter to determine the optimal number of mobile agents in multi-agent itinerary planning, without exploiting other parameters. More importantly, the review showed that these algorithms did not take into account the security of the data gathered by the mobile agent. Accordingly, we indicate the limitations of each proposed algorithm and provide new directions for future research.
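The source-node grouping and per-agent itinerary steps the review refers to can be illustrated with a naive sketch: cluster the source nodes (here with a toy k-means) and give each mobile agent a greedy nearest-neighbour visiting order through its group. All names, the random node layout, and the distance-only cost model are assumptions for illustration; the surveyed algorithms weigh further parameters such as energy, hop count, and agent payload:

```python
import math
import random

def group_sources(points, k, iters=20, seed=0):
    # naive k-means grouping of source nodes; one mobile agent per group
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        # recompute centroids, keeping the old center for an empty group
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

def itinerary(sink, group):
    # greedy nearest-neighbour visiting order, starting from the sink
    tour, remaining, cur = [], list(group), sink
    while remaining:
        cur = min(remaining, key=lambda p: math.dist(cur, p))
        remaining.remove(cur)
        tour.append(cur)
    return tour

rng = random.Random(1)
nodes = [(rng.random(), rng.random()) for _ in range(30)]
groups = group_sources(nodes, k=3)
tours = [itinerary((0.0, 0.0), g) for g in groups]
```

Choosing k itself (the number of mobile agents) is exactly the open issue the review highlights: here it is fixed by hand, whereas the surveyed algorithms try to derive it from network parameters.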


Author(s):  
Daxue Liu ◽  
Jun Wu ◽  
Xin Xu

Multi-agent reinforcement learning (MARL) provides a useful and flexible framework for multi-agent coordination in uncertain dynamic environments. However, generalization and scalability to large problem sizes, already problematic in single-agent RL, are even more formidable obstacles in MARL applications. In this paper, a new MARL method based on ordinal action selection and approximate policy iteration, called OAPI (Ordinal Approximate Policy Iteration), is presented to address the scalability of MARL algorithms in common-interest Markov games. In OAPI, an ordinal action selection and learning strategy is integrated with distributed approximate policy iteration, both to simplify the policy space and eliminate conflicts in multi-agent coordination, and to approximate near-optimal policies for Markov games with large state spaces. Based on the policy space simplified by ordinal action selection, OAPI implements distributed approximate policy iteration using online least-squares policy iteration (LSPI). This results in multi-agent coordination with good convergence properties and reduced computational complexity. Simulation results on a coordinated multi-robot navigation task illustrate the feasibility and effectiveness of the proposed approach.
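The ordinal action-selection idea can be sketched independently of the full OAPI algorithm: each agent keeps only the rank ordering of its action values, not their magnitudes, and acts on its top-ranked action, which shrinks the joint policy space the approximate policy iteration must search. This is an illustrative toy under made-up Q-values, not the authors' implementation:

```python
def ordinal_ranks(q_values):
    # map raw Q-values to ordinal ranks (0 = best); only the ordering
    # survives, discarding the magnitudes of the value estimates
    order = sorted(range(len(q_values)), key=lambda a: -q_values[a])
    ranks = [0] * len(q_values)
    for r, a in enumerate(order):
        ranks[a] = r
    return ranks

def joint_action(agent_qs):
    # each agent independently plays its rank-0 action; because every
    # agent uses the same deterministic ordinal rule, the joint action
    # is unambiguous, avoiding coordination conflicts
    return tuple(ranks.index(0) for ranks in (ordinal_ranks(q) for q in agent_qs))
```

In the paper this ordinal abstraction is combined with LSPI-based distributed policy evaluation; the sketch above only shows the action-selection layer.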

