Learning Global Network Topology Using Local Information for Multi-Agent Coordination

2021
Author(s): Robert A. Selje, Liang Sun
2020, Vol 16 (3), pp. 255–269
Author(s): Enrico Bozzo, Paolo Vidoni, Massimo Franceschet

Abstract: We study the stability of a time-aware version of the popular Massey method, previously introduced in Franceschet, M., E. Bozzo, and P. Vidoni. 2017. "The Temporalized Massey's Method." Journal of Quantitative Analysis in Sports 13: 37–48, for rating teams in sport competitions. To this end, we embed the temporal Massey method in the theory of time-varying averaging algorithms, which are dynamic systems mainly used in control theory for multi-agent coordination. We also introduce a parametric family of Massey-type methods and show that the original and time-aware Massey versions are, in some sense, particular instances of it. Finally, we discuss the key features of this general family of rating procedures, focusing on inferential and predictive issues and on sensitivity to upsets and modifications of the schedule.
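For context, here is a minimal sketch of the classic (static) Massey method, plus an illustrative per-game averaging update in the spirit of the time-varying averaging algorithms mentioned above. The function names, the tuple format for games, and the particular form and step size of `temporal_update` are assumptions for illustration; the exact temporalized recursion is the one given in the cited 2017 paper, not this one.

```python
import numpy as np

def massey_ratings(games, n_teams):
    """Classic (static) Massey ratings.

    games: list of (winner, loser, point_margin) index tuples.
    Solves the Massey least-squares system M r = p, with the usual
    constraint sum(r) = 0 replacing one redundant equation.
    Assumes the schedule graph is connected, so the system is solvable.
    """
    M = np.zeros((n_teams, n_teams))   # Massey matrix
    p = np.zeros(n_teams)              # cumulative point differentials
    for w, l, margin in games:
        M[w, w] += 1
        M[l, l] += 1
        M[w, l] -= 1
        M[l, w] -= 1
        p[w] += margin
        p[l] -= margin
    # M is singular (each row sums to zero); pin down a unique solution.
    M[-1, :] = 1.0
    p[-1] = 0.0
    return np.linalg.solve(M, p)

def temporal_update(r, w, l, margin, alpha=0.5):
    """Illustrative per-game averaging update (NOT the authors' exact
    recursion): each rating moves, by a convex combination with step
    alpha, toward the value implied by the opponent's current rating
    and the observed margin."""
    target_w = r[l] + margin   # rating consistent with the result
    target_l = r[w] - margin
    r[w] = (1 - alpha) * r[w] + alpha * target_w
    r[l] = (1 - alpha) * r[l] + alpha * target_l
    return r
```

The averaging form of `temporal_update` is what connects this rating procedure to the consensus-style dynamics studied in multi-agent control: each game pulls two "agents" toward mutual agreement on their rating difference.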


Author(s): Daxue Liu, Jun Wu, Xin Xu

Multi-agent reinforcement learning (MARL) provides a useful and flexible framework for multi-agent coordination in uncertain dynamic environments. However, the generalization ability and scalability of algorithms to large problem sizes, already problematic in single-agent RL, are an even more formidable obstacle in MARL applications. In this paper, a new MARL method based on ordinal action selection and approximate policy iteration, called Ordinal Approximate Policy Iteration (OAPI), is presented to address the scalability issue of MARL algorithms in common-interest Markov games. In OAPI, an ordinal action selection and learning strategy is integrated with distributed approximate policy iteration, both to simplify the policy space and eliminate conflicts in multi-agent coordination, and to approximate near-optimal policies for Markov games with large state spaces. Based on the policy space simplified by ordinal action selection, the OAPI algorithm implements distributed approximate policy iteration using online least-squares policy iteration (LSPI). This yields multi-agent coordination with good convergence properties and reduced computational complexity. Simulation results on a coordinated multi-robot navigation task illustrate the feasibility and effectiveness of the proposed approach.
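The LSPI building block the abstract refers to is standard. Below is a minimal single-agent sketch of LSPI's core loop (LSTD-Q policy evaluation alternating with greedy improvement), assuming a linear feature map `phi` and a discrete action set. The function names, the ridge regularizer, and the convergence tolerance are illustrative choices; OAPI's distributed, ordinal-action machinery is not shown.

```python
import numpy as np

def lstdq(samples, phi, policy, gamma, k):
    """One LSTD-Q evaluation step: fit Q(s, a) ~ w . phi(s, a) for the
    current policy from a batch of transitions.

    samples: iterable of (s, a, r, s_next) transitions
    phi:     feature map, phi(s, a) -> ndarray of shape (k,)
    policy:  current greedy policy, policy(s) -> action
    """
    A = 1e-6 * np.eye(k)        # small ridge term keeps A invertible
    b = np.zeros(k)
    for s, a, r, s_next in samples:
        f = phi(s, a)
        f_next = phi(s_next, policy(s_next))
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A, b)

def lspi(samples, phi, actions, gamma, k, n_iters=20):
    """Least-squares policy iteration: alternate LSTD-Q evaluation with
    greedy policy improvement until the weight vector stabilizes."""
    w = np.zeros(k)
    for _ in range(n_iters):
        # Greedy policy induced by the current weights.
        policy = lambda s, w=w: max(actions, key=lambda a: phi(s, a) @ w)
        w_new = lstdq(samples, phi, policy, gamma, k)
        if np.linalg.norm(w_new - w) < 1e-4:   # converged
            return w_new
        w = w_new
    return w
```

In the distributed setting described in the abstract, each agent would run an update of this kind over its own simplified (ordinal) action space, which is what keeps the joint policy space tractable as the number of agents grows.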

