Approximate Policy Iteration
Recently Published Documents

TOTAL DOCUMENTS: 39 (five years: 2)
H-INDEX: 7 (five years: 0)

Author(s): Daxue Liu, Jun Wu, Xin Xu

Multi-agent reinforcement learning (MARL) provides a useful and flexible framework for multi-agent coordination in uncertain dynamic environments. However, the generalization ability and scalability of algorithms to large problem sizes, already problematic in single-agent RL, is an even more formidable obstacle in MARL applications. In this paper, a new MARL method based on ordinal action selection and approximate policy iteration, called OAPI (Ordinal Approximate Policy Iteration), is presented to address the scalability issue of MARL algorithms in common-interest Markov Games. In OAPI, an ordinal action selection and learning strategy is integrated with distributed approximate policy iteration, not only to simplify the policy space and eliminate conflicts in multi-agent coordination, but also to approximate near-optimal policies for Markov Games with large state spaces. Based on the policy space simplified by ordinal action selection, the OAPI algorithm implements distributed approximate policy iteration using online least-squares policy iteration (LSPI). This yields multi-agent coordination with good convergence properties and reduced computational complexity. Simulation results on a coordinated multi-robot navigation task illustrate the feasibility and effectiveness of the proposed approach.
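To make the LSPI building block mentioned in the abstract concrete, the following is a minimal sketch of plain batch LSPI with a linear Q-function, not the online, distributed variant used in OAPI; the names `samples`, `phi`, and all parameters are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def lspi(samples, phi, n_features, n_actions, gamma=0.95, n_iters=20, tol=1e-6):
    """Batch least-squares policy iteration with Q(s, a) = phi(s, a)^T w.

    samples: list of (s, a, r, s_next) transitions collected beforehand.
    phi:     feature map phi(s, a) -> ndarray of shape (n_features,).
    """
    w = np.zeros(n_features)
    for _ in range(n_iters):
        # Greedy policy induced by the current weight vector.
        def policy(s):
            return max(range(n_actions), key=lambda a: phi(s, a) @ w)

        # Policy evaluation via LSTD-Q: accumulate A and b, then solve A w = b.
        A = np.zeros((n_features, n_features))
        b = np.zeros(n_features)
        for s, a, r, s_next in samples:
            f = phi(s, a)
            f_next = phi(s_next, policy(s_next))
            A += np.outer(f, f - gamma * f_next)
            b += r * f
        w_new = np.linalg.solve(A + 1e-6 * np.eye(n_features), b)  # small ridge term for stability

        # Policy improvement is implicit: the next iteration's greedy policy uses w_new.
        if np.linalg.norm(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return w
```

In this sketch, each outer iteration evaluates the current greedy policy from the fixed batch of transitions and then improves it implicitly through the updated weights; OAPI, as described above, additionally restricts the action space via ordinal action selection and distributes this evaluation step across agents.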


2015, Vol. 60 (11), pp. 2989-2993
Author(s): Amir-massoud Farahmand, Doina Precup, Andre M. S. Barreto, Mohammad Ghavamzadeh
