An efficient lightweight coordination model to multi-agent planning

Author(s): Leonardo Henrique Moreira, Célia Ghedini Ralha

2005, Vol. 36 (4), pp. 266-272
Author(s): Xu Rui, Cui Pingyuan, Xu Xiaofei

2006, pp. 301-325
Author(s): Michael Bowling, Rune Jensen, Manuela Veloso

2018, Vol. 32 (6), pp. 779-821
Author(s): Shlomi Maliah, Guy Shani, Roni Stern

Author(s): Yanlin Han, Piotr Gmytrasiewicz

This paper introduces the IPOMDP-net, a neural network architecture for multi-agent planning under partial observability. It embeds an interactive partially observable Markov decision process (I-POMDP) model together with a QMDP planning algorithm that solves that model, both realized as layers of the network. The IPOMDP-net is fully differentiable and allows end-to-end training. In the learning phase, we train an IPOMDP-net on various fixed and randomly generated environments in a reinforcement learning setting, assuming observable reinforcements and unknown (randomly initialized) model functions. In the planning phase, we test the trained network on new, unseen variants of the environments, using the learned model to plan without reinforcements. Empirical results show that our model-based IPOMDP-net outperforms a state-of-the-art model-free network and generalizes better to larger, unseen environments. Our approach provides a general neural computing architecture for multi-agent planning using I-POMDPs. It suggests that, in a multi-agent setting, having a model of other agents benefits our decision-making, resulting in a policy of higher quality and better generalizability.
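The QMDP component embedded in such a network can be pictured as ordinary value iteration over a learned tabular model, followed by belief-weighting of the resulting Q-values and a softmax over actions so that gradients reach the model parameters. The sketch below is a minimal illustration of that idea in JAX, not the authors' implementation; the tensor shapes, the function name qmdp_policy, and the toy random model are assumptions made for illustration, and the nested I-POMDP modeling of other agents that gives the IPOMDP-net its name is omitted.

```python
import jax
import jax.numpy as jnp


def qmdp_policy(T, R, belief, num_iters=20, gamma=0.95):
    """Differentiable QMDP planning over a learned tabular model.

    T:      (A, S, S) transition probabilities p(s' | s, a)
    R:      (A, S)    reward for taking action a in state s
    belief: (S,)      current belief over hidden states
    Returns a softmax action distribution, so gradients can flow back
    into T and R during end-to-end training.
    """
    V = jnp.zeros(R.shape[1])  # (S,) value estimate, initialized to zero
    for _ in range(num_iters):
        # Q(a, s) = R(a, s) + gamma * sum_s' T(a, s, s') * V(s')
        Q = R + gamma * jnp.einsum("ast,t->as", T, V)
        V = Q.max(axis=0)      # greedy backup over actions
    q_b = Q @ belief           # belief-weighted Q-values, shape (A,)
    return jax.nn.softmax(q_b)


# Toy usage with a randomly initialized model, mirroring the learning phase
key_t, key_r = jax.random.split(jax.random.PRNGKey(0))
S, A = 4, 3
T = jax.nn.softmax(jax.random.normal(key_t, (A, S, S)), axis=-1)  # random, row-normalized transitions
R = jax.random.normal(key_r, (A, S))
belief = jnp.ones(S) / S  # uniform initial belief
print(qmdp_policy(T, R, belief))
```

Using a softmax rather than an argmax at the end is what keeps the planner differentiable end to end; under this assumption, the same loss that trains the policy also updates the transition and reward tables.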

