Dif-MAML: Decentralized Multi-Agent Meta-Learning
Author(s): Mert Kayaalp, Stefan Vlaski, Ali H. Sayed

Author(s): Ye Hu, Mingzhe Chen, Walid Saad, H. Vincent Poor, Shuguang Cui

Author(s): Wenqian Liang, Ji Wang, Weidong Bao, Xiaomin Zhu, Qingyong Wang, ...

Abstract:
Multi-agent reinforcement learning (MARL) methods have shown superior performance in solving a variety of real-world problems by learning distinct policies for individual tasks. These approaches face problems when applied to the non-stationary real world: agents trained on specialized tasks cannot achieve satisfactory generalization performance across multiple tasks; agents have to learn and store a specialized policy for each individual task; and reliable task identities are hardly observable in practice. To address the challenge of continuously adapting to multiple tasks in MARL, we formalize the problem as a two-stage curriculum. Single-task policies are first learned with MARL approaches; we then develop a gradient-based Self-Adaptive Meta-Learning algorithm, SAML, that can not only distill the single-task policies into a unified policy but also enable the unified policy to continuously adapt to new incoming tasks. In addition, to validate continuous adaptation performance on complex tasks, we extend the widely adopted StarCraft benchmark SMAC and develop a new multi-task multi-agent StarCraft environment, Meta-SMAC, for testing various aspects of continuous adaptation methods. Our experiments with a population of agents show that our method enables significantly more efficient adaptation than reactive baselines across different scenarios.
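The abstract describes SAML as a gradient-based meta-learning algorithm, which places it in the MAML lineage of inner-loop adaptation plus outer-loop meta-updates. Since the paper's own code is not shown here, the following is only a minimal runnable sketch of that general style of update on toy sine-regression tasks; the task class, network shape, and learning rates (SineTask, inner_lr, meta_lr) are illustrative assumptions standing in for the single-task policies and the unified policy described above.

import torch

# Minimal MAML-style meta-update sketch (NOT the SAML algorithm itself).
# Toy tasks: fit y = a * sin(x + b) with random amplitude a and phase b.
class SineTask:
    def __init__(self):
        self.a = torch.rand(1) * 4.9 + 0.1
        self.b = torch.rand(1) * 3.1416
    def sample(self, n=10):
        x = torch.rand(n, 1) * 10.0 - 5.0
        return x, self.a * torch.sin(x + self.b)

def forward(params, x):
    # Two-layer MLP run from an explicit parameter list, so the same
    # network can be evaluated with shared or task-adapted weights.
    h = torch.relu(x @ params[0].t() + params[1])
    return h @ params[2].t() + params[3]

def mse(pred, y):
    return ((pred - y) ** 2).mean()

# Shared ("unified") parameters, updated only by the outer loop.
params = [torch.randn(40, 1) * 0.5, torch.zeros(40),
          torch.randn(1, 40) * 0.1, torch.zeros(1)]
for p in params:
    p.requires_grad_(True)

inner_lr = 0.01
meta_opt = torch.optim.Adam(params, lr=1e-3)

for step in range(1000):
    meta_opt.zero_grad()
    for _ in range(4):  # a small batch of tasks per meta-update
        task = SineTask()
        xs, ys = task.sample()  # support set: inner adaptation
        xq, yq = task.sample()  # query set: outer evaluation
        grads = torch.autograd.grad(mse(forward(params, xs), ys),
                                    params, create_graph=True)
        fast = [p - inner_lr * g for p, g in zip(params, grads)]
        # Backpropagating through the adapted ("fast") weights
        # accumulates the meta-gradient into the shared parameters.
        mse(forward(fast, xq), yq).backward()
    meta_opt.step()

The two loops loosely mirror the two-stage curriculum above: per-task specialization happens in the inner gradient step, while the outer optimizer pushes the shared parameters toward a point from which new tasks are reachable in a few adaptation steps.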

