Learning Automata-Based Multiagent Reinforcement Learning for Optimization of Cooperative Tasks

Author(s): Zhen Zhang ◽ Dongqing Wang ◽ Junwei Gao

2017 ◽ Vol 47 (6) ◽ pp. 1367-1379
Author(s): Zhen Zhang ◽ Dongbin Zhao ◽ Junwei Gao ◽ Dongqing Wang ◽ Yujie Dai

2021 ◽ Vol 18 (5) ◽ pp. 172988142110449
Author(s): Haolin Wu ◽ Hui Li ◽ Jianwei Zhang ◽ Zhuang Wang ◽ Jianeng Zhang

Multiagent reinforcement learning holds considerable promise for solving cooperative multiagent tasks. Unfortunately, when all agents share only a single global reward, the lazy-agent problem can arise. To cope with this problem, we propose an algorithm for generating individual intrinsic rewards, which introduces an intrinsic reward encoder to generate an individual intrinsic reward for each agent and uses hypernetworks as a decoder to help estimate the individual action values of value decomposition methods based on the generated intrinsic rewards. Experimental results on the StarCraft II micromanagement benchmark show that the proposed algorithm increases learning efficiency and improves policy performance.
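
As a rough illustration of this idea, the sketch below pairs a per-agent intrinsic reward encoder with a hypernetwork that conditions a value mixer on the generated rewards. This is a minimal PyTorch sketch of one possible reading of the abstract: the module names (IntrinsicRewardEncoder, HyperDecoder), layer sizes, and the exact way the intrinsic rewards condition the mixer are assumptions, not the authors' architecture.

```python
# Minimal sketch, assuming a PyTorch setup; names and sizes are illustrative.
import torch
import torch.nn as nn

class IntrinsicRewardEncoder(nn.Module):
    """Maps each agent's local observation to a scalar intrinsic reward."""
    def __init__(self, obs_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_agents, obs_dim) -> (batch, n_agents) intrinsic rewards
        return self.net(obs).squeeze(-1)

class HyperDecoder(nn.Module):
    """Hypernetwork that produces per-agent mixing weights from the
    generated intrinsic rewards (an assumed reading of the abstract)."""
    def __init__(self, n_agents: int, hidden_dim: int = 32):
        super().__init__()
        self.hyper_w = nn.Sequential(
            nn.Linear(n_agents, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_agents),
        )

    def forward(self, agent_qs: torch.Tensor, intrinsic_r: torch.Tensor) -> torch.Tensor:
        # agent_qs, intrinsic_r: (batch, n_agents)
        w = torch.abs(self.hyper_w(intrinsic_r))  # non-negative weights, QMIX-style
        return (w * agent_qs).sum(dim=-1)         # (batch,) joint value estimate

# Toy forward pass: batch of 8, 4 agents, 10-dim observations.
obs = torch.randn(8, 4, 10)
agent_qs = torch.randn(8, 4)
encoder = IntrinsicRewardEncoder(obs_dim=10)
decoder = HyperDecoder(n_agents=4)
r_int = encoder(obs)              # per-agent intrinsic rewards
q_tot = decoder(agent_qs, r_int)  # joint value shaped by intrinsic rewards
print(r_int.shape, q_tot.shape)   # torch.Size([8, 4]) torch.Size([8])
```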


2020
Author(s): Felipe Leno Da Silva ◽ Anna Helena Reali Costa

Reinforcement Learning (RL) is a powerful tool that has been used to solve increasingly complex tasks. RL operates through repeated trial-and-error interactions of the learning agent with the environment. However, this learning process is extremely slow, often requiring many interactions. In this thesis, we leverage previous knowledge to accelerate learning in multiagent RL problems. We propose reusing knowledge both from previous tasks and from other agents, and we introduce several flexible methods that enable each of these two types of knowledge reuse. This thesis takes important steps toward more flexible and broadly applicable multiagent transfer learning methods.
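
As a rough illustration of the two reuse routes named above (from previous tasks and from other agents), the sketch below shows a tabular Q-table jumpstart via a state-action mapping and action advising from another agent's Q-table. It is a minimal sketch under those assumptions; the function names (transfer_from_task, advise_action) and the epsilon-greedy detail are illustrative, not methods from the thesis.

```python
# Minimal tabular sketch; names and mechanics are illustrative assumptions.
import random
from collections import defaultdict

def transfer_from_task(source_q: dict, mapping) -> defaultdict:
    """Reuse knowledge from a previous task: initialize the target
    Q-table from a source Q-table through a state-action mapping."""
    target_q = defaultdict(float)
    for (s, a), value in source_q.items():
        target_q[mapping(s, a)] = value  # jumpstart instead of zeros
    return target_q

def advise_action(advisor_q: dict, state, actions, epsilon: float = 0.1):
    """Reuse knowledge from another agent: follow the advisor's greedy
    action, while the learner keeps epsilon-greedy exploration."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: advisor_q.get((state, a), 0.0))

# Toy usage: identity mapping between tasks, two actions.
source_q = {(0, "left"): 0.5, (0, "right"): 1.2}
q = transfer_from_task(source_q, mapping=lambda s, a: (s, a))
print(advise_action(q, state=0, actions=["left", "right"]))  # usually "right"
```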

