A Target-coupled Multiagent Reinforcement Learning Approach for Teams of Mobile Sensing Robots

Author(s):
Xin Wang, Chuanzhi Zang, Shuqing Xu, Peng Zeng

2006, Vol 5 (6), pp. 1006-1011
Author(s):
Zhou Pu-Cheng, Hong Bing-Rong, Huang Qing-Cheng, Javaid Khurshid

2008, Vol 17 (05), pp. 945-962

Author(s):
Ioannis Partalas, Ioannis Feneris, Ioannis Vlahavas

Reinforcement Learning is an attractive solution to the problem of coordinating a group of agents in a Multiagent System, owing to its robustness for learning in uncertain and unknown environments. This paper proposes a multiagent Reinforcement Learning approach that uses coordinated actions, which we call strategies, together with a fusing process to guide the agents. To evaluate the proposed approach, we conduct experiments in the Predator-Prey domain and compare it with other learning techniques. The results demonstrate the efficiency of the proposed approach.
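The abstract does not spell out the strategy set or the fusing rule, so the following is only a minimal sketch of the general idea: each agent keeps Q-values over a small shared set of coordinated strategies, and a fusing step combines the agents' estimates to pick the team's joint strategy. The strategy names, reward model, and summation-based fusing below are all assumptions for illustration, not the paper's method.

```python
import random

STRATEGIES = ["surround", "chase", "block"]  # hypothetical coordinated actions

class StrategyLearner:
    """One predator that keeps Q-values over the shared team strategies."""
    def __init__(self, alpha=0.1, gamma=0.9):
        self.q = {s: 0.0 for s in STRATEGIES}
        self.alpha, self.gamma = alpha, gamma

    def update(self, strategy, reward):
        # One-step Q-learning update toward the best follow-up strategy.
        best_next = max(self.q.values())
        self.q[strategy] += self.alpha * (
            reward + self.gamma * best_next - self.q[strategy])

def fuse(agents):
    """Fusing step (assumed): sum each agent's Q-values per strategy
    and pick the jointly best one."""
    totals = {s: sum(a.q[s] for a in agents) for s in STRATEGIES}
    return max(totals, key=totals.get)

random.seed(0)
team = [StrategyLearner() for _ in range(3)]
for _ in range(200):
    # Epsilon-greedy over fused strategies: mostly exploit, sometimes explore.
    choice = fuse(team) if random.random() > 0.2 else random.choice(STRATEGIES)
    # Toy reward model: "surround" is the most effective strategy on average.
    reward = {"surround": 1.0, "chase": 0.3, "block": 0.1}[choice] \
        + random.gauss(0, 0.1)
    for agent in team:
        agent.update(choice, reward)

print(fuse(team))  # the team typically settles on the highest-reward strategy
```

The fusing rule here (summing Q-values across agents) is just one simple way to turn individual preferences into a joint decision; the paper's actual mechanism may differ.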


Author(s):  
Sachiyo Arai

The multiagent reinforcement learning approach is now widely applied to make agents behave rationally in a multiagent system. However, due to the complex interactions in a multiagent domain, it is difficult to decide each agent's fair share of the reward for contributing to goal achievement. This chapter reviews the reward shaping problem, which defines when and what amount of reward should be given to agents. We employ keepaway soccer as a typical multiagent continuing task that requires skilled collaboration between the agents. Shaping the reward structure for this domain is difficult for the following reasons: (i) a continuing task such as keepaway soccer has no explicit goal, so it is hard to determine when a reward should be given to the agents; (ii) in such a multiagent cooperative task, it is difficult to share the reward fairly according to each agent's contribution. Through experiments, we found that reward shaping has a major effect on an agent's behavior.
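The chapter's own shaping scheme for keepaway is not given in this abstract, but the "when and how much" question it raises can be illustrated with the standard potential-based shaping technique, where an extra reward F(s, s') = γ·Φ(s') − Φ(s) is added to the sparse base reward; this hands out intermediate feedback without changing which policy is optimal. The chain task, potential function, and hyperparameters below are assumptions chosen only to show the effect.

```python
import random

N = 10          # states 0..9 on a chain; reaching state 9 is the "goal"
GAMMA = 0.95

def phi(s):
    # Assumed potential: progress toward the goal state.
    return float(s)

def train(shaped, episodes=50, alpha=0.5, eps=0.1, seed=1):
    """Tabular Q-learning on the chain; returns total steps over all episodes
    (fewer steps means the agent found the goal-directed policy sooner)."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N)]  # actions: 0 = left, 1 = right
    total = 0
    for _ in range(episodes):
        s, steps = 0, 0
        while s != N - 1 and steps < 500:
            # Epsilon-greedy with random tie-breaking.
            if rng.random() < eps or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
            r = 1.0 if s2 == N - 1 else 0.0       # sparse base reward
            if shaped:
                r += GAMMA * phi(s2) - phi(s)     # potential-based shaping term
            q[s][a] += alpha * (r + GAMMA * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1
        total += steps
    return total

# With shaping, progress toward the goal is rewarded at every step,
# so learning is much faster than with the sparse reward alone.
print(train(shaped=True), train(shaped=False))
```

This sketch only addresses the "when" side of the problem (dense intermediate rewards); the multiagent "fair share" side that the chapter emphasizes would additionally require some form of credit assignment across agents.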

