A Hierarchical Reinforcement Learning Based Approach for Multi-robot Cooperation in Unknown Environments

Author(s): Yifan Cai, Simon X. Yang, Xin Xu, Gauri S. Mittal
2011, Vol. 216, pp. 75-80
Author(s): Chang An Liu, Fei Liu, Chun Yang Liu, Hua Wu

To address the curse-of-dimensionality problem in multi-agent reinforcement learning, this paper presents a learning method based on k-means. The environmental state is represented by key state factors, and state-space explosion is avoided by grouping states into clusters with k-means. The learning rate is improved by assigning new states to existing clusters, together with the corresponding strategies. Experimental results on multi-robot cooperation show that, compared with traditional Q-learning, the proposed scheme efficiently improves the team's learning ability and enhances cooperation efficiency.
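The core idea in the abstract, clustering raw states with k-means and running tabular Q-learning over the cluster indices so the Q-table stays small, can be sketched generically. This is a minimal illustration under assumed details, not the paper's implementation: the key-state-factor extraction, the cluster-to-strategy assignment, and the multi-agent coordination are omitted, and the function names (`kmeans`, `nearest_cluster`, `q_update`) are illustrative.

```python
import numpy as np

def kmeans(states, k, iters=50, seed=0):
    """Plain k-means: group raw state vectors into k abstract states."""
    rng = np.random.default_rng(seed)
    centers = states[rng.choice(len(states), k, replace=False)]
    for _ in range(iters):
        # assign each state to its nearest center, then recompute means
        labels = np.argmin(((states[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = states[labels == j].mean(axis=0)
    return centers

def nearest_cluster(state, centers):
    """Map a (possibly unseen) state to its cluster index."""
    return int(np.argmin(((centers - state) ** 2).sum(-1)))

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update, but s and s_next are cluster
    indices, so Q has k rows regardless of the raw state-space size."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q
```

A transition `(state, action, reward, next_state)` is first mapped through `nearest_cluster` on both states, then fed to `q_update`; this is what keeps the table from exploding as the environment grows.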


2021, pp. 115795
Author(s): Hongwei Tang, Wei Sun, Anping Lin, Min Xue, Xing Zhang

Author(s): Yifan Cai, Simon X. Yang

Cooperative exploration in unknown environments is fundamentally important in robotics, where real-time path planning and proper task-allocation strategies are the key issues for multi-robot cooperation. This paper proposes a PSO-based approach, combined with a fuzzy obstacle-avoidance module, for cooperative robots to accomplish target-searching and foraging tasks in unknown environments. The proposed cooperation strategy uses a potential-field function as the fitness function of the PSO, while the fuzzy obstacle-avoidance module improves the smoothness of the robot trajectories. Simulation studies cover several scenarios with and without the fuzzy module, and the comparative results demonstrate the improvement in trajectory smoothness.
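The central mechanism described here, using a potential-field function as the PSO fitness, can be sketched in generic form. This is a hedged illustration, not the authors' method: the quadratic attractive term, the inverse-distance repulsive term, the gains `k_att`, `k_rep`, and influence radius `d0` are all assumed values, and the fuzzy obstacle-avoidance module is omitted entirely.

```python
import numpy as np

def potential(p, target, obstacles, k_att=1.0, k_rep=100.0, d0=1.5):
    """Artificial potential field: attractive pull toward the target plus
    repulsive pushes from obstacles inside influence distance d0.
    Lower potential = better position, so PSO minimizes it."""
    u = k_att * float(np.sum((p - target) ** 2))
    for obs in obstacles:
        d = float(np.linalg.norm(p - obs))
        if d < d0:
            u += k_rep * (1.0 / max(d, 1e-6) - 1.0 / d0) ** 2
    return u

def pso_step(pos, vel, pbest, gbest, fitness, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO iteration over a swarm of candidate positions,
    with the potential field serving as the fitness function."""
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    # keep the best position each particle has seen, and the swarm best
    f_new = np.array([fitness(p) for p in pos])
    f_old = np.array([fitness(p) for p in pbest])
    improved = f_new < f_old
    pbest = np.where(improved[:, None], pos, pbest)
    gbest = pbest[np.argmin([fitness(p) for p in pbest])]
    return pos, vel, pbest, gbest
```

Iterating `pso_step` drives the swarm best `gbest` toward low-potential regions, i.e. near the target but away from obstacles; by construction the fitness of `gbest` never increases between iterations.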

