Multi-robot multi-target dynamic path planning using artificial bee colony and evolutionary programming in unknown environment

2018, Vol 11 (2), pp. 171-186
Author(s): Abdul Qadir Faridi, Sanjeev Sharma, Anupam Shukla, Ritu Tiwari, Joydip Dhar
2015, Vol 30, pp. 319-328
Author(s): Marco A. Contreras-Cruz, Victor Ayala-Ramirez, Uriel H. Hernandez-Belmonte

2018, Vol 2018, pp. 1-10
Author(s): Xiaoyun Lei, Zhian Zhang, Peifang Dong

Dynamic path planning in unknown environments has long been a challenge for mobile robots. In this paper, we apply double deep Q-network (DDQN) reinforcement learning, proposed by DeepMind in 2016, to dynamic path planning in unknown environments. The reward and punishment function and the training procedure are designed to counter the instability of the training stage and the sparsity of the environment's state space. At different training stages, we dynamically adjust the starting and target positions; as the neural network is updated and the greedy-action probability increases, the local space searched by the agent expands. The Pygame module in Python is used to build the dynamic environments. Taking the lidar signal and the local target position as inputs, convolutional neural networks (CNNs) generalize over the environment state, and the Q-learning algorithm strengthens the agent's dynamic obstacle avoidance and local planning. The results show that, after training in different dynamic environments and testing in a new one, the agent successfully reaches the local target position in an unknown dynamic environment.
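The core of the DDQN update the abstract refers to is the decoupling of action selection from action evaluation: the online network picks the greedy next action, while the target network scores it. Below is a minimal sketch of that target computation, assuming a toy linear Q-approximator in NumPy; the paper's actual networks are CNNs over lidar readings and the local target position, and the function and variable names here (`double_dqn_target`, `W_online`, `W_target`) are illustrative, not taken from the paper.

```python
import numpy as np

def q_values(W, state):
    """Q(s, .) for a toy linear approximator: one row of W per action."""
    return W @ state

def double_dqn_target(W_online, W_target, reward, next_state, gamma, done):
    """Double DQN bootstrap target (van Hasselt et al., 2016):
    select the greedy action with the online net,
    evaluate that action with the target net."""
    if done:
        return reward
    a_star = int(np.argmax(q_values(W_online, next_state)))
    return reward + gamma * q_values(W_target, next_state)[a_star]

# Tiny demo: 3 discrete actions, 4-dimensional state.
rng = np.random.default_rng(0)
W_online = rng.normal(size=(3, 4))  # stands in for the trained CNN
W_target = rng.normal(size=(3, 4))  # periodically-copied target network
s_next = rng.normal(size=4)
y = double_dqn_target(W_online, W_target, reward=1.0,
                      next_state=s_next, gamma=0.99, done=False)
```

This separation is what reduces the overestimation bias of plain Q-learning: the max over noisy online estimates no longer both chooses and values the action, which is one reason DDQN trains more stably in sparse, shifting environments like the dynamic maps described above.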

