Mobile Robot Path Planning using Q-Learning with Guided Distance

2018 ◽  
Vol 7 (4.27) ◽  
pp. 57
Author(s):  
Ee Soong Low ◽  
Pauline Ong ◽  
Cheng Yee Low

In path planning for mobile robots, the classical Q-learning algorithm requires a high iteration count and a long time to achieve convergence. This is because the beginning stage of classical Q-learning for path planning consists mostly of exploration, involving random direction decision making. This paper proposes adding a distance aspect to the direction decision making in Q-learning. This feature is used to reduce the time taken for the Q-learning to fully converge. In addition, random direction decision making is added and activated when the mobile robot gets trapped in a local optimum, enabling it to escape from the local optimal trap. The results show that the time taken for the improved Q-learning with distance guiding to converge is longer than that of classical Q-learning; however, the total number of steps used is lower than with classical Q-learning.
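
As a rough illustration of the idea (not the paper's implementation), the sketch below runs tabular Q-learning on a small obstacle-free grid where the greedy action choice is biased by the Manhattan distance to the goal, and a random action is taken when the robot revisits a state, a crude stand-in for the local-optimum trap detection described above. The grid size, reward values, and the BETA weight are illustrative assumptions.

```python
import random
import numpy as np

# Tabular Q-learning on a small obstacle-free grid with a distance-guided
# greedy action choice. Grid size, rewards, and BETA are illustrative.

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up
GRID, GOAL = 20, (19, 19)
ALPHA, GAMMA, EPS, BETA = 0.1, 0.9, 0.1, 0.05

Q = np.zeros((GRID, GRID, len(ACTIONS)))

def valid(s):
    return 0 <= s[0] < GRID and 0 <= s[1] < GRID

def step(s, a):
    nxt = (s[0] + a[0], s[1] + a[1])
    if not valid(nxt):
        return s, -1.0                          # penalty for leaving the grid
    return nxt, (100.0 if nxt == GOAL else -0.1)

def select_action(s, trapped):
    # Random move when trapped in a local optimum (or for exploration);
    # otherwise a greedy choice biased by the remaining distance to the goal.
    if trapped or random.random() < EPS:
        return random.randrange(len(ACTIONS))
    scores = []
    for i, a in enumerate(ACTIONS):
        nxt = (s[0] + a[0], s[1] + a[1])
        nxt = nxt if valid(nxt) else s
        dist = abs(GOAL[0] - nxt[0]) + abs(GOAL[1] - nxt[1])
        scores.append(Q[s][i] - BETA * dist)    # distance-guiding term
    return int(np.argmax(scores))

for episode in range(300):
    s, visited = (0, 0), set()
    for _ in range(1000):                       # cap episode length
        trapped = s in visited                  # crude trap detection
        visited.add(s)
        a = select_action(s, trapped)
        nxt, r = step(s, ACTIONS[a])
        Q[s][a] += ALPHA * (r + GAMMA * Q[nxt].max() - Q[s][a])
        s = nxt
        if s == GOAL:
            break
```

The distance term only biases action selection; the Q-update itself is the standard one, which is why the learned policy can still differ from a pure shortest-distance heuristic once obstacles or penalties are involved.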

2019 ◽  
Vol 9 (15) ◽  
pp. 3057 ◽  
Author(s):  
Hyansu Bae ◽  
Gidong Kim ◽  
Jonguk Kim ◽  
Dianwei Qian ◽  
Sukgyu Lee

This paper proposes a novel multi-robot path planning algorithm using deep Q-learning combined with a CNN (convolutional neural network). In conventional path planning algorithms, robots need to search a comparatively wide area for navigation and move in a predesigned formation under a given environment. Each robot in the multi-robot system is inherently required to navigate independently while collaborating with the other robots for efficient performance. In addition, the robot collaboration scheme highly depends on the conditions of each robot, such as its position and velocity. However, the conventional method does not actively cope with variable situations, since each robot has difficulty recognizing whether a moving robot around it is an obstacle or a cooperative robot. To compensate for these shortcomings, we apply deep Q-learning to strengthen the learning algorithm, combined with a CNN that is needed to analyze the situation efficiently. The CNN analyzes the situation using image information on the environment, and the robot navigates based on the situation analyzed through deep Q-learning. The simulation results show that the proposed algorithm yields flexible and efficient movement of the robots compared with conventional methods under various environments.
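
As a hedged sketch of this kind of combination (not the paper's actual design), the code below defines a small convolutional Q-network in PyTorch that maps an image-like observation of the environment to per-action Q-values, with epsilon-greedy action selection and a one-step TD update against a target network. The three input channels, layer sizes, action count, and hyperparameters are illustrative assumptions; the paper's architecture and multi-robot coordination details are not reproduced here.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of a CNN-based deep Q-network for one robot in the team.
# Input is assumed to be a (3, 32, 32) image-like observation, e.g.
# own position, other robots, and obstacles; all sizes are illustrative.

class DQN(nn.Module):
    def __init__(self, n_actions=5, grid=32):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(32 * grid * grid, 128)
        self.fc2 = nn.Linear(128, n_actions)    # one Q-value per action

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.fc1(x.flatten(start_dim=1)))
        return self.fc2(x)

policy_net = DQN()
target_net = DQN()
target_net.load_state_dict(policy_net.state_dict())  # sync periodically in practice
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-4)

def select_action(state, eps=0.1, n_actions=5):
    # Epsilon-greedy over the CNN's Q-value estimates.
    if random.random() < eps:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(policy_net(state.unsqueeze(0)).argmax())

def td_update(state, action, reward, next_state, gamma=0.99):
    # One-step TD update on a single transition (a replay buffer would
    # normally be used instead of individual transitions).
    q = policy_net(state.unsqueeze(0))[0, action]
    with torch.no_grad():
        target = reward + gamma * target_net(next_state.unsqueeze(0)).max()
    loss = F.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a multi-robot setting, each robot would run its own copy of this policy on its local observation, with the other robots appearing in one of the input channels so that the network can learn to treat them differently from static obstacles.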


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 47824-47844 ◽  
Author(s):  
Meng Zhao ◽  
Hui Lu ◽  
Siyi Yang ◽  
Fengjuan Guo

2018 ◽  
pp. 15
Author(s):  
Muhammed E. Abd Alkhalec ◽  
Yousif Jalil Awreed ◽  
Karrar Abdulkhabeer Ali
