A modified Q-learning algorithm for robot path planning in a digital twin assembly system

Author(s):  
Xiaowei Guo ◽  
Gongzhuang Peng ◽  
Yingying Meng
2019 ◽  
Vol 9 (15) ◽  
pp. 3057 ◽  
Author(s):  
Hyansu Bae ◽  
Gidong Kim ◽  
Jonguk Kim ◽  
Dianwei Qian ◽  
Sukgyu Lee

This paper proposes a novel multi-robot path planning algorithm using deep Q-learning combined with a convolutional neural network (CNN). In conventional path planning algorithms, robots need to search a comparatively wide area for navigation and move in a predesigned formation in a given environment. Each robot in a multi-robot system is inherently required to navigate independently while collaborating with other robots for efficient performance. In addition, the robot collaboration scheme depends heavily on the condition of each robot, such as its position and velocity. However, conventional methods do not actively cope with variable situations, since each robot has difficulty recognizing whether a moving robot nearby is an obstacle or a cooperative robot. To compensate for these shortcomings, we apply deep Q-learning combined with a CNN, which is needed to analyze the situation efficiently. The CNN analyzes the exact situation using image information about the environment, and the robot navigates based on the situation analyzed through deep Q-learning. Simulation results show that the proposed algorithm yields more flexible and efficient robot movement than conventional methods across various environments.
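The deep Q-learning loop described above can be sketched in a minimal form. The paper pairs a CNN with deep Q-learning to interpret image observations; to keep this sketch dependency-free, a linear network over a flattened one-hot "image" of a small grid stands in for the CNN, and the environment, rewards, and hyperparameters are illustrative assumptions rather than the authors' actual setup.

```python
import random
import numpy as np

SIZE = 5
GOAL = (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def observe(state):
    """Image-like observation: a one-hot grid marking the robot's position."""
    img = np.zeros((SIZE, SIZE), dtype=np.float32)
    img[state] = 1.0
    return img.ravel()

def step(state, action):
    nxt = (state[0] + ACTIONS[action][0], state[1] + ACTIONS[action][1])
    if not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE):
        return state, -1.0, False          # off-grid: penalty, stay put
    if nxt == GOAL:
        return nxt, 10.0, True             # terminal reward at the goal
    return nxt, -0.1, False                # small cost per step

def train_dqn(episodes=400, lr=0.1, gamma=0.9, eps=0.2, batch=16):
    rng = random.Random(0)
    W = np.zeros((4, SIZE * SIZE), dtype=np.float32)  # linear stand-in for the CNN
    replay = []                                       # experience replay buffer
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(60):
            obs = observe(state)
            if rng.random() < eps:                    # epsilon-greedy exploration
                a = rng.randrange(4)
            else:
                a = int(np.argmax(W @ obs))
            nxt, r, done = step(state, a)
            replay.append((obs, a, r, observe(nxt), done))
            if len(replay) > 2000:
                replay.pop(0)
            # replay a minibatch: one TD update per sampled transition
            for o, ai, ri, o2, d in rng.sample(replay, min(batch, len(replay))):
                target = ri if d else ri + gamma * float(np.max(W @ o2))
                W[ai] += lr * (target - float(W[ai] @ o)) * o
            state = nxt
            if done:
                break
    return W

def greedy_rollout(W, limit=30):
    state, path = (0, 0), [(0, 0)]
    for _ in range(limit):
        a = int(np.argmax(W @ observe(state)))
        state, _, done = step(state, a)
        path.append(state)
        if done:
            break
    return path
```

Because the observation is one-hot, the linear layer behaves like a Q-table; swapping in a CNN over richer image input is what distinguishes the approach described in the abstract.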


2021 ◽  
Author(s):  
Xiaowei Guo

Abstract Product assembly is an important stage in complex product manufacturing. How to intelligently plan the assembly process based on dynamic product and environment information has become a pressing issue that needs to be addressed. For this reason, this research constructs a digital twin assembly system, including virtual-real interactive feedback, data fusion analysis, and decision-making iterative optimization modules. In the virtual space, a modified Q-learning algorithm is proposed to solve the path planning problem in product assembly. The proposed algorithm accelerates convergence by adding a dynamic reward function, optimizes the initial Q-table by introducing knowledge and experience through the case-based reasoning (CBR) algorithm, and prevents entry into trapped areas through an obstacle-avoidance method. Finally, the six-joint robot UR10 is taken as an example to verify the performance of the algorithm in a three-dimensional pathfinding space. The experimental results show that the modified Q-learning algorithm's pathfinding performance is significantly better than that of the original Q-learning algorithm.
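Two of the modifications described above can be illustrated in a small sketch: a dynamic reward that scales with progress toward the goal, and a Q-table pre-initialized from prior knowledge (here, a simple distance heuristic standing in for the paper's case-based reasoning step). The grid, obstacle layout, and reward values are illustrative assumptions, not the paper's actual parameters.

```python
import random

SIZE = 6
GOAL = (5, 5)
OBSTACLES = {(2, 2), (3, 2), (2, 3)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# CBR-style initialization: seed Q-values with a distance-based prior so the
# agent starts from coarse knowledge instead of an all-zero table.
Q = {}
for x in range(SIZE):
    for y in range(SIZE):
        for i, (dx, dy) in enumerate(ACTIONS):
            Q[((x, y), i)] = -0.1 * manhattan((x + dx, y + dy), GOAL)

def step(state, action):
    dx, dy = ACTIONS[action]
    nxt = (state[0] + dx, state[1] + dy)
    if not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE) or nxt in OBSTACLES:
        return state, -1.0, False          # blocked: penalty, stay in place
    if nxt == GOAL:
        return nxt, 10.0, True             # terminal reward at the goal
    # Dynamic reward: positive when the move reduces distance to the goal.
    return nxt, 0.5 * (manhattan(state, GOAL) - manhattan(nxt, GOAL)), False

def train(episodes=300, alpha=0.5, gamma=0.9, eps=0.2):
    random.seed(0)
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(100):
            if random.random() < eps:
                action = random.randrange(4)
            else:
                action = max(range(4), key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(Q[(nxt, a)] for a in range(4))
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = nxt
            if done:
                break

def greedy_path(limit=50):
    state, path = (0, 0), [(0, 0)]
    for _ in range(limit):
        action = max(range(4), key=lambda a: Q[(state, a)])
        state, _, done = step(state, action)
        path.append(state)
        if done:
            break
    return path
```

The seeded prior means greedy action values already point roughly toward the goal before any learning, which is the convergence-speedup effect the abstract attributes to CBR-based initialization.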


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 47824-47844 ◽  
Author(s):  
Meng Zhao ◽  
Hui Lu ◽  
Siyi Yang ◽  
Fengjuan Guo

2018 ◽  
pp. 15
Author(s):  
Muhammed E. Abd Alkhalec ◽  
Yousif Jalil Awreed ◽  
Karrar Abdulkhabeer Ali

2018 ◽  
Vol 7 (4.27) ◽  
pp. 57
Author(s):  
Ee Soong Low ◽  
Pauline Ong ◽  
Cheng Yee Low

In path planning for mobile robots, the classical Q-learning algorithm requires a high iteration count and a long time to achieve convergence. This is because the beginning stage of classical Q-learning for path planning consists mostly of exploration, involving random direction decisions. This paper proposes adding a distance aspect to the direction decision making in Q-learning. This feature is used to reduce the time taken for Q-learning to converge fully. Meanwhile, random direction decision making is retained and activated when the mobile robot gets trapped in a local optimum, enabling the robot to escape from the trap. The results show that the time taken for the improved Q-learning with distance guiding to converge is longer than that of classical Q-learning; however, the total number of steps used is lower than in classical Q-learning.
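The distance-guided decision making described above can be sketched as follows: during exploration the action choice is biased toward the move that shrinks the Euclidean distance to the goal, and purely random moves are used only as an escape when the robot appears stuck. The grid size, step cost, and stuck-detection rule are illustrative assumptions, not the authors' exact setup.

```python
import math
import random

SIZE = 8
GOAL = (7, 7)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def valid(s):
    return 0 <= s[0] < SIZE and 0 <= s[1] < SIZE

def choose_action(Q, state, eps, stuck):
    if stuck:                      # trapped: fall back to random exploration
        return random.randrange(4)
    if random.random() < eps:      # explore, guided by distance to the goal
        return min(range(4),
                   key=lambda a: dist((state[0] + ACTIONS[a][0],
                                       state[1] + ACTIONS[a][1]), GOAL))
    return max(range(4), key=lambda a: Q[(state, a)])  # exploit learned values

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.3):
    random.seed(1)
    Q = {((x, y), a): 0.0
         for x in range(SIZE) for y in range(SIZE) for a in range(4)}
    steps_per_episode = []
    for _ in range(episodes):
        state, visits, steps = (0, 0), {}, 0
        while state != GOAL and steps < 200:
            visits[state] = visits.get(state, 0) + 1
            stuck = visits[state] > 3   # frequent revisits: treat as trapped
            a = choose_action(Q, state, eps, stuck)
            nxt = (state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1])
            if not valid(nxt):
                nxt, reward = state, -1.0   # off-grid: penalty, stay put
            elif nxt == GOAL:
                reward = 10.0
            else:
                reward = -0.1               # small cost per step
            best = max(Q[(nxt, b)] for b in range(4))
            Q[(state, a)] += alpha * (reward + gamma * best - Q[(state, a)])
            state, steps = nxt, steps + 1
        steps_per_episode.append(steps)
    return Q, steps_per_episode
```

Compared with uniformly random exploration, the guided choice makes even the exploratory moves productive, which is the step-count reduction the abstract reports; the random fallback preserves the ability to leave local optima.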


Author(s):  
Conghao Jin ◽  
Yisheng Lu ◽  
Ruoting Liu ◽  
Jingwen Sun

2021 ◽  
Author(s):  
Zeli Yang ◽  
Haofeng Lu ◽  
Jiaqi Wang ◽  
Yang Li ◽  
Yifan Wang
