Motion Planning for Industrial Robots using Reinforcement Learning

Procedia CIRP ◽  
2017 ◽  
Vol 63 ◽  
pp. 107-112 ◽  
Author(s):  
Richard Meyes ◽  
Hasan Tercan ◽  
Simon Roggendorf ◽  
Thomas Thiele ◽  
Christian Büscher ◽  
...  
2021 ◽  
Author(s):  
Qiang Li ◽  
Jun Nie ◽  
Haixia Wang ◽  
Xiao Lu ◽  
Shibin Song

2014 ◽  
Vol 7 ◽  
Author(s):  
Mikhail Frank ◽  
Jürgen Leitner ◽  
Marijn Stollenga ◽  
Alexander Förster ◽  
Jürgen Schmidhuber

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Ning Yu ◽  
Lin Nan ◽  
Tao Ku

Purpose – Making accurate action decisions from visual information is an important research direction for industrial robots. The purpose of this paper is to design a highly optimized robot hand-eye coordination model that improves the robot's on-site decision-making ability.

Design/methodology/approach – Combining an inverse reinforcement learning (IRL) algorithm with a generative adversarial network effectively reduces the dependence on expert samples, and the robot can attain decision-making performance whose degree of optimization is no lower than, and in some cases higher than, that of the expert demonstrations.

Findings – The performance of the proposed model is verified in a simulation environment and in a real scene. By monitoring the reward distribution of the learned reward function and the trajectory of the robot, the proposed model is compared with other existing methods. The experimental results show that the proposed model achieves better decision-making performance when expert data are scarce.

Originality/value – A robot hand-eye cooperation model based on improved IRL is proposed and verified. Empirical investigations in real experiments reveal that, overall, the proposed approach improves real-world efficiency by more than 10% compared with alternative hand-eye cooperation methods.
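The abstract gives no implementation details, but the core idea it describes, adversarial IRL in the style of GAIL, can be sketched as a discriminator that separates expert (state, action) pairs from the policy's own rollouts and whose output doubles as a learned reward. This is what reduces the dependence on expert samples: the reward generalizes beyond the demonstrations. A minimal sketch in PyTorch; the `Discriminator` class, the `discriminator_step` helper, and all network sizes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Scores (state, action) pairs: high logits = looks like expert data.
    The score is reused as a learned reward for the policy."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

    def reward(self, state, action):
        # GAIL-style reward r = -log(1 - D(s, a)), written with logsigmoid
        # for numerical stability; rewards pairs the discriminator
        # mistakes for expert data.
        with torch.no_grad():
            return -F.logsigmoid(-self.forward(state, action))

def discriminator_step(disc, opt, expert_s, expert_a, policy_s, policy_a):
    """One adversarial update: expert pairs toward label 1, policy pairs toward 0."""
    bce = nn.BCEWithLogitsLoss()
    expert_logits = disc(expert_s, expert_a)
    policy_logits = disc(policy_s, policy_a)
    loss = bce(expert_logits, torch.ones_like(expert_logits)) + \
           bce(policy_logits, torch.zeros_like(policy_logits))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In a full training loop, the policy (any RL algorithm) would be optimized against `disc.reward(...)` while `discriminator_step` is called on fresh rollouts, alternating the two updates as in standard adversarial training.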


2021 ◽  
pp. 318-329 ◽  
Author(s):  
Nikodem Pankiewicz ◽  
Tomasz Wrona ◽  
Wojciech Turlej ◽  
Mateusz Orłowski

2019 ◽  
Vol 38 ◽  
pp. 1508-1515 ◽  
Author(s):  
N. Arana-Arexolaleiba ◽  
N. Urrestilla-Anguiozar ◽  
D. Chrysostomou ◽  
S. Bøgh

Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1890 ◽  
Author(s):  
Zijian Hu ◽  
Kaifang Wan ◽  
Xiaoguang Gao ◽  
Yiwei Zhai ◽  
Qianglong Wang

Autonomous motion planning (AMP) for unmanned aerial vehicles (UAVs) aims to let a UAV fly safely to its target without human intervention. Recently, several deep reinforcement learning (DRL) methods have been applied to the AMP problem in simplified environments with good results. This paper proposes a multiple experience pools (MEPs) framework that leverages human expert experiences to speed up DRL training. Building on the deep deterministic policy gradient (DDPG) algorithm, an MEP–DDPG algorithm was designed that uses model predictive control and simulated annealing to generate the expert experiences. When the algorithm was applied to a complex, unknown simulation environment constructed from the parameters of a real UAV, training results showed a performance improvement exceeding 20% over the state-of-the-art DDPG. Experimental testing further indicates that UAVs trained with MEP–DDPG can stably complete a variety of tasks in complex, unknown environments.
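The multiple-experience-pools idea amounts to maintaining separate replay buffers, one pre-filled with expert transitions (generated, per the abstract, by model predictive control and simulated annealing) and one filled online by the agent, and drawing mixed minibatches from both so early DDPG updates lean on expert data. A minimal sketch, assuming the setup described; the `MultiPoolReplay` class, the `expert_ratio` blend, and the capacity are illustrative assumptions rather than the authors' implementation.

```python
import random
from collections import deque

class MultiPoolReplay:
    """Two replay pools: one pre-filled with expert transitions,
    one filled online by the agent. Minibatches mix both pools."""
    def __init__(self, capacity=100_000):
        self.expert_pool = deque(maxlen=capacity)
        self.agent_pool = deque(maxlen=capacity)

    def add_expert(self, transition):
        self.expert_pool.append(transition)

    def add_agent(self, transition):
        self.agent_pool.append(transition)

    def sample(self, batch_size, expert_ratio=0.25):
        """Draw a mixed minibatch; expert_ratio controls the blend.
        If a pool is still small, the batch may come up short."""
        n_expert = min(int(batch_size * expert_ratio), len(self.expert_pool))
        n_agent = min(batch_size - n_expert, len(self.agent_pool))
        batch = random.sample(self.expert_pool, n_expert) + \
                random.sample(self.agent_pool, n_agent)
        random.shuffle(batch)
        return batch
```

A natural refinement, consistent with the reported speed-up, is to anneal `expert_ratio` toward zero as training progresses, so the agent relies on expert transitions early and on its own experience later.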

