Learning to Control a Free-floating Space Robot using Deep Reinforcement Learning

Author(s):  
Desong Du ◽  
Qihang Zhou ◽  
Naiming Qi ◽  
Xu Wang ◽  
Yanfang Liu
2019 ◽  
Vol 49 (2) ◽  
pp. 024512 ◽  
Author(s):  
Shuai LIU ◽  
ShuNan WU ◽  
YuFei LIU ◽  
ZhiGang WU ◽  
ZiMing MAO

2008 ◽  
Vol 20 (3) ◽  
pp. 350-357 ◽  
Author(s):  
Kei Senda ◽  
Takayuki Kondo ◽  
Yoshimitsu Iwasaki ◽  
Shinji Fujii ◽  
...  

It is difficult for robots to achieve tasks that involve contact with the environment because of error between the controller model and the real environment. To solve this problem, we propose having a robot autonomously obtain proficient, robust skills in spite of model error. Numerical simulations and experiments using an autonomous space robot demonstrate the feasibility of our proposal in a real environment.


2021 ◽  
Vol 11 (13) ◽  
pp. 5783
Author(s):  
Haiping Ai ◽  
An Zhu ◽  
Jiajia Wang ◽  
Xiaoyan Yu ◽  
Li Chen

To address the problem that the joints are easily damaged by impact torque while a space robot captures a non-cooperative spacecraft on orbit, a reinforcement learning control algorithm combined with a compliant mechanism is proposed to achieve buffer compliance control. The compliant mechanism not only absorbs impact energy through the deformation of its internal spring, but also, combined with the compliance control strategy, limits the impact torque to a safe range. First, the dynamic models of the space robot and the target spacecraft before capture are obtained using the Lagrange approach and the Newton-Euler method. Then, based on the law of conservation of momentum and the kinematic and velocity constraints, the integrated dynamic model of the post-capture hybrid system is derived. Because the hybrid system is unstable, a buffer compliance control based on reinforcement learning is proposed for its stabilization. An associative search network is employed to approximate the unknown nonlinear functions, and an adaptive critic network is used to construct the reinforcement signal that tunes the associative search network. Numerical simulation shows that the proposed control scheme reduces the impact torque acting on the joints by 76.6% at the maximum and 58.7% at the minimum during the capturing operation phase. In the stabilization phase, the impact torque acting on the joints was limited within the safety threshold, which avoids overload and damage to the joint actuators.
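The actor-critic structure described in this abstract, where an associative search network (the actor) outputs a control torque and an adaptive critic network produces the reinforcement signal that tunes it, can be sketched in miniature. Everything below is an illustrative assumption, not the authors' implementation: the toy single-joint dynamics, the polynomial features, and all gains are invented for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x):
    # simple polynomial features of the 2-D joint state [angle error, velocity]
    q, qd = x
    return np.array([q, qd, q * qd, 1.0])

def plant(x, u, dt=0.05):
    # toy single-joint dynamics: a damped double integrator driven by torque u
    q, qd = x
    return np.array([q + dt * qd, qd + dt * (u - 0.5 * qd)])

w_actor = np.zeros(4)   # associative search network (actor) weights
w_critic = np.zeros(4)  # adaptive critic network weights
alpha_a, alpha_c, gamma, sigma = 0.01, 0.05, 0.95, 0.1

for episode in range(20):
    x = np.array([1.0, 0.0])  # start with a 1 rad joint-angle error
    for _ in range(100):
        phi = features(x)
        noise = sigma * rng.standard_normal()
        u = float(np.clip(w_actor @ phi + noise, -5.0, 5.0))  # stochastic actor output
        x_next = plant(x, u)
        r = -(x_next[0] ** 2 + 0.1 * x_next[1] ** 2)          # penalize state error
        # the critic's temporal-difference error serves as the reinforcement signal
        delta = r + gamma * (w_critic @ features(x_next)) - w_critic @ phi
        w_critic += alpha_c * delta * phi
        # the actor is tuned by correlating its exploration noise with that signal
        w_actor += alpha_a * delta * noise * phi
        x = x_next
```

The key design point mirrored from the abstract is the division of labor: the critic learns a value estimate and emits only a scalar reinforcement signal, while the actor searches the torque space stochastically and is rewarded when its exploration noise correlates with an improvement in that signal.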


2021 ◽  
Vol 1848 (1) ◽  
pp. 012078
Author(s):  
Binyan Liang ◽  
Zhihong Chen ◽  
Meishan Guo ◽  
Yao Wang ◽  
Yanbo Wang

2020 ◽  
Vol 98 ◽  
pp. 105657 ◽  
Author(s):  
Yun-Hua Wu ◽  
Zhi-Cheng Yu ◽  
Chao-Yong Li ◽  
Meng-Jie He ◽  
Bing Hua ◽  
...  

2014 ◽  
Vol 6 ◽  
pp. 276264 ◽  
Author(s):  
Kei Senda ◽  
Yurika Tani

This paper discusses an autonomous space robot that assembles a truss structure using reinforcement learning. It is difficult for a space robot to complete contact tasks in a real environment, for example a peg-in-hole task, because of error between the real environment and the controller model. To solve this problem, we propose an autonomous space robot that obtains proficient, robust skills and completes the task despite this error. The proposed approach develops skills by reinforcement learning that considers plant variation, that is, modeling error. Numerical simulations and experiments show that the proposed method is useful in real environments.
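One common way to realize "reinforcement learning that considers plant variation" is to randomize the uncertain plant parameter across training episodes so the learned policy stays robust to modeling error. The sketch below applies that idea to a toy 1-D peg-in-hole task; the task layout, the slip parameter, and all learning constants are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 10, 2   # 1-D peg position; actions: 0 = toward hole, 1 = away
GOAL = 0                      # hole location
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.2, 0.9, 0.1

def step(s, a, slip):
    # with probability `slip` (the randomized plant parameter, standing in for
    # modeling error) the commanded move fails and the peg stays put
    if rng.random() < slip:
        return s, -1.0
    s2 = max(0, min(n_states - 1, s + (-1 if a == 0 else 1)))
    return s2, (0.0 if s2 == GOAL else -1.0)

for episode in range(1000):
    slip = rng.uniform(0.0, 0.3)   # sample a different plant variation per episode
    s = n_states - 1
    for _ in range(50):
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a, slip)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == GOAL:
            break

policy = Q.argmax(axis=1)  # greedy policy after training
```

Because every episode draws a fresh slip probability, the tabular policy cannot overfit to one nominal plant; it must perform acceptably across the whole sampled range of model error, which is the robustness property the abstract attributes to the full method.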

