Reinforcement Learning for Robot Position/Force Control

2021 ◽  
pp. 97-118
Author(s):  
Phuong Nam Dao ◽  
Dinh Duong Pham ◽  
Xuan Khai Nguyen ◽  
Tat Chung Nguyen

2000 ◽  
Vol 14 (3) ◽  
pp. 153-168 ◽  
Author(s):  
Kazuo Kiguchi ◽  
Keigo Watanabe ◽  
Kiyotaka Izumi ◽  
Toshio Fukuda

Author(s):  
Adolfo Perrusquía ◽  
Wen Yu ◽  
Alberto Soria

Purpose – Position/force control of a robot requires the parameters of an impedance model, which generate the desired position from the contact force of the environment. When the environment is unknown, learning algorithms are needed to estimate both the desired force and the impedance parameters.
Design/methodology/approach – In this paper, the authors use reinforcement learning to learn only the desired force, then apply proportional-integral-derivative (PID) admittance control to generate the desired position. Experimental results are presented to verify the approach.
Findings – The position error is minimized without knowledge of the environment or the impedance parameters. Another advantage of this simplified position/force control is that the feedback control mechanism avoids the transformation from Cartesian space to joint space by inverse kinematics. The stability of the closed-loop system is proven.
Originality/value – The position error is minimized without knowledge of the environment or the impedance parameters, and the stability of the closed-loop system is proven.
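The admittance idea in this abstract can be sketched in a few lines: a PID law maps the force error (learned desired force minus measured contact force) to a desired position offset, with no explicit impedance model. This is a minimal illustration under assumed scalar dynamics, not the authors' implementation; the function name, gains and signals are all hypothetical.

```python
# Hypothetical sketch of PID admittance control: the force error drives
# a PID law whose output is a desired position offset along the
# constrained direction. Gains and sample time are illustrative only.
def make_pid_admittance(kp, ki, kd, dt):
    """Return a controller mapping force error to a desired position offset."""
    state = {"integral": 0.0, "prev_error": 0.0}

    def step(desired_force, measured_force):
        error = desired_force - measured_force
        state["integral"] += error * dt
        # Note: the derivative term spikes on the first call; a real
        # controller would filter it.
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step

controller = make_pid_admittance(kp=0.002, ki=0.0005, kd=0.0001, dt=0.001)
offset = controller(desired_force=10.0, measured_force=8.0)
```

In a full loop, `offset` would be added to the nominal trajectory and tracked by the inner position controller, which is what lets the scheme skip inverse kinematics.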


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Phuong Nam Dao ◽  
Duy Khanh Do ◽  
Dinh Khue Nguyen

This paper presents an adaptive reinforcement learning (ARL) based motion/force tracking control scheme, consisting of an optimal motion dynamic control law and a force control scheme, for multi-manipulator systems. Specifically, a new additional term and an appropriate state vector are employed in designing the ARL technique for time-varying dynamical systems, with an online actor/critic algorithm established by minimizing the squared Bellman error. Additionally, the force control law is designed after computing the constraint force coefficient via the Moore–Penrose pseudo-inverse matrix. The tracking effectiveness of the ARL-based optimal control in the closed-loop system is verified by theoretical analysis. Finally, simulation studies on a system of three manipulators validate the physical realization of the proposed optimal tracking control design.
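The Moore–Penrose pseudo-inverse step mentioned above can be illustrated with NumPy. The constraint Jacobian below is a made-up 1×3 example, not the paper's multi-manipulator model; the point is only that the pseudo-inverse gives the least-squares solution used to resolve the constraint-force coefficient.

```python
import numpy as np

# Illustrative only: for a full-row-rank constraint Jacobian A(q),
# the Moore-Penrose pseudo-inverse A^+ = A^T (A A^T)^{-1} satisfies
# A A^+ = I, which is what makes it usable for solving the
# constraint-force coefficient in a least-squares sense.
A = np.array([[1.0, 0.0, 1.0]])   # hypothetical 1x3 constraint Jacobian
A_pinv = np.linalg.pinv(A)        # Moore-Penrose pseudo-inverse, 3x1
residual = A @ A_pinv - np.eye(1) # should be (numerically) zero
```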


Author(s):  
Guanghui Liu ◽  
Lijin Fang ◽  
Bing Han ◽  
Hualiang Zhang

Purpose – This paper aims to propose a hybrid force/position control algorithm based on stiffness estimation of an unknown environment. A frequency-division control scheme is developed to improve the applicability and reliability of the robot in welding, polishing and assembly.
Design/methodology/approach – A stiffness estimation algorithm with time-varying forgetting factors improves the speed and accuracy of estimating the unknown environment. Sensor force control and robot position control run at different frequencies to improve system stability and communication compatibility. In the low-frequency sensor force-control loop, a Kalman state observer estimates the robot's joint information, whereas polynomial interpolation ensures the smoothness of the high-frequency robot position control.
Findings – Accurate force control, as well as system stability, is attained with this control algorithm.
Practical implications – The entire algorithm is applied to a six-degrees-of-freedom industrial robot, and experiments confirm its applicability.
Originality/value – The frequency-division control strategy guarantees control stability and improves the smoothness of the robot's movement.
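Stiffness estimation of this kind is typically a recursive least-squares (RLS) fit of a linear contact model f ≈ k·δ with a forgetting factor. The sketch below uses a constant forgetting factor as a simplification of the authors' time-varying schedule; the function, data and values are all illustrative, not taken from the paper.

```python
# Hedged sketch: scalar recursive least squares with forgetting factor
# lam, estimating environment stiffness from (deflection, force) pairs.
# A constant lam stands in for the paper's time-varying forgetting law.
def rls_stiffness(deflections, forces, lam=0.95):
    theta, p = 0.0, 1e6                      # initial estimate and covariance
    for phi, y in zip(deflections, forces):
        gain = p * phi / (lam + phi * p * phi)
        theta += gain * (y - phi * theta)    # correct by prediction error
        p = (p - gain * phi * p) / lam       # forgetting inflates covariance
    return theta

# Synthetic noiseless data from a hypothetical 500 N/m environment.
true_k = 500.0
x = [0.001 * i for i in range(1, 50)]        # deflections in metres
f = [true_k * xi for xi in x]                # contact forces in newtons
k_hat = rls_stiffness(x, f)
```

A forgetting factor below 1 keeps the covariance from collapsing, so the estimator can track a stiffness that changes as the tool moves across different materials.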

