Adaptive Reinforcement Learning Motion/Force Control of Multiple Uncertain Manipulators

Author(s): Phuong Nam Dao, Dinh Duong Pham, Xuan Khai Nguyen, Tat Chung Nguyen

Author(s): Adolfo Perrusquía, Wen Yu, Alberto Soria

Purpose: Position/force control of a robot requires the parameters of an impedance model and generates the desired position from the contact force with the environment. When the environment is unknown, learning algorithms are needed to estimate both the desired force and the parameters of the impedance model.

Design/methodology/approach: In this paper, the authors use reinforcement learning to learn only the desired force, and then use proportional-integral-derivative (PID) admittance control to generate the desired position. Experimental results are presented to verify the approach.

Findings: The position error is minimized without knowing the environment or the impedance parameters. Another advantage of this simplified position/force control is that the transformation from Cartesian space to joint space by inverse kinematics is avoided by the feedback control mechanism. The stability of the closed-loop system is proven.

Originality/value: The position error is minimized without knowing the environment or the impedance parameters. The stability of the closed-loop system is proven.
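To make the described control structure concrete, the following is a minimal sketch of a PID admittance law of the kind summarized above: the desired force is assumed to be supplied by a separate reinforcement learning module, and the controller maps the force tracking error to a desired position for an inner position loop. The class name, gains, and numerical values are illustrative assumptions, not taken from the paper.

class PIDAdmittance:
    """PID admittance law: map the force tracking error to a desired position.

    The desired force f_desired is assumed to come from a separate
    reinforcement learning module; this class only generates the desired
    position x_d that an inner position loop then tracks, so no explicit
    inverse kinematics is needed at this level.
    """

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.int_err = 0.0
        self.prev_err = 0.0

    def step(self, f_desired, f_measured, x_current):
        # Force tracking error along the constrained (contact-normal) direction;
        # the sign convention depends on how the task frame is defined.
        err = f_desired - f_measured
        self.int_err += err * self.dt
        d_err = (err - self.prev_err) / self.dt
        self.prev_err = err
        # Admittance: a force error produces a position correction.
        return x_current + self.kp * err + self.ki * self.int_err + self.kd * d_err

# Hypothetical usage: one control cycle at 100 Hz with a 5 N desired contact force.
ctrl = PIDAdmittance(kp=1e-3, ki=5e-4, kd=1e-4, dt=0.01)
x_desired = ctrl.step(f_desired=5.0, f_measured=3.2, x_current=0.10)

In the scheme described in the abstract, the reinforcement learning component would supply the learned desired force in place of the fixed value used here.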


2021, Vol. 2021, pp. 1-18
Author(s): Phuong Nam Dao, Duy Khanh Do, Dinh Khue Nguyen

This paper presents an adaptive reinforcement learning (ARL)-based motion/force tracking control scheme, consisting of an optimal motion dynamic control law and a force control scheme, for multimanipulator systems. Specifically, a new additional term and an appropriate state vector are employed in designing the ARL technique for time-varying dynamical systems, with the online actor/critic algorithm established by minimizing the squared Bellman error. Additionally, the force control law is designed after computing the constraint force coefficient with the Moore–Penrose pseudo-inverse matrix. The tracking effectiveness of the ARL-based optimal control in the closed-loop system is verified by theoretical analysis. Finally, simulation studies are conducted on a system of three manipulators to validate the physical realizability of the proposed optimal tracking control design.
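The two computational ingredients named in the abstract can be sketched as follows. This is a schematic illustration under assumed notation (constrained dynamics M(q) qdd + C(q, qd) qd + g(q) = tau + J^T lambda, a linear-in-parameters critic V(x) ~ w_c^T phi(x) and actor u(x) ~ w_a^T psi(x)); it is not the paper's exact update law, and all function and variable names are hypothetical.

import numpy as np

def constraint_force_coefficient(J, M, C, g, tau, qdot, qddot):
    """Recover the constraint force coefficient (Lagrange multiplier) lambda
    from the constrained dynamics M(q) qdd + C(q, qd) qd + g(q) = tau + J^T lambda,
    using the Moore-Penrose pseudo-inverse of J^T."""
    residual = M @ qddot + C @ qdot + g - tau
    return np.linalg.pinv(J.T) @ residual

def actor_critic_step(w_c, w_a, dphi, psi, cost, alpha_c=1e-2, alpha_a=1e-3):
    """One online actor/critic update driven by the squared Bellman error.

    dphi is the time derivative of the critic feature vector phi(x) along the
    trajectory, cost is the instantaneous cost r(x, u); a scalar control input
    is assumed so that w_a and psi are plain vectors."""
    delta = cost + w_c @ dphi            # Bellman residual: r + d/dt [w_c^T phi(x)]
    w_c = w_c - alpha_c * delta * dphi   # critic: gradient step on (1/2) delta^2
    w_a = w_a - alpha_a * delta * psi    # actor: update driven by the same residual
    return w_c, w_a, delta

In the scheme summarized above, the recovered constraint force coefficient would feed the force control law, while the actor/critic weights are tuned online to drive the squared Bellman error toward zero.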

