A Deep Reinforcement Learning based Homeostatic System for Unmanned Position Control

Author(s):  
Priyanthi M. Dassanayake ◽  
Ashiq Anjum ◽  
Warren Manning ◽  
Craig Bower
2019 ◽  
Vol 16 (2) ◽  
pp. 172988141983958 ◽  
Author(s):  
Guo Bingjing ◽  
Han Jianhai ◽  
Li Xiangpan ◽  
Yan Lin

A human–robot interactive control is proposed to govern the assistance provided by a lower limb exoskeleton robot to patients during gait rehabilitation training. The rehabilitation training robot, with two lower limb exoskeletons, is driven by a pneumatic proportional servo system and has two rotational degrees of freedom in each lower limb. An adaptive admittance model is adopted for its suitability for human–robot interaction. The adaptive law of the admittance parameters is designed with a Sigmoid function and a reinforcement learning algorithm, and individualized admittance parameters suited to each patient are obtained through reinforcement learning. Experiments in passive and active rehabilitation training modes were carried out to verify the proposed control method. The passive-mode results verify the effectiveness of the inner-loop position control strategy, which meets the gait tracking accuracy demands of rehabilitation training. The active-mode results demonstrate that the interactive controller provides personal adaptation and active compliance in robot assistance for patients. The combined effects of the flexibility of the pneumatic actuators and the compliance provided by the controller contribute to training comfort, safety, and therapeutic outcome in gait rehabilitation.
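The adaptive admittance idea described above can be sketched as follows. This is a minimal illustration, not the paper's design: the second-order admittance form, the parameter names (`m`, `k`, `b_min`, `b_max`, `alpha`), and the choice to adapt the damping with a sigmoid of the interaction force magnitude are all assumptions made for the sketch.

```python
import math

def sigmoid(x):
    """Standard logistic function, used here to bound the adapted parameter."""
    return 1.0 / (1.0 + math.exp(-x))

def adaptive_admittance_step(x, v, f_int, dt,
                             m=1.0, k=50.0,
                             b_min=2.0, b_max=20.0, alpha=1.0):
    """One semi-implicit Euler step of an admittance model
    m*a + b*v + k*x = f_int, where the damping b is adapted between
    b_min and b_max via a sigmoid of the interaction force magnitude
    (an illustrative adaptive law; the paper tunes such parameters
    with reinforcement learning instead of a fixed rule)."""
    b = b_min + (b_max - b_min) * sigmoid(alpha * abs(f_int))
    a = (f_int - b * v - k * x) / m
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new, b
```

With zero interaction force the position correction decays to zero, so the robot tracks the planned gait; a large interaction force raises the damping smoothly, yielding compliant behavior without abrupt parameter jumps.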


2007 ◽  
Vol 129 (5) ◽  
pp. 729-741 ◽  
Author(s):  
Mark Karpenko ◽  
Nariman Sepehri ◽  
John Anderson

In this paper, reinforcement learning is applied to coordinate, in a decentralized fashion, the motions of a pair of hydraulic actuators whose task is to firmly hold and move an object along a specified trajectory under conventional position control. The learning goal is to reduce the interaction forces acting on the object that arise due to inevitable positioning errors resulting from the imperfect closed-loop actuator dynamics. Each actuator is therefore outfitted with a reinforcement learning neural network that modifies a centrally planned formation constrained position trajectory in response to the locally measured interaction force. It is shown that the actuators, which form a multiagent learning system, can learn decentralized control strategies that reduce the object interaction forces and thus greatly improve their coordination on the manipulation task. However, the problem of credit assignment, a common difficulty in multiagent learning systems, prevents the actuators from learning control strategies where each actuator contributes equally to reducing the interaction force. This problem is resolved in this paper via the periodic communication of limited local state information between the reinforcement learning actuators. Using both simulations and experiments, this paper examines some of the issues pertaining to learning in dynamic multiagent environments and establishes reinforcement learning as a potential technique for coordinating several nonlinear hydraulic manipulators performing a common task.
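As a toy illustration of the decentralized scheme above, the sketch below gives each actuator a learner that perturbs a local offset on the centrally planned setpoint to reduce the locally measured interaction force. The sign-flipping hill climber is a deliberately simple stand-in for the paper's reinforcement learning neural networks, and the spring-like interaction force model is an assumption of the sketch.

```python
class TrajectoryModifier:
    """Per-actuator learner: adds a small offset to the planned position
    setpoint and adapts it from the locally measured interaction force.
    If the last adjustment increased the force magnitude, reverse
    direction (a crude local search, not the paper's RL network)."""

    def __init__(self, step=0.001):
        self.offset = 0.0
        self.step = step
        self.prev_cost = None
        self.direction = 1.0

    def setpoint(self, planned):
        """Formation-constrained setpoint = central plan + learned offset."""
        return planned + self.offset

    def update(self, interaction_force):
        cost = abs(interaction_force)
        if self.prev_cost is not None and cost > self.prev_cost:
            self.direction = -self.direction  # last move made things worse
        self.offset += self.direction * self.step
        self.prev_cost = cost
```

In a two-actuator simulation where each actuator has a fixed positioning bias and the interaction force is proportional to the mismatch between the two positions, both learners drive the force magnitude toward zero without any central coordinator, which is the flavor of the decentralized result reported in the paper.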


2020 ◽  
Vol 53 (2) ◽  
pp. 17393-17398
Author(s):  
G. Farias ◽  
G. Garcia ◽  
G. Montenegro ◽  
E. Fabregas ◽  
S. Dormido-Canto ◽  
...  

Author(s):  
Weijun Wang ◽  
Huafeng Wu ◽  
Xianglun Kong ◽  
Yuanyuan Zhang ◽  
Yang Ye ◽  
...  

In this paper, a novel dynamic position control (PC) approach for mobile nodes (MNs) in ocean sensor networks (OSNs) is proposed, which directly uses a neural network to represent the PC strategy. Position estimation no longer needs to be carried out in the proposed scheme, so localization error is eliminated. In addition, reinforcement learning is used to train the PC strategy, so that the MN can learn a more accurate and faster-responding control strategy. Moreover, to verify its applicability to real-world environments, we carried out a field deployment in an OSN consisting of an MN designed by us and several fixed nodes. The experimental results demonstrate the effectiveness of the proposed control scheme, improving PC accuracy by more than 53% and response speed by more than 15%.
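The key structural point of the scheme, mapping raw measurements straight to a control command with no intermediate position fix, can be sketched as a small policy network. Everything here is an assumption for illustration: the input (four ranges to fixed anchor nodes), the output (a bounded surge/yaw command), the layer sizes, and the random placeholder weights, which in the proposed scheme would instead be trained by reinforcement learning.

```python
import math
import random

random.seed(0)

HIDDEN = 8
# Random placeholder weights; in the proposed scheme these would be
# trained with reinforcement learning against a station-keeping reward.
W1 = [[random.gauss(0.0, 0.1) for _ in range(HIDDEN)] for _ in range(4)]
W2 = [[random.gauss(0.0, 0.1) for _ in range(2)] for _ in range(HIDDEN)]

def pc_policy(ranges):
    """Map 4 raw anchor-range measurements directly to a bounded
    (surge, yaw) command. No position estimate is ever computed, so
    no localization error can enter the control loop."""
    h = [math.tanh(sum(r * W1[i][j] for i, r in enumerate(ranges)))
         for j in range(HIDDEN)]
    return [math.tanh(sum(hj * W2[j][k] for j, hj in enumerate(h)))
            for k in range(2)]
```

The tanh output layer keeps both command channels in [-1, 1], a common way to respect actuator saturation when the policy drives thrusters directly.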


Information ◽  
2020 ◽  
Vol 11 (6) ◽  
pp. 310
Author(s):  
Qiuxuan Wu ◽  
Yueqin Gu ◽  
Yancheng Li ◽  
Botao Zhang ◽  
Sergey A. Chepinskiy ◽  
...  

The cable-driven soft arm is mostly made of soft material and is difficult to control because of these material characteristics, so traditional robot arm modeling and control methods cannot be applied directly. In this paper, we combine a data-driven modeling method with reinforcement learning control to realize position control of the soft robotic arm, using a control strategy based on deep Q-learning. To address the slow convergence and unstable behavior that arise during simulation-to-reality transfer when deep reinforcement learning is applied to a real robot control task, a control strategy learning method is designed in which a simulation environment for strategy training is built from experimental data, and the learned strategy is then applied in the real environment. Finally, experiments show that the method can effectively control the soft robot arm and is more robust than the traditional method.
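The train-in-simulation-then-deploy loop can be sketched as follows. The sketch makes several simplifying assumptions: the arm is reduced to one discretized degree of freedom, the data-driven model is a trivial stand-in function, and a Q-table replaces the paper's deep Q-network (the update rule is the same Q-learning rule a DQN approximates).

```python
import random

random.seed(1)

N_STATES, TARGET = 21, 15          # discretized tip positions, goal bin
ACTIONS = (-1, 0, 1)               # release / hold / pull the cable

def surrogate_model(s, a):
    """Stand-in for the data-driven simulation environment: one cable
    increment moves the tip by one bin. The real surrogate would be
    fitted to experimental data from the physical soft arm."""
    return max(0, min(N_STATES - 1, s + a))

# Train the Q function entirely inside the surrogate environment.
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
for _episode in range(500):
    s = random.randrange(N_STATES)
    for _step in range(50):
        if random.random() < 0.2:                      # epsilon-greedy
            a_idx = random.randrange(len(ACTIONS))
        else:
            a_idx = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        s2 = surrogate_model(s, ACTIONS[a_idx])
        r = 1.0 if s2 == TARGET else -abs(s2 - TARGET) / N_STATES
        Q[s][a_idx] += 0.1 * (r + 0.9 * max(Q[s2]) - Q[s][a_idx])
        s = s2

def act(s):
    """Greedy policy from the learned table; this is what would be
    deployed on the real arm after training in simulation."""
    return ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[s][i])]
```

Pretraining against the data-driven surrogate gives the deployed policy a warm start, which is what mitigates the slow convergence and instability of learning directly on the physical robot.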

