Reinforcement Learning and Control of a Lower Extremity Exoskeleton for Squat Assistance

2021 ◽  
Vol 8 ◽  
Author(s):  
Shuzhen Luo ◽  
Ghaith Androwis ◽  
Sergei Adamovich ◽  
Hao Su ◽  
Erick Nunez ◽  
...  

A significant challenge in controlling a robotic lower extremity rehabilitation exoskeleton is to ensure stability and robustness during programmed tasks or motions, which is crucial for the safety of the mobility-impaired user. Because users present various levels of disability, the human-exoskeleton interaction forces and external perturbations are unpredictable, can vary substantially, and may cause conventional motion controllers to behave unreliably or even cause the robot to fall. In this work, we propose a new reinforcement learning-based motion controller for a lower extremity rehabilitation exoskeleton, aiming to perform collaborative squatting exercises with efficiency, stability, and strong robustness. Unlike most existing rehabilitation exoskeletons, ours has ankle actuation in both the sagittal and frontal planes and is equipped with multiple foot force sensors to estimate the center of pressure (CoP), an important indicator of system balance. The proposed motion controller exploits the CoP information by incorporating it into the state input of the control policy network and adding it to the reward during learning to maintain a well-balanced system state throughout the motion. In addition, we use dynamics randomization and adversarial force perturbations, including large human interaction forces, during training to further improve control robustness. To evaluate the effectiveness of the learned controller, we conduct numerical experiments with different settings and demonstrate its ability to control the exoskeleton to repetitively perform well-balanced, robust squatting motions under strong perturbations and realistic human interaction forces.
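
The CoP-shaped reward is the most transferable idea here, so a brief sketch may help. The following is a minimal, hypothetical reward term in the spirit of the description above; `cop_xy`, `support_center`, the weights, and the tracking term are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def balance_reward(cop_xy, support_center, tracking_error,
                   w_track=1.0, w_cop=0.5):
    """Illustrative reward combining squat-trajectory tracking with a
    center-of-pressure (CoP) balance term. Weights are placeholders."""
    # Penalize CoP drift away from the center of the support polygon.
    cop_penalty = np.linalg.norm(np.asarray(cop_xy) - np.asarray(support_center))
    # Penalize deviation from the reference squat trajectory.
    track_penalty = np.linalg.norm(np.asarray(tracking_error))
    return -w_track * track_penalty - w_cop * cop_penalty
```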

2021 ◽  
Author(s):  
Shuzhen Luo ◽  
Ghaith Androwis ◽  
Sergei Adamovich ◽  
Erick Nunez ◽  
Hao Su ◽  
...  

Background: Few studies have systematically investigated robust controllers for lower limb rehabilitation exoskeletons (LLREs) that can safely and effectively assist users with a variety of neuromuscular disorders to walk with full autonomy. One of the key challenges in developing such a robust controller is handling the different degrees of uncertain human-exoskeleton interaction forces from the patients. Consequently, conventional walking controllers are either patient-condition specific or involve tuning of many control parameters; as a result, they can behave unreliably and even fail to maintain balance.

Methods: We present a novel and robust controller for an LLRE based on a decoupled deep reinforcement learning framework with three independent networks, which aims to provide reliable walking assistance against varied and uncertain human-exoskeleton interaction forces. The exoskeleton controller is driven by a neural network control policy that acts on a stream of the LLRE's proprioceptive signals, including joint kinematic states, and predicts real-time position control targets for the actuated joints. To handle uncertain human interaction forces, the control policy is deliberately trained with an integrated human musculoskeletal model and realistic human-exoskeleton interaction forces. Two other neural networks are connected to the control policy network to predict the interaction forces and muscle coordination. To further increase the robustness of the control policy, we employ domain randomization during training that includes not only randomization of the exoskeleton's dynamics properties but, more importantly, randomization of human muscle strength to simulate the variability of the patient's disability. Through this decoupled deep reinforcement learning framework, the trained LLRE controller is able to provide reliable walking assistance to humans with different degrees of neuromuscular disorders.

Results and Conclusion: A universal, RL-based walking controller is trained and virtually tested on an LLRE system to verify its effectiveness and robustness in assisting users with different disabilities, such as passive muscles (quadriplegia), muscle weakness, or hemiplegic conditions. An ablation study demonstrates strong robustness of the control policy under large ranges of exoskeleton dynamic properties and various human-exoskeleton interaction forces. The decoupled network structure allows us to isolate the LLRE control policy network for testing and sim-to-real transfer, since it uses only the LLRE's proprioceptive information (joint sensory state) as input. Furthermore, the controller is shown to handle different patient conditions without the need for patient-specific control parameter tuning.
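
To make the muscle-strength randomization concrete, here is a minimal sketch of a per-episode domain-randomization draw. The parameter names and ranges are assumptions for illustration; the abstract does not give the actual values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_randomized_env():
    """Hypothetical per-episode randomization: exoskeleton dynamics
    properties plus human muscle strength, mimicking variable
    degrees of patient disability."""
    return {
        # Scale link masses and joint friction of the exoskeleton model.
        "mass_scale": rng.uniform(0.8, 1.2),
        "friction_scale": rng.uniform(0.5, 1.5),
        # Scale maximum isometric muscle forces; values near 0
        # approximate fully passive muscles (quadriplegia).
        "muscle_strength_scale": rng.uniform(0.0, 1.0),
    }

# Each training episode runs with a freshly sampled configuration.
env_params = sample_randomized_env()
```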


2021 ◽  
Vol 11 (18) ◽  
pp. 8419
Author(s):  
Jiang Zhao ◽  
Jiaming Sun ◽  
Zhihao Cai ◽  
Longhong Wang ◽  
Yingxun Wang

To achieve perception-based autonomous control of UAVs, state-of-the-art work favors schemes with onboard sensing and computing, which often consist of several separate modules, each with its own complicated algorithm. Most such methods depend on handcrafted designs and prior models with little capacity for adaptation and generalization. Inspired by research on deep reinforcement learning, this paper proposes a new end-to-end autonomous control method that collapses the separate modules of the traditional control pipeline into a single neural network. An image-based reinforcement learning framework is established, built on the design of the network architecture and the reward function. Training is performed with model-free algorithms developed for the specific mission, and the control policy network maps the input image directly to continuous actuator control commands. A simulation environment for the UAV landing scenario was built, and results under typical cases, including both small and large initial lateral or heading-angle offsets, show that the proposed end-to-end method is feasible for perception-based autonomous control.
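
As a rough illustration of what "map the input image directly to continuous actuator control commands" can look like, here is a small PyTorch policy network. The architecture (an Atari-style CNN with a tanh-bounded head) and the action dimension are assumptions; the paper's actual network is not given in this abstract.

```python
import torch
import torch.nn as nn

class ImagePolicy(nn.Module):
    """Illustrative end-to-end policy: one grayscale camera frame in,
    bounded continuous actuator commands out."""
    def __init__(self, n_actions=4):
        super().__init__()
        self.encoder = nn.Sequential(              # input: 1 x 84 x 84
            nn.Conv2d(1, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 256), nn.ReLU(),
            nn.Linear(256, n_actions), nn.Tanh(),  # commands in [-1, 1]
        )

    def forward(self, img):
        return self.head(self.encoder(img))

action = ImagePolicy()(torch.zeros(1, 1, 84, 84))  # shape: (1, 4)
```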


Biomimetics ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. 13
Author(s):  
Adam Bignold ◽  
Francisco Cruz ◽  
Richard Dazeley ◽  
Peter Vamplew ◽  
Cameron Foale

Interactive reinforcement learning methods utilise an external information source to evaluate decisions and accelerate learning. Previous work has shown that human advice can significantly improve a learning agent's performance. When evaluating reinforcement learning algorithms, it is common to repeat experiments as parameters are altered or to gain a sufficient sample size. Requiring human interaction every time an experiment is restarted is therefore undesirable, particularly when the expense of doing so can be considerable. Additionally, reusing the same people introduces bias, as they will learn the behaviour of the agent and the dynamics of the environment. This paper presents a methodology for evaluating interactive reinforcement learning agents by employing simulated users. Simulated users allow human knowledge, bias, and interaction to be modelled. Their use allows the development and testing of reinforcement learning agents and can provide indicative results of agent performance under defined human constraints. While simulated users are no replacement for actual humans, they do offer an affordable and fast alternative for evaluating assisted agents. We introduce a method for performing a preliminary evaluation with simulated users to show how performance changes depending on the type of user assisting the agent. Moreover, we describe how human interaction may be simulated, and present an experiment illustrating the applicability of simulated users in evaluating agent performance when assisted by different types of trainers. Experimental results show that this methodology allows for greater insight into the performance of interactive reinforcement learning agents when advised by different users. The use of simulated users with varying characteristics allows for evaluation of the impact of those characteristics on the behaviour of the learning agent.
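
A simulated user of the kind described can be captured in a few lines. The sketch below assumes two of the characteristics the abstract alludes to, availability (how often advice is given) and accuracy (how often it is correct); the class name and interface are hypothetical, not the paper's.

```python
import random

class SimulatedUser:
    """Hypothetical simulated trainer for interactive RL experiments."""
    def __init__(self, availability=0.3, accuracy=0.9, oracle=None):
        self.availability = availability  # probability of giving advice
        self.accuracy = accuracy          # probability advice is correct
        self.oracle = oracle              # maps state -> known-good action

    def advise(self, state, action_space):
        if random.random() > self.availability:
            return None                        # user stays silent
        if random.random() < self.accuracy:
            return self.oracle(state)          # correct advice
        return random.choice(action_space)     # mistaken advice
```

Sweeping `availability` and `accuracy` across a grid then yields repeatable "different types of trainers" without re-recruiting human participants.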


2021 ◽  
pp. 2150011
Author(s):  
Wei Dong ◽  
Jianan Wang ◽  
Chunyan Wang ◽  
Zhenqiang Qi ◽  
Zhengtao Ding

In this paper, the optimal consensus control problem is investigated for heterogeneous linear multi-agent systems (MASs) under a spanning-tree condition, based on game theory and reinforcement learning. First, the graphical minimax game algebraic Riccati equation (ARE) is derived by converting the consensus problem into a zero-sum game between each agent and its neighbors. The asymptotic stability and minimax validity of the closed-loop systems are proved theoretically. Then, a data-driven off-policy reinforcement learning algorithm is proposed to learn the optimal control policy online without knowledge of the system dynamics. A rank condition is established to guarantee convergence of the proposed algorithm to the unique solution of the ARE. Finally, the effectiveness of the proposed method is demonstrated through a numerical simulation.
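
For orientation, the single-system zero-sum LQ game ARE has the standard form below; the paper's graphical variant additionally couples each agent to its neighbors through the communication graph. The symbols follow the usual game-theoretic convention and are not taken from the paper.

```latex
A^{\top}P + PA + Q
  - P\left(B_u R_u^{-1} B_u^{\top} - B_w R_w^{-1} B_w^{\top}\right)P = 0,
\qquad
u^{*} = -R_u^{-1} B_u^{\top} P x, \quad
w^{*} = R_w^{-1} B_w^{\top} P x
```

Here \(u\) is the minimizing control, \(w\) the maximizing (adversarial) input, and the off-policy RL algorithm estimates \(P\) from data rather than solving this equation with known \(A\), \(B_u\), \(B_w\).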


2011 ◽  
Vol 46 (6) ◽  
pp. 607-614 ◽  
Author(s):  
Kelly L. McMullen ◽  
Nicole L. Cosby ◽  
Jay Hertel ◽  
Christopher D. Ingersoll ◽  
Joseph M. Hart

Context: Fatigue of the gluteus medius (GMed) muscle might be associated with decreases in postural control due to insufficient pelvic stabilization. Men and women might have different muscular recruitment patterns in response to GMed fatigue. Objective: To compare postural control and quality of movement between men and women after a fatiguing hip-abduction exercise. Design: Descriptive laboratory study. Setting: Controlled laboratory. Patients or Other Participants: Eighteen men (age = 22 ± 3.64 years, height = 183.37 ± 8.30 cm, mass = 87.02 ± 12.53 kg) and 18 women (age = 22 ± 3.14 years, height = 167.65 ± 5.80 cm, mass = 66.64 ± 10.49 kg) with no history of low back or lower extremity injury participated in our study. Intervention(s): Participants followed a fatiguing protocol that involved a side-lying hip-abduction exercise performed until a 15% shift in the electromyographic median frequency of the GMed was reached. Main Outcome Measure(s): Baseline and postfatigue measurements of single-leg static balance, dynamic balance, and quality of movement, assessed with center-of-pressure measurements, the Star Excursion Balance Test, and the lateral step-down test, respectively, were recorded for the dominant lower extremity (as identified by the participant). Results: We observed no differences in balance deficits between sexes (P > .05); however, we found main effects for time in all of our postfatigue outcome measures (P ≤ .05). Conclusions: Our findings suggest that postural control and quality of movement were negatively affected after a GMed-fatiguing exercise. At similar levels of local muscle fatigue, men and women had similar measurements of postural control.
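
The stopping rule, a 15% shift in EMG median frequency, is straightforward to compute. Below is a minimal sketch using SciPy; the sampling rate and window length are placeholder values, not the study's acquisition settings.

```python
import numpy as np
from scipy.signal import welch

def median_frequency(emg, fs=1000):
    """Frequency that splits the EMG power spectrum in half, a
    standard local-fatigue marker (it drops as fatigue grows)."""
    f, pxx = welch(emg, fs=fs, nperseg=1024)
    cumulative = np.cumsum(pxx)
    return f[np.searchsorted(cumulative, cumulative[-1] / 2)]

def fatigue_criterion_met(baseline_mdf, current_mdf, threshold=0.15):
    """Stop the exercise once median frequency has shifted downward
    by 15% relative to baseline, as in the study's protocol."""
    return (baseline_mdf - current_mdf) / baseline_mdf >= threshold
```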


2020 ◽  
Vol 29 (2) ◽  
pp. 174-178
Author(s):  
Kelly M. Meiners ◽  
Janice K. Loudon

Purpose/Background: Various methods are available for assessing static and dynamic postural stability. The primary purpose of this study was to investigate the relationship between dynamic postural stability, as measured by the Star Excursion Balance Test (SEBT), and static postural sway, as measured by the TechnoBody™ Pro-Kin, in female soccer players. A secondary purpose was to determine side-to-side symmetry in this cohort. Methods: A total of 18 female soccer players completed testing on the SEBT and the TechnoBody™ Pro-Kin balance device. Outcome measures were the anterior, posterior medial, and posterior lateral reaches from the SEBT and the center of pressure in the x- and y-axes, as well as the SD of movement in the forward/backward and medial/lateral directions, from the force plate for the left and right legs. Bivariate correlations were determined between the 8 measures. In addition, paired Wilcoxon signed-rank tests were performed to determine the similarity between limb scores. Results: All measures on both the SEBT and the postural sway assessment were significantly correlated when comparing the dominant with the nondominant lower extremity, with the exception of the SD of movement in the x- and y-axes. When correlating SEBT results with the postural sway assessment, a significant correlation was found between the SEBT right lower-extremity posterior lateral reach (r = .567, P < .05) and summed SEBT (r = .486, P < .05) and the center of pressure in the y-axis. A significant correlation was also found for the left lower extremity between the SD of forward/backward movement and the SEBT posterior medial reach (r = −.511, P < .05). Conclusions: Dynamic and static postural tests provide different information for the overall assessment of balance in female soccer players. The relationship between variables differed based on the subject's lower-extremity dominance.
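
The statistics described (bivariate correlations and paired Wilcoxon signed-rank tests) can be reproduced with SciPy on any comparable data set. The sketch below uses fabricated placeholder numbers purely to show the calls; it does not reproduce the study's data.

```python
import numpy as np
from scipy.stats import pearsonr, wilcoxon

rng = np.random.default_rng(1)
sebt_reach_r = rng.normal(85, 6, 18)   # hypothetical right-leg reaches
sebt_reach_l = rng.normal(85, 6, 18)   # hypothetical left-leg reaches
cop_y = rng.normal(2.5, 0.8, 18)       # hypothetical CoP excursions

r, p = pearsonr(sebt_reach_r, cop_y)                  # bivariate correlation
w_stat, p_sym = wilcoxon(sebt_reach_r, sebt_reach_l)  # limb symmetry
print(f"r = {r:.3f} (P = {p:.3f}); Wilcoxon P = {p_sym:.3f}")
```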


Author(s):  
Peng Zhang ◽  
Jianye Hao ◽  
Weixun Wang ◽  
Hongyao Tang ◽  
Yi Ma ◽  
...  

Reinforcement learning agents usually learn from scratch, which requires a large number of interactions with the environment. This is quite different from the learning process of humans: when faced with a new task, humans naturally draw on common sense and prior knowledge to derive an initial policy that guides the subsequent learning process. Although this prior knowledge may not be fully applicable to the new task, learning is significantly sped up, since the initial policy ensures a quick start and the intermediate guidance helps avoid unnecessary exploration. Taking this inspiration, we propose the knowledge-guided policy network (KoGuN), a novel framework that combines suboptimal human prior knowledge with reinforcement learning. Our framework consists of a fuzzy rule controller that represents human knowledge and a refine module that fine-tunes the suboptimal prior knowledge. The proposed framework is end-to-end and can be combined with existing policy-based reinforcement learning algorithms. We conduct experiments on several control tasks. The empirical results show that our approach, which combines suboptimal human knowledge and RL, achieves significant improvement in the learning efficiency of flat RL algorithms, even with very low-performance human prior knowledge.
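
The combination of a fuzzy rule prior with a trainable policy can be sketched compactly. The rule base, the task, and the way the refine module is folded in below are simplified assumptions, not KoGuN's exact architecture.

```python
import numpy as np

def fuzzy_prior_logits(state):
    """Stand-in for the fuzzy rule controller on a CartPole-like
    task: prefer pushing toward the side the pole is falling."""
    angle, ang_vel = state
    lean = np.tanh(2.0 * angle + 0.5 * ang_vel)  # fuzzy "falling right"
    return np.array([-lean, lean])               # logits for {left, right}

def guided_policy(state, learned_logits, refine_scale=0.5):
    """The learned network contributes an additive correction to the
    (suboptimal) prior; softmax yields the action distribution, so
    the agent starts near the prior and refines it with experience."""
    logits = fuzzy_prior_logits(state) + refine_scale * learned_logits
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()
```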


2021 ◽  
Vol 33 (1) ◽  
pp. 129-156
Author(s):  
Masami Iwamoto ◽  
Daichi Kato

This letter proposes a new idea to improve learning efficiency in reinforcement learning (RL) with the actor-critic method used as a muscle controller for posture stabilization of the human arm. Actor-critic RL (ACRL) is used in simulations to realize posture control in humans or robots via muscle tension control. However, acquiring a good muscle control policy for desirable postures incurs very high computational costs. For efficient ACRL, we focused on embodiment, which is thought to enable efficient control in artificial intelligence and robotics research. According to the neurophysiology of motion control obtained from experimental studies of animals and humans, the pedunculopontine tegmental nucleus (PPTn) induces muscle tone suppression, and the midbrain locomotor region (MLR) induces muscle tone promotion. PPTn and MLR modulate the activation levels of mutually antagonizing muscles, such as flexors and extensors, in a process through which control signals are relayed from the substantia nigra reticulata to the brain stem. We therefore hypothesized that the PPTn and MLR control muscle tone, that is, the maximum activation levels of mutually antagonizing muscles, using a different sigmoidal function for each muscle. We then introduced antagonism function models (AFMs) of PPTn and MLR for individual muscles, incorporating this hypothesis into the process that determines the activation level of each muscle from the output of the actor in ACRL. ACRL with AFMs representing the embodiment of muscle tone successfully achieved posture stabilization in five joint motions of the right arm of a human adult male under gravity, at predetermined target angles, at an earlier stage of learning than methods without AFMs. These results suggest that introducing the embodiment of muscle tone can enhance learning efficiency in posture stabilization for humans or humanoid robots.
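
The hypothesized role of PPTn/MLR, sigmoidal caps on the maximum activation of antagonist muscles, can be sketched as follows. The gains, thresholds, and two-muscle setup are illustrative guesses, not the letter's parameterization.

```python
import numpy as np

def afm_cap(drive, gain=4.0, threshold=0.5):
    """Antagonism function model (sketch): a sigmoid of a PPTn/MLR-like
    drive signal sets the ceiling on a muscle's activation level."""
    return 1.0 / (1.0 + np.exp(-gain * (drive - threshold)))

def muscle_activations(actor_output, drive):
    """Rescale the actor's raw outputs (one per muscle) by per-muscle
    AFM caps, so a single drive signal promotes flexor tone while
    suppressing extensor tone (or vice versa)."""
    flexor_cap = afm_cap(drive)                      # rises with drive
    extensor_cap = afm_cap(-drive, threshold=-0.5)   # falls with drive
    caps = np.array([flexor_cap, extensor_cap])
    return np.clip(np.asarray(actor_output), 0.0, 1.0) * caps
```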

