Deep reinforcement learning for modeling human locomotion control in neuromechanical simulation


2020 ◽  
Author(s):  
Seungmoon Song ◽  
Łukasz Kidziński ◽  
Xue Bin Peng ◽  
Carmichael Ong ◽  
Jennifer Hicks ◽  
...  

Abstract
Modeling human motor control and predicting how humans will move in novel environments is a grand scientific challenge. Despite advances in neuroscience techniques, it is still difficult to measure and interpret the activity of the millions of neurons involved in motor control. Thus, researchers in the fields of biomechanics and motor control have proposed and evaluated motor control models via neuromechanical simulations, which produce physically correct motions of a musculoskeletal model. Typically, researchers have developed control models that encode physiologically plausible motor control hypotheses and compared the resulting simulation behaviors to measurable human motion data. While such plausible control models were able to simulate and explain many basic locomotion behaviors (e.g. walking, running, and climbing stairs), modeling higher-layer control (e.g. processing environment cues, planning long-term motion strategies, and coordinating basic motor skills to navigate in dynamic and complex environments) remains a challenge. Recent advances in deep reinforcement learning lay a foundation for modeling these complex control processes and controlling a diverse repertoire of human movement; however, reinforcement learning has rarely been applied in neuromechanical simulation to model human control. In this paper, we review the current state of neuromechanical simulations, along with the fundamentals of reinforcement learning, as it applies to human locomotion. We also present a scientific competition and accompanying software platform, which we have organized to accelerate the use of reinforcement learning in neuromechanical simulations. This “Learn to Move” competition was an official competition at the NeurIPS conference from 2017 to 2019 and attracted over 1300 teams from around the world. Top teams adapted state-of-the-art deep reinforcement learning techniques to produce complex motions, such as quick turning and walk-to-stand transitions, that have not been demonstrated before in neuromechanical simulations without utilizing reference motion data. We close with a discussion of future opportunities at the intersection of human movement simulation and reinforcement learning and our plans to extend the Learn to Move competition to further facilitate interdisciplinary collaboration in modeling human motor control for biomechanics and rehabilitation research.
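To make the reinforcement learning setting concrete, the sketch below shows the standard agent-environment loop against a musculoskeletal simulation. It is a minimal Python illustration assuming a gym-style interface like the one exposed by the competition's osim-rl platform; the placeholder policy, the 22-muscle action dimension, and the reward details are assumptions for illustration, not specifics taken from the paper.

```python
# Minimal sketch of the agent-environment loop used in deep RL for
# neuromechanical simulation. Assumes a gym-style interface similar
# to the competition's osim-rl platform; the 22-muscle action
# dimension and reward details are illustrative assumptions.
import numpy as np


class RandomMusclePolicy:
    """Placeholder policy: samples random muscle excitations in [0, 1].

    In practice this would be a trained neural network mapping the
    observation (body state, muscle states, target velocity) to an
    excitation for each muscle actuator.
    """

    def __init__(self, n_muscles=22, seed=0):
        self.n_muscles = n_muscles
        self.rng = np.random.default_rng(seed)

    def act(self, observation):
        return self.rng.uniform(0.0, 1.0, size=self.n_muscles)


def run_episode(env, policy, max_steps=1000):
    """Roll out one episode and return the accumulated reward."""
    observation = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy.act(observation)
        # The environment integrates the musculoskeletal dynamics one
        # step forward and scores the motion (e.g., tracking a target
        # velocity while penalizing effort).
        observation, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```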


2019 ◽  
Author(s):  
N. Boulanger ◽  
F. Buisseret ◽  
V. Dehouck ◽  
F. Dierick ◽  
O. White

Abstract
Natural human movements are stereotyped. They minimise cost functions that include energy, a natural candidate from mechanical and physiological points of view. In time-changing environments, however, motor strategies are modified since energy is no longer conserved. Adiabatic invariants are relevant observables in such cases, although they have not been investigated in human motor control so far. We fill this gap and show that the theory of adiabatic invariants explains how humans move when gravity varies.
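As a worked example of the idea, consider the textbook form of an adiabatic invariant; the harmonic model below is an illustrative sketch under simplifying assumptions, not the authors' exact derivation.

```latex
% Sketch: the adiabatic invariant of a 1-D oscillator, assuming a
% harmonic model of rhythmic limb motion (an illustration, not the
% authors' exact derivation).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For a slowly time-varying Hamiltonian $H(q,p;t)$ with closed orbits,
the action
\[
  I = \frac{1}{2\pi}\oint p \,\mathrm{d}q
\]
is an adiabatic invariant. For a harmonic oscillator of energy $E$ and
angular frequency $\omega$, this reduces to $I = E/\omega$. If gravity
$g$ varies slowly and the motion is pendulum-like, so that
$\omega \propto \sqrt{g}$, then the ratio $E/\sqrt{g}$ is approximately
conserved, which predicts how movement amplitude rescales as $g$
changes.
\end{document}
```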


Author(s):  
Hideyuki Kimpara ◽  
Kenechukwu C. Mbanisi ◽  
Zhi Li ◽  
Karen L. Troy ◽  
Danil Prokhorov ◽  
...  

Objective: To investigate the effects of human force anticipation, we conducted an experimental load-pushing task with diverse combinations of informed and actual loading weights.
Background: Human motor control tends to rely upon the anticipated workload to plan the force to exert, particularly in fast tasks such as pushing objects in less than 1 s. The motion and force responses in such tasks may depend on the anticipated resistive forces, based on a learning process.
Method: Pushing performances of 135 trials were obtained from 9 participants. We varied the workload by changing the masses from 0.2 to 5 kg. To influence anticipation, participants were shown a display of the workload that was either correct or incorrect. We collected the motion and force data, as well as electromyography (EMG) signals from the actively used muscle groups.
Results: Overanticipation produced overshoot performances in more than 80% of trials. Lighter actual workloads were also associated with overshoot. Pushing behaviors with heavier workloads could be classified into feedforward-dominant and feedback-dominant responses based on the timing of force, motion, and EMG responses. In addition, we found that the preceding trial condition affected the performance of the subsequent trial.
Conclusion: Our results show that the first peak of the pushing force increases consistently with the anticipated workload.
Application: This study improves our understanding of human motion control and can be applied to situations such as simulating interactions between drivers and assistive systems in intelligent vehicles.
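The feedforward/feedback distinction above rests on response timing. The following Python sketch illustrates one plausible timing-based rule; the onset-detection method and the 100 ms latency threshold are hypothetical choices for illustration, not the authors' published criteria.

```python
# Hypothetical timing-based classification of pushing trials into
# feedforward-dominant vs. feedback-dominant responses. The onset
# rule and 100 ms threshold are illustrative assumptions, not the
# authors' published criteria.
import numpy as np


def detect_onset(signal, times, threshold_ratio=0.05):
    """Return the first time the signal exceeds a fraction of its peak."""
    threshold = threshold_ratio * np.max(np.abs(signal))
    above = np.nonzero(np.abs(signal) > threshold)[0]
    return times[above[0]] if above.size else None


def classify_trial(emg, force, times, latency_threshold=0.1):
    """Label a trial by the lag between EMG onset and force onset.

    If muscle activity leads the measured force by more than the
    latency threshold (here 100 ms), the response is treated as
    planned in advance (feedforward-dominant); otherwise it is
    treated as driven by sensed resistance (feedback-dominant).
    """
    emg_onset = detect_onset(emg, times)
    force_onset = detect_onset(force, times)
    if emg_onset is None or force_onset is None:
        return "undetermined"
    if force_onset - emg_onset > latency_threshold:
        return "feedforward-dominant"
    return "feedback-dominant"
```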


2021 ◽  
Vol 18 (2) ◽  
pp. 172988142199858
Author(s):  
Gianpaolo Gulletta ◽  
Eliana Costa e Silva ◽  
Wolfram Erlhagen ◽  
Ruud Meulenbroek ◽  
Maria Fernanda Pires Costa ◽  
...  

As robots become part of our daily lives, they must be able to cooperate with humans in a natural and efficient manner to be socially accepted. Human-like morphology and motion are often considered key features for intuitive human–robot interaction because they allow human peers to easily predict the final intention of a robotic movement. Here, we present a novel motion planning algorithm, the Human-like Upper-limb Motion Planner, for the upper limb of anthropomorphic robots, which generates collision-free trajectories with human-like characteristics. Inspired mainly by established theories of human motor control, the planning process takes into account a task-dependent hierarchy of spatial and postural constraints modelled as cost functions. For experimental validation, we generate arm-hand trajectories in a series of tasks including simple point-to-point reaching movements and sequential object-manipulation paradigms. A major contribution to the current literature is our specific focus on the kinematics of naturalistic arm movements during obstacle avoidance. To evaluate human-likeness, we observe kinematic regularities and adopt smoothness measures that are applied in human motor control studies to distinguish between well-coordinated and impaired movements. The results of this study show that the proposed algorithm is capable of planning arm-hand movements with human-like kinematic features at a computational cost that allows fluent and efficient human–robot interaction.
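Smoothness measures of the kind mentioned here are computed from trajectory kinematics. The snippet below sketches one widely used metric from the motor control literature, dimensionless squared jerk; the exact measures adopted in the paper may differ.

```python
# Sketch of a common smoothness metric from human motor control
# studies: dimensionless squared jerk of an end-effector trajectory.
# Lower values indicate smoother, better-coordinated movement. The
# paper's exact smoothness measures may differ.
import numpy as np


def dimensionless_jerk(positions, dt):
    """Compute the dimensionless squared jerk of a sampled trajectory.

    positions: array of shape (T, 3), end-effector positions in meters.
    dt: sampling interval in seconds.
    """
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    jerk = np.gradient(acceleration, dt, axis=0)
    duration = dt * (len(positions) - 1)
    # Path length of the movement, used to make the metric scale-free.
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    # Integrate squared jerk magnitude over the movement, then
    # normalize by duration^5 / path_length^2 so units cancel.
    squared_jerk = np.sum(np.linalg.norm(jerk, axis=1) ** 2) * dt
    return squared_jerk * duration ** 5 / path_length ** 2
```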

