Locomotion Control of a Hybrid Propulsion Biomimetic Underwater Vehicle via Deep Reinforcement Learning

Author(s): Tiandong Zhang, Rui Wang, Yu Wang, Shuo Wang
2019, pp. 027836491985944
Author(s): David Surovik, Kun Wang, Massimo Vespignani, Jonathan Bruce, Kostas E. Bekris

Tensegrity robots, which are prototypical examples of hybrid soft–rigid robots, exhibit dynamical properties that provide ruggedness and adaptability. They also bring about, however, major challenges for locomotion control. Owing to high dimensionality and the complex evolution of contact states, data-driven approaches are appropriate for producing viable feedback policies for tensegrities. Guided policy search (GPS), a sample-efficient hybrid framework for optimization and reinforcement learning, has previously been applied to generate periodic, axis-constrained locomotion by an icosahedral tensegrity on flat ground. Varying environments and tasks, however, create a need for more adaptive and general locomotion control that actively utilizes an expanded space of robot states. This implies significantly higher needs in terms of sample data and setup effort. This work mitigates such requirements by proposing a new GPS-based reinforcement learning pipeline, which exploits the vehicle’s high degree of symmetry and appropriately learns contextual behaviors that are sustainable without periodicity. Newly achieved capabilities include axially unconstrained rolling, rough terrain traversal, and rough incline ascent. These tasks are evaluated for a small variety of key model parameters in simulation and tested on the NASA hardware prototype, SUPERball. Results confirm the utility of symmetry exploitation and the adaptability of the vehicle. They also shed light on numerous strengths and limitations of the GPS framework for policy design and transfer to real hybrid soft–rigid robots.
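A minimal Python sketch of the symmetry-exploitation idea described above: states related by a symmetry of the vehicle are mapped to a canonical form before the policy is queried, and the resulting action is mapped back, so one learned policy covers all symmetric configurations. The symmetry group, the per-actuator state layout, and the stand-in linear policy are illustrative assumptions, not the paper's actual GPS implementation.

# Sketch of symmetry-based policy reuse. The symmetry permutations, state
# layout, and base policy below are illustrative assumptions only.
import numpy as np

# Hypothetical symmetry group: permutations of actuator indices that map the
# vehicle onto itself. Two example elements are shown; the rotation group of
# an icosahedron actually has 60.
SYMMETRY_PERMS = [
    np.arange(6),                  # identity
    np.array([1, 2, 0, 4, 5, 3]),  # illustrative 3-fold rotation
]

def canonicalize(state):
    """Map a state into a canonical orientation and remember the transform.

    'state' is assumed to hold one scalar per actuator (e.g., cable rest
    length); the canonical form is the lexicographically smallest
    permutation, so all symmetric states share one policy input.
    """
    best_perm, best_state = None, None
    for perm in SYMMETRY_PERMS:
        candidate = state[perm]
        if best_state is None or tuple(candidate) < tuple(best_state):
            best_perm, best_state = perm, candidate
    return best_state, best_perm

def symmetric_policy(base_policy, state):
    """Evaluate a single learned policy on any symmetric variant of a state."""
    canon_state, perm = canonicalize(state)
    canon_action = base_policy(canon_state)
    # Map the action back to the original actuator ordering.
    action = np.empty_like(canon_action)
    action[perm] = canon_action
    return action

if __name__ == "__main__":
    # Stand-in linear policy; a trained GPS policy would replace this.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((6, 6))
    base_policy = lambda s: W @ s

    s = rng.standard_normal(6)
    print(symmetric_policy(base_policy, s))

The payoff of this construction is sample efficiency: training data gathered in one orientation informs behavior in every symmetric orientation, which is what makes the expanded state space tractable.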


2021
Author(s): Ricardo B. Grando, Junior C. de Jesus, Victor A. Kich, Alisson H. Kolling, Nicolas P. Bortoluzzi, ...

Author(s): Zhuo Wang, Shiwei Zhang, Xiaoning Feng, Yancheng Sui

The environmental adaptability of autonomous underwater vehicles has long been a problem for path planning. Although reinforcement learning can improve environmental adaptability, its convergence is slowed by multi-behavior coupling, which makes it difficult for an autonomous underwater vehicle to avoid moving obstacles. This article proposes a multi-behavior critic reinforcement learning algorithm for autonomous underwater vehicle path planning that overcomes the oscillating amplitudes and low learning efficiency common in the early training stages of traditional actor–critic algorithms. The multi-behavior critic assesses the actions of the actor from several perspectives, such as energy saving and security, and combines these aspects into a single overall evaluation. The policy gradient method serves as the actor and the value function method as the critic; both are approximated by backpropagation neural networks whose parameters are updated by gradient descent. Simulation results show that the method optimizes learning in the environment and improves learning efficiency, meeting the real-time and adaptability requirements of dynamic obstacle avoidance for autonomous underwater vehicles.
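The following Python sketch illustrates the actor–critic structure described above: a softmax policy-gradient actor plus one value-function critic per behavior (energy and safety here), whose TD errors are combined by weights into a single evaluation that drives the actor update. The toy dynamics, reward terms, and mixing weights are assumptions for illustration, and the linear approximators stand in for the article's backpropagation neural networks.

# Sketch of a multi-behavior actor-critic. Environment, rewards, and
# weights are illustrative assumptions, not the article's implementation.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS = 4, 3
GAMMA, LR_ACTOR, LR_CRITIC = 0.95, 0.01, 0.05
BEHAVIOR_WEIGHTS = {"energy": 0.4, "safety": 0.6}   # assumed mixing weights

theta = np.zeros((N_ACTIONS, STATE_DIM))                 # actor: softmax policy
w = {k: np.zeros(STATE_DIM) for k in BEHAVIOR_WEIGHTS}   # one linear critic each

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def policy(s):
    return softmax(theta @ s)

def step(s, a):
    """Toy stand-in dynamics: returns the next state and per-behavior rewards."""
    s_next = np.tanh(s + 0.1 * (a - 1) + 0.05 * rng.standard_normal(STATE_DIM))
    rewards = {"energy": -0.01 * abs(a - 1),                      # thruster effort
               "safety": -float(np.linalg.norm(s_next) > 1.5)}    # obstacle proxy
    return s_next, rewards

s = rng.standard_normal(STATE_DIM)
for t in range(2000):
    probs = policy(s)
    a = rng.choice(N_ACTIONS, p=probs)
    s_next, rewards = step(s, a)

    # Each critic evaluates its own behavior; the weighted sum of TD errors
    # forms the overall assessment of the actor's action.
    td_combined = 0.0
    for k, weight in BEHAVIOR_WEIGHTS.items():
        td = rewards[k] + GAMMA * (w[k] @ s_next) - (w[k] @ s)
        w[k] += LR_CRITIC * td * s               # gradient-descent critic update
        td_combined += weight * td

    # Policy-gradient (log-likelihood) update scaled by the combined TD error.
    grad_log = -np.outer(probs, s)
    grad_log[a] += s
    theta += LR_ACTOR * td_combined * grad_log
    s = s_next

print("final action probabilities:", policy(s))

Separating the critics per behavior, as sketched here, is one plausible reading of "multi-behavior critic": each perspective keeps its own value estimate, and only their weighted combination reaches the actor, so the behaviors are decoupled during value learning.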

