Adaptive low-level control of autonomous underwater vehicles using deep reinforcement learning

2018 · Vol 107 · pp. 71-86
Author(s): Ignacio Carlucho, Mariano De Paula, Sen Wang, Yvan Petillot, Gerardo G. Acosta
2018 · Vol 212 (1) · pp. 105-123
Author(s): Tomasz Praczyk, Piotr Szymak, Krzysztof Naus, Leszek Pietrukaniec, Stanisław Hożyń

Abstract: The paper presents the first part of the final report on all the experiments with a biomimetic autonomous underwater vehicle (BAUV) performed within the confines of the project entitled 'Autonomous underwater vehicles with silent undulating propulsion for underwater ISR', financed by the Polish National Centre for Research and Development. The report covers experiments in the swimming pool as well as in real conditions, that is, both in a lake and in the sea. The tests presented in this part of the final report focus on low-level control.


2021 · Vol 5 (4) · pp. 1-24
Author(s): Siddharth Mysore, Bassel Mabsout, Kate Saenko, Renato Mancuso

We focus on the problem of reliably training Reinforcement Learning (RL) models (agents) for stable low-level control in embedded systems and test our methods on a high-performance, custom-built quadrotor platform. A common but often under-studied problem in developing RL agents for continuous control is that the control policies developed are not always smooth. This lack of smoothness can be a major problem when learning controllers, as it can result in control instability and hardware failure. Issues of noisy control are further accentuated when training RL agents in simulation, because simulators are ultimately imperfect representations of reality—what is known as the reality gap. To combat issues of instability in RL agents, we propose a systematic framework, REinforcement-based transferable Agents through Learning (RE+AL), for designing simulated training environments that preserve the quality of trained agents when transferred to real platforms. RE+AL is an evolution of the Neuroflight infrastructure detailed in technical reports prepared by members of our research group. Neuroflight is a state-of-the-art framework for training RL agents for low-level attitude control. RE+AL improves and completes Neuroflight by solving a number of important limitations that hindered the deployment of Neuroflight to real hardware. We benchmark RE+AL on the NF1 racing quadrotor developed as part of Neuroflight. We demonstrate that RE+AL significantly mitigates the previously observed smoothness issues in RL agents. Additionally, RE+AL is shown to consistently train agents that are flight-capable, with minimal degradation in controller quality upon transfer. RE+AL agents also learn to outperform a tuned PID controller, achieving lower tracking error, smoother control, and reduced power consumption. To the best of our knowledge, RE+AL agents are the first RL-based controllers trained in simulation to outperform a well-tuned PID controller on a real-world controls problem that is solvable with classical control.
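The smoothness problem described in this abstract is often addressed by shaping the reward to penalize step-to-step changes in the commanded action. The sketch below is an illustrative assumption, not the paper's actual reward design; the function name and weights are hypothetical:

```python
import numpy as np

def smoothness_penalized_reward(tracking_error, action, prev_action,
                                w_track=1.0, w_smooth=0.1):
    """Combine a tracking term with an action-smoothness penalty.

    tracking_error : scalar error between setpoint and measured state
    action, prev_action : current and previous actuator commands (arrays)
    w_track, w_smooth : illustrative weights (assumptions, not from the paper)
    """
    track_term = -w_track * abs(tracking_error)
    # Penalize large step-to-step changes in the commanded action, which
    # discourages the high-frequency chatter that damages real actuators.
    smooth_term = -w_smooth * float(np.sum((action - prev_action) ** 2))
    return track_term + smooth_term

# With identical tracking error, a jittery command sequence scores lower.
a_prev = np.array([0.5, 0.5, 0.5, 0.5])
r_smooth = smoothness_penalized_reward(0.1, np.array([0.52, 0.5, 0.5, 0.5]), a_prev)
r_jitter = smoothness_penalized_reward(0.1, np.array([0.9, 0.1, 0.9, 0.1]), a_prev)
assert r_smooth > r_jitter
```

Under such a reward, the agent trades a little tracking accuracy for commands that change gradually, which also tends to transfer better across the reality gap.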


2019 · Vol 4 (4) · pp. 4224-4230
Author(s): Nathan O. Lambert, Daniel S. Drew, Joseph Yaconelli, Sergey Levine, Roberto Calandra, ...

2021 · Vol 7
Author(s): Simen Theie Havenstrøm, Adil Rasheed, Omer San

Control theory provides engineers with a multitude of tools to design controllers that shape the closed-loop behavior and stability of dynamical systems. These methods rely heavily on insights into the mathematical model governing the physical system. However, in complex systems, such as autonomous underwater vehicles performing the dual objective of path following and collision avoidance, decision making becomes nontrivial. We propose a solution using state-of-the-art Deep Reinforcement Learning (DRL) techniques to develop autonomous agents capable of achieving this hybrid objective without a priori knowledge of the goal or the environment. Our results demonstrate the viability of DRL for path following and collision avoidance, moving towards human-level decision making in autonomous vehicle systems even within extreme obstacle configurations.
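A hybrid objective like the one in this abstract is commonly encoded as a weighted blend of a path-following term and a collision-avoidance term. The sketch below is a minimal illustration under assumed forms and weights; it is not the reward used by the paper:

```python
import numpy as np

def hybrid_reward(cross_track_error, obstacle_distance,
                  lam=0.5, d_safe=5.0):
    """Blend path-following and collision-avoidance objectives.

    cross_track_error : distance from the vehicle to the desired path
    obstacle_distance : range to the nearest detected obstacle
    lam               : trade-off weight in [0, 1] (illustrative assumption)
    d_safe            : range below which obstacles incur a penalty (assumed)
    """
    # Path-following term: reward decays with cross-track error.
    r_path = np.exp(-abs(cross_track_error))
    # Collision-avoidance term: penalty grows linearly as the nearest
    # obstacle closes inside the safety radius; zero when clear of it.
    r_avoid = -max(0.0, (d_safe - obstacle_distance) / d_safe)
    return lam * r_path + (1.0 - lam) * r_avoid

# On the path with no nearby obstacle, the reward is maximal;
# an obstacle inside the safety radius pulls it down.
assert hybrid_reward(0.0, 10.0) > hybrid_reward(0.0, 1.0)
```

The weight `lam` makes the trade-off explicit: values near 1 favor tight path tracking, values near 0 favor conservative obstacle clearance, which is exactly the tension a DRL agent must resolve without a hand-derived model.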

