Sample Efficient Learning of Path Following and Obstacle Avoidance Behavior for Quadrotors

2018 ◽  
Vol 3 (4) ◽  
pp. 3852-3859 ◽  
Author(s):  
Stefan Stevšić ◽  
Tobias Nägeli ◽  
Javier Alonso-Mora ◽  
Otmar Hilliges

2014 ◽  
Vol 20 (10) ◽  
pp. 1751-1756 ◽  
Author(s):  
Byambaa Dorj ◽  
Doopalam Tuvshinjargal ◽  
Kil To Chong ◽  
Dong Pyo Hong ◽  
Deok Jin Lee

2016 ◽  
Vol 39 (8) ◽  
pp. 1236-1252 ◽  
Author(s):  
Basant Kumar Sahu ◽  
Bidyadhar Subudhi

This paper presents the development of simple but powerful path-following and obstacle-avoidance control laws for an underactuated autonomous underwater vehicle (AUV). A potential-function-based proportional derivative (PFPD) control law and a potential-function-based augmented proportional derivative (PFAPD) control law are developed to govern the motion of the AUV in an obstacle-rich environment. For obstacle avoidance, a mathematical potential function formulates the repulsive force between the AUV and the solid obstacles intersecting the desired path. Numerical simulations are carried out to study the efficacy of the proposed controllers. To reduce the overshoots and steady-state errors observed with the PFPD controller, the PFAPD controller is designed to drive the AUV along the desired trajectory. The simulation results show that both proposed controllers are able to drive the AUV along the desired path while avoiding the obstacles in an obstacle-rich environment, and that the PFAPD outperforms the PFPD in tracking the desired trajectory. The results also demonstrate that highly complicated controllers are not necessary to solve the obstacle-avoidance and path-following problems of underactuated AUVs; these problems can be solved with PFAPD controllers.
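
As a rough illustration of the control structure described in this abstract, the following Python sketch combines a Khatib-style repulsive potential with a PD tracking term. The gains kp and kd, the influence radius d0, the scaling factor eta and the planar kinematic model are illustrative assumptions, not the paper's design.

import numpy as np

def repulsive_force(p, obstacles, eta=1.0, d0=2.0):
    """Sum of repulsive forces pushing position p away from nearby obstacles."""
    f = np.zeros(2)
    for q in obstacles:
        diff = p - q
        d = np.linalg.norm(diff)
        if 0.0 < d <= d0:
            # Negative gradient of U = 0.5*eta*(1/d - 1/d0)**2 w.r.t. p.
            f += eta * (1.0 / d - 1.0 / d0) * diff / d**3
    return f

def pfpd_control(p, v, p_ref, v_ref, obstacles, kp=2.0, kd=1.5):
    """PD tracking of the reference path plus the repulsive potential term."""
    e, e_dot = p_ref - p, v_ref - v
    return kp * e + kd * e_dot + repulsive_force(p, obstacles)

Outside the influence radius d0 the repulsive term vanishes and the law reduces to plain PD path tracking, which is what makes this family of controllers so simple.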


2019 ◽  
Vol 49 (10) ◽  
pp. 1343-1352 ◽  
Author(s):  
Yankai Shen ◽  
Qinan Luo ◽  
Chen Wei ◽  
Haibin Duan ◽  
Yimin Deng

2021 ◽  
Vol 103 (4) ◽  
Author(s):  
Bartomeu Rubí ◽  
Bernardo Morcego ◽  
Ramon Pérez

A deep reinforcement learning approach for solving the quadrotor path-following and obstacle-avoidance problem is proposed in this paper. The problem is solved with two agents: one for the path-following task and another for the obstacle-avoidance task. A novel structure is proposed, in which the action computed by the obstacle-avoidance agent becomes the state of the path-following agent. Compared to traditional deep reinforcement learning approaches, the proposed method makes the outcomes of the training process interpretable, is faster and can be safely trained on the real quadrotor. Both agents implement the Deep Deterministic Policy Gradient (DDPG) algorithm. The path-following agent was developed in a previous work. The obstacle-avoidance agent uses the information provided by a low-cost LIDAR to detect obstacles around the vehicle. Since the LIDAR has a narrow field of view, an approach for providing the agent with a memory of previously seen obstacles is developed. A detailed description of the process of defining the state vector, the reward function and the action of this agent is given. The agents are programmed in Python/TensorFlow and are trained and tested on the RotorS/Gazebo platform. Simulation results demonstrate the validity of the proposed approach.
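
The cascaded two-agent structure described in this abstract can be sketched in Python as follows. The LidarMemory class, the sector-based decay scheme and the policy call signatures are hypothetical placeholders standing in for the authors' DDPG networks and their actual state layout.

import numpy as np

class LidarMemory:
    """Decaying record of the closest obstacle seen per angular sector,
    compensating for the narrow field of view of a low-cost LIDAR."""
    def __init__(self, n_sectors=36, decay=0.95, max_range=6.0):
        self.ranges = np.full(n_sectors, max_range)
        self.decay = decay
        self.max_range = max_range

    def update(self, seen_sectors, seen_ranges):
        # Fade old readings back toward max_range, then overwrite the
        # sectors covered by the current scan.
        self.ranges = self.max_range - self.decay * (self.max_range - self.ranges)
        self.ranges[seen_sectors] = seen_ranges

def control_step(pf_policy, oa_policy, memory, path_state, seen_sectors, seen_ranges):
    memory.update(seen_sectors, seen_ranges)
    oa_action = oa_policy(memory.ranges)                 # e.g. a commanded deviation
    pf_state = np.concatenate([path_state, oa_action])   # OA action enters the PF state
    return pf_policy(pf_state)                           # velocity command to the quadrotor

Feeding the obstacle-avoidance action into the path-following state, rather than mixing both objectives into one reward, is what keeps the two training processes separable and interpretable.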


Drones ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 74
Author(s):  
Xingyu Liu ◽  
Xiaojia Xiang ◽  
Yuan Chang ◽  
Chao Yan ◽  
Han Zhou ◽  
...  

Flocking navigation, involving alignment-guaranteed path following and collision avoidance against obstacles, remains a challenging task for drones. In this paper, we investigate how to implement flocking navigation when only one drone in the swarm masters the predetermined path, instead of all drones mastering their routes. Specifically, this paper proposes a hierarchical weighting Vicsek model (WVEM), which consists of a hierarchical weighting mechanism and a layer regulation mechanism. Based on the hierarchical weighting mechanism, all drones are divided into three layers, and the drones at different layers are assigned different weights to guarantee the convergence speed of alignment. The layer regulation mechanism is developed to realize more flexible obstacle avoidance. We analyze the influence of the WVEM parameters, such as the weighting values and the interaction radius, and demonstrate the flocking navigation performance through a series of simulation experiments.
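
A minimal Python sketch of a weighted Vicsek alignment step, in the spirit of the hierarchical weighting mechanism described above: each drone adopts the weighted circular mean of its neighbours' headings. The three-layer weight vector layer_w, the interaction radius r and the noise term are illustrative assumptions; the WVEM layer regulation and obstacle-avoidance terms are omitted.

import numpy as np

def weighted_vicsek_step(pos, theta, layer_w, r=1.0, noise=0.05, rng=None):
    """pos: (N, 2) positions; theta: (N,) headings; layer_w: (N,) weights."""
    rng = rng or np.random.default_rng()
    new_theta = np.empty_like(theta)
    for i in range(len(pos)):
        nbr = np.linalg.norm(pos - pos[i], axis=1) <= r   # neighbourhood includes self
        # Weighted circular mean of the neighbours' headings, plus noise.
        s = np.sum(layer_w[nbr] * np.sin(theta[nbr]))
        c = np.sum(layer_w[nbr] * np.cos(theta[nbr]))
        new_theta[i] = np.arctan2(s, c) + rng.uniform(-noise, noise)
    return new_theta

Drones then move one step along their new headings at constant speed, as in the standard Vicsek model; assigning the path-mastering leader the largest weight biases the consensus heading of the flock toward the predetermined path.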

