Deep and Reinforcement Learning in Perception and Control for Autonomous Aerial Robots

2020 ◽  
Author(s):  
Alejandro Rodríguez Ramos

2009 ◽  
Vol 129 (4) ◽  
pp. 363-367
Author(s):  
Tomoyuki Maeda ◽  
Makishi Nakayama ◽  
Hiroshi Narazaki ◽  
Akira Kitamura

Author(s):  
Ivan Herreros

This chapter discusses basic concepts from control theory and machine learning to facilitate a formal understanding of animal learning and motor control. It first distinguishes between feedback and feed-forward control strategies, and then introduces the classification of machine learning applications into supervised, unsupervised, and reinforcement learning problems. Next, it links these concepts with their counterparts in the psychology of animal learning, highlighting the analogies between supervised learning and classical conditioning, between reinforcement learning and operant conditioning, and between unsupervised learning and perceptual learning. Additionally, it interprets innate and acquired actions from the standpoint of feedback vs. anticipatory and adaptive control. Finally, it argues that this framework of translating knowledge between formal and biological disciplines can serve not only to structure and advance our understanding of brain function, but also to enrich engineering solutions at the level of robot learning and control with insights from biology.
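To make the feedback vs. anticipatory (feed-forward) distinction concrete, here is a minimal Python sketch on a toy first-order plant. The plant, gains, and model parameter are illustrative assumptions, not anything from the chapter:

def feedback_u(x, target, kp=2.0):
    # Closed loop: react to the currently measured error
    return kp * (target - x)

def feedforward_u(x, target, a=0.5):
    # Open loop: anticipate by inverting an assumed plant model;
    # the measurement x is ignored entirely
    return a * target

def simulate(controller, target=1.0, a=0.5, dt=0.01, steps=2000):
    # Toy first-order plant x' = -a*x + u, integrated with Euler steps
    x = 0.0
    for _ in range(steps):
        x += dt * (-a * x + controller(x, target))
    return x

print(simulate(feedback_u))     # settles near 0.8: proportional feedback
                                # leaves a steady-state error
print(simulate(feedforward_u))  # settles at 1.0, but only because the
                                # assumed model parameter a matches the plant

The contrast mirrors the chapter's point: feedback is robust but reactive, while anticipatory control is exact only insofar as its internal model is.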


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 999
Author(s):  
Ahmad Taher Azar ◽  
Anis Koubaa ◽  
Nada Ali Mohamed ◽  
Habiba A. Ibrahim ◽  
Zahra Fathy Ibrahim ◽  
...  

Unmanned Aerial Vehicles (UAVs) are increasingly being used in many challenging and diversified applications, in both the civilian and military fields: infrastructure inspection, traffic patrolling, remote sensing, mapping, surveillance, rescuing humans and animals, environment monitoring, and Intelligence, Surveillance, Target Acquisition, and Reconnaissance (ISTAR) operations, to name a few. However, the use of UAVs in these applications requires a substantial level of autonomy; in other words, UAVs should be able to accomplish planned missions in unexpected situations without human intervention. To ensure this level of autonomy, many artificial intelligence algorithms have been designed, targeting the guidance, navigation, and control (GNC) of UAVs. In this paper, we describe the state of the art of one subset of these algorithms: deep reinforcement learning (DRL) techniques. We provide a detailed description of these techniques and identify the current limitations in this area. We note that most of these DRL methods were designed to ensure stable and smooth UAV navigation by training in computer-simulated environments, and that further research efforts are needed to address the challenges that restrain their deployment in real-life scenarios.
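As a concrete illustration of the simulation-driven training the paper identifies as the dominant pattern, the following is a minimal sketch. The toy hover environment, discretized actions, and tabular Q-learning update are illustrative stand-ins for the high-fidelity simulators and deep networks the surveyed methods actually use:

import random

class ToyUavEnv:
    # 1-D hover toy: state = altitude error, action = thrust correction
    def reset(self):
        self.err, self.t = random.uniform(-1.0, 1.0), 0
        return self.err
    def step(self, action):
        self.err += 0.1 * action + random.gauss(0.0, 0.01)  # noisy dynamics
        self.t += 1
        reward = -abs(self.err)                   # penalize altitude error
        done = self.t >= 100 or abs(self.err) > 2.0
        return self.err, reward, done

ACTIONS = [-1.0, 0.0, 1.0]  # discretized thrust corrections
q = {}                      # (rounded state, action) -> value estimate

def train(env, episodes=300, alpha=0.1, gamma=0.99, eps=0.2):
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            k = round(s, 1)
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda b: q.get((k, b), 0.0)))
            s, r, done = env.step(a)
            best = max(q.get((round(s, 1), b), 0.0) for b in ACTIONS)
            old = q.get((k, a), 0.0)
            q[(k, a)] = old + alpha * (r + gamma * best - old)  # Q-learning

train(ToyUavEnv())

The sim-to-real gap noted above arises exactly here: a policy tuned to the simulated dynamics and noise model has no guarantee of matching the real vehicle's.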


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2534
Author(s):  
Oualid Doukhi ◽  
Deok-Jin Lee

Autonomous navigation and collision avoidance missions represent a significant challenge for robotic systems, as they generally operate in dynamic environments that require a high level of autonomy and flexible decision-making capabilities. This challenge is even greater for micro aerial vehicles (MAVs), given their limited size and computational power. This paper presents a novel approach for enabling a micro aerial vehicle equipped with a laser range finder to autonomously navigate among obstacles and reach a user-specified goal location in a GPS-denied environment, without the need for mapping or path planning. The proposed system uses an actor–critic reinforcement learning technique to train the aerial robot in the Gazebo simulator to perform a point-goal navigation task by directly mapping the MAV's noisy state and laser scan measurements to continuous motion control. The resulting policy can perform collision-free flight in the real world despite being trained entirely in a 3D simulator. Intensive simulations and real-time experiments were conducted and compared against a nonlinear model predictive control technique to demonstrate generalization to new, unseen environments and robustness against localization noise. The obtained results show the system's effectiveness in flying safely and reaching the desired points by planning smooth forward linear velocities and heading rates.
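For illustration, the sketch below shows the general shape of such an actor: a small network mapping the MAV state and laser scan directly to a continuous forward velocity and heading rate. The scan length, state layout, layer sizes, and output scaling are assumptions made for the example, and the random weights stand in for a trained policy:

import numpy as np

rng = np.random.default_rng(0)
N_SCAN, N_STATE, N_HIDDEN = 36, 4, 64  # assumed dimensions

W1 = rng.normal(0.0, 0.1, (N_HIDDEN, N_SCAN + N_STATE))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (2, N_HIDDEN))
b2 = np.zeros(2)

def actor(scan, state, v_max=1.0, w_max=1.0):
    # Map (laser scan, relative-goal state) -> (forward speed, yaw rate)
    x = np.concatenate([scan, state])
    h = np.tanh(W1 @ x + b1)                # hidden layer
    v_raw, w_raw = W2 @ h + b2
    v = v_max * (np.tanh(v_raw) + 1.0) / 2  # forward speed in [0, v_max]
    w = w_max * np.tanh(w_raw)              # yaw rate in [-w_max, w_max]
    return v, w

scan = np.full(N_SCAN, 5.0)             # e.g. 5 m of free space all around
state = np.array([2.0, 0.3, 0.0, 0.0])  # e.g. goal distance, bearing, velocity
print(actor(scan, state))

The squashed outputs are what keep the commanded velocities smooth and bounded; the critic (not shown) is only needed during training.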


Author(s):  
Ju Xie ◽  
Xing Xu ◽  
Feng Wang ◽  
Haobin Jiang

The driver model is the decision-making and control center of an intelligent vehicle. In order to improve the adaptability of intelligent vehicles under complex driving conditions, and to simulate the manipulation characteristics of a skilled driver within the driver-vehicle-road closed-loop system, a human-like longitudinal driver model for intelligent vehicles based on reinforcement learning is proposed. This paper first builds a lateral driver model for intelligent vehicles based on optimal preview control theory. Then, the control correction link of the longitudinal driver model is established to calculate the throttle opening or brake pedal travel required for the desired longitudinal acceleration. Moreover, the reinforcement learning agent for the longitudinal driver model is trained in parallel using a comprehensive evaluation index and skilled-driver data. Lastly, training performance and scenario verification across simulation experiments and real-car tests are used to confirm the effectiveness of the reinforcement learning based longitudinal driver model. The results show that the proposed human-like longitudinal driver model can help intelligent vehicles effectively imitate the speed-control behavior of a skilled driver in various path-following scenarios.
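As a rough illustration of such a control correction link, the sketch below converts a desired longitudinal acceleration into a throttle opening or brake pedal travel via a crude inverse vehicle model. The linear force maps and drag term are illustrative assumptions, not the paper's calibrated model:

def correction_link(a_des, v, mass=1500.0, drag=0.4,
                    max_throttle_force=4000.0, max_brake_force=8000.0):
    # Return (throttle, brake), each in [0, 1], for a desired acceleration
    f_needed = mass * a_des + drag * v * v  # required wheel force incl. drag
    if f_needed >= 0.0:
        return min(f_needed / max_throttle_force, 1.0), 0.0
    return 0.0, min(-f_needed / max_brake_force, 1.0)

# Holding 0.5 m/s^2 at 20 m/s needs 750 N plus 160 N of drag:
print(correction_link(0.5, 20.0))   # -> (~0.23, 0.0)

In the architecture described above, the reinforcement learning agent outputs the desired acceleration and a link of this kind maps it onto the pedals.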


Author(s):  
Dimitris C. Dracopoulos ◽  
Dimitrios Effraimidis

Computational intelligence techniques such as neural networks, fuzzy logic, and hybrid neuroevolutionary and neuro-fuzzy methods have been successfully applied to complex control problems over the last two decades. Genetic programming, a field under the umbrella of evolutionary computation, has not yet been applied to a sufficiently large number of challenging and difficult control problems to establish its viability as a general methodology for such problems. Helicopter hovering control is considered a challenging control problem in the literature and has been included in the benchmark sets of recent reinforcement learning competitions for deriving new intelligent controllers. This chapter shows how genetic programming can be applied to derive controllers for this nonlinear, high-dimensional, complex control system. The evolved controllers are compared with the neuroevolutionary approach that won first place in the 2008 helicopter hovering reinforcement learning competition. The two approaches perform similarly (and in some cases GP performs better than the competition winner), even when unknown wind is added to the dynamic system and control is based on previously evolved structures; that is, the evolved controllers have good generalization capability.
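For readers unfamiliar with the representation, the following sketch shows the core idea: a controller expressed as a randomly generated expression tree over error terms, scored on a toy plant. The primitive set, terminals, and double-integrator fitness are illustrative assumptions; a real GP run adds crossover, mutation, and the full helicopter dynamics:

import random, operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['err', 'err_rate', 1.0, 0.1]  # assumed controller inputs

def random_tree(depth=3):
    # Grow a random expression tree over the primitives and terminals
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)),
            random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    # Recursively evaluate a tree given current values of the terminals
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, env), evaluate(right, env))
    return env[tree] if isinstance(tree, str) else tree

def fitness(tree, dt=0.02, steps=200):
    # Cost of regulating a toy double integrator to the origin
    x, v, cost = 1.0, 0.0, 0.0
    for _ in range(steps):
        u = max(-5.0, min(5.0, evaluate(tree, {'err': -x, 'err_rate': -v})))
        v += dt * u
        x += dt * v
        cost += dt * (x * x + 0.01 * u * u)
    return cost

pop = [random_tree() for _ in range(50)]
best = min(pop, key=fitness)   # selection only; variation operators omitted
print(best, fitness(best))

Because the result is a symbolic expression rather than a weight matrix, the evolved controller can be inspected directly, which is part of GP's appeal for control.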

