Collision Anticipation via Deep Reinforcement Learning for Visual Navigation

Author(s): Eduardo Gutiérrez-Maestro, Roberto J. López-Sastre, Saturnino Maldonado-Bascón
Author(s): Zhenhuan Rao, Yuechen Wu, Zifei Yang, Wei Zhang, Shijian Lu, ...

2021
Author(s): Srivatsan Krishnan, Behzad Boroujerdian, William Fu, Aleksandra Faust, Vijay Janapa Reddi

Abstract: We introduce Air Learning, an open-source simulator and gym environment for deep reinforcement learning research on resource-constrained aerial robots. Equipped with domain randomization, Air Learning exposes a UAV agent to a diverse set of challenging scenarios. We seed the toolset with point-to-point obstacle avoidance tasks in three different environments, along with Deep Q Network (DQN) and Proximal Policy Optimization (PPO) trainers. Air Learning assesses the policies' performance under various quality-of-flight (QoF) metrics, such as the energy consumed, endurance, and average trajectory length, on resource-constrained embedded platforms like a Raspberry Pi. We find that the trajectories on an embedded Raspberry Pi are vastly different from those predicted on a high-end desktop system, resulting in up to 40% longer trajectories in one of the environments. To understand the source of such discrepancies, we use Air Learning to artificially degrade high-end desktop performance to mimic what happens on a low-end embedded system. We then propose a mitigation technique that uses hardware-in-the-loop to determine the latency distribution of running the policy on the target platform (the onboard compute of the aerial robot). A latency randomly sampled from this distribution is then added as an artificial delay within the training loop. Training the policy with artificial delays allows us to minimize the hardware gap (the discrepancy in the flight-time metric is reduced from 37.73% to 0.5%). Thus, Air Learning with hardware-in-the-loop characterizes those differences and exposes how the choice of onboard compute affects the aerial robot's performance. We also conduct reliability studies to assess the effect of sensor failures on the learned policies. All put together, Air Learning enables a broad class of deep RL research on UAVs. The source code is available at: https://github.com/harvard-edge/AirLearning.
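
The hardware-in-the-loop mitigation described above boils down to delaying each action by a latency sampled from measurements taken on the target embedded platform. The following Python sketch shows one way that idea could be wired into a gym-style training loop; the `LatencyDelayWrapper` class and the `latency_samples_ms` list are illustrative assumptions, not part of the Air Learning API.

```python
import random
import time


class LatencyDelayWrapper:
    """Gym-style wrapper that injects an artificial action delay sampled
    from a latency distribution measured on the target embedded platform
    (hypothetical sketch of the hardware-in-the-loop idea)."""

    def __init__(self, env, latency_samples_ms):
        self.env = env
        # Latencies (in milliseconds) recorded while running the policy
        # on the onboard compute, e.g. a Raspberry Pi.
        self.latency_samples_ms = latency_samples_ms

    def reset(self, **kwargs):
        return self.env.reset(**kwargs)

    def step(self, action):
        # Hold the action for a randomly sampled inference latency so the
        # training dynamics mimic the slower onboard compute.
        time.sleep(random.choice(self.latency_samples_ms) / 1000.0)
        return self.env.step(action)
```

In practice the sampled delay could also be applied to the simulator clock rather than as a wall-clock sleep; the point is simply that the policy is trained against the latency it will actually experience onboard.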


IEEE Access, 2020, Vol 8, pp. 135426-135442
Author(s): Fanyu Zeng, Chen Wang, Shuzhi Sam Ge

Author(s): Ezebuugo Nwaonumah, Biswanath Samanta

Abstract: A study is presented on applying deep reinforcement learning (DRL) for visual navigation of wheeled mobile robots (WMR), both in simulation and in real-time implementation under dynamic and unknown environments. The policy-gradient-based asynchronous advantage actor-critic (A3C) algorithm has been considered. RGB (red, green, and blue) and depth images have been used as inputs in the implementation of the A3C algorithm to generate control commands for autonomous navigation of the WMR. The initial A3C network was generated and trained progressively in OpenAI Gym-Gazebo-based simulation environments within the Robot Operating System (ROS) framework for a popular target WMR, the Kobuki TurtleBot2. A pre-trained deep neural network, ResNet50, was used after further training with regrouped objects commonly found in a laboratory setting for target-driven visual navigation of the TurtleBot2 through DRL. The performance of A3C with multiple computation threads (4, 6, and 8) was simulated and compared in three simulation environments. The performance of A3C improved with the number of threads. The trained A3C model with 8 threads was implemented with online learning using an Nvidia Jetson TX2 on board the TurtleBot2 for mapless navigation in different real-world environments. Details of the methodology, results of simulation, and real-time implementation through transfer learning are presented, along with recommendations for future work.
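
To make the network described above concrete, the following PyTorch sketch shows an actor-critic model that encodes RGB and depth images with a ResNet50 backbone and outputs action logits (actor) and a state-value estimate (critic), as in A3C. The layer sizes, the single-channel depth stem, and the four-action discrete output are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class A3CVisualPolicy(nn.Module):
    """Illustrative actor-critic network for visual navigation (sketch)."""

    def __init__(self, num_actions=4, feature_dim=2048, hidden_dim=256):
        super().__init__()
        resnet = models.resnet50(weights=None)  # load pre-trained weights in practice
        # Drop the classification layer; keep the 2048-d pooled features.
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])
        # Depth is a single channel; a small conv stem maps it to 3 channels
        # so it can reuse the same RGB backbone (one possible design choice).
        self.depth_stem = nn.Conv2d(1, 3, kernel_size=3, padding=1)
        self.fusion = nn.Sequential(
            nn.Linear(2 * feature_dim, hidden_dim),
            nn.ReLU(),
        )
        self.actor = nn.Linear(hidden_dim, num_actions)  # action logits
        self.critic = nn.Linear(hidden_dim, 1)           # state-value estimate

    def forward(self, rgb, depth):
        f_rgb = self.encoder(rgb).flatten(1)
        f_depth = self.encoder(self.depth_stem(depth)).flatten(1)
        h = self.fusion(torch.cat([f_rgb, f_depth], dim=1))
        return self.actor(h), self.critic(h)


# Minimal usage check with dummy 224x224 inputs.
policy = A3CVisualPolicy()
logits, value = policy(torch.randn(1, 3, 224, 224), torch.randn(1, 1, 224, 224))
```

In A3C, several worker threads would each run a copy of such a network against their own environment instance and asynchronously push gradients to a shared model, which is what the 4-, 6-, and 8-thread comparison above varies.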

