Deep Reinforcement Learning For Visual Navigation of Wheeled Mobile Robots

Author(s):  
Ezebuugo Nwaonumah ◽  
Biswanath Samanta

Abstract A study is presented on applying deep reinforcement learning (DRL) for visual navigation of wheeled mobile robots (WMRs), both in simulation and in real-time implementation under dynamic and unknown environments. The policy-gradient-based asynchronous advantage actor-critic (A3C) algorithm has been considered. RGB (red, green, and blue) and depth images have been used as inputs in the implementation of the A3C algorithm to generate control commands for autonomous navigation of a WMR. The initial A3C network was generated and trained progressively in OpenAI Gym-Gazebo-based simulation environments within the robot operating system (ROS) framework for a popular target WMR, the Kobuki TurtleBot2. A pre-trained deep neural network, ResNet50, was used after further training with regrouped objects commonly found in a laboratory setting for target-driven visual navigation of the TurtleBot2 through DRL. The performance of A3C with multiple computation threads (4, 6, and 8) was simulated and compared in three simulation environments. The performance of A3C improved with the number of threads. The trained model of A3C with 8 threads was implemented with online learning using an Nvidia Jetson TX2 on board the TurtleBot2 for mapless navigation in different real-life environments. Details of the methodology and results of simulation and real-time implementation through transfer learning are presented, along with recommendations for future work.
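As an illustrative aside (not the authors' implementation), the advantage actor-critic update at the core of A3C can be sketched over a plain feature vector standing in for pooled CNN (e.g. ResNet50) features of an RGB-D frame. All sizes, names, and the toy reward below are assumptions for the sketch:

```python
# Minimal one-step advantage actor-critic update (the per-worker step in
# A3C-style training), using a tiny linear actor and critic over a
# feature vector. Purely illustrative; all hyperparameters are assumed.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 8   # stand-in for pooled image features
N_ACTIONS = 4    # e.g. discrete motion commands for the robot
GAMMA = 0.99     # discount factor
LR = 0.1         # learning rate

W_pi = np.zeros((N_ACTIONS, N_FEATURES))  # actor: action logits
w_v = np.zeros(N_FEATURES)                # critic: state value

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def actor_critic_update(features, action, reward, next_features, done):
    """One actor-critic step: advantage = bootstrapped target - V(s)."""
    global W_pi, w_v
    v = w_v @ features
    v_next = 0.0 if done else w_v @ next_features
    advantage = reward + GAMMA * v_next - v

    # Critic: move V(s) toward the bootstrapped target.
    w_v += LR * advantage * features

    # Actor: policy gradient, grad log pi(a|s) * advantage.
    probs = softmax(W_pi @ features)
    grad_log = -np.outer(probs, features)
    grad_log[action] += features
    W_pi += LR * advantage * grad_log
    return advantage

# Toy rollout: one-step episodes where only action 0 is rewarded.
s = rng.random(N_FEATURES)
for _ in range(200):
    probs = softmax(W_pi @ s)
    a = int(rng.choice(N_ACTIONS, p=probs))
    r = 1.0 if a == 0 else 0.0
    s_next = rng.random(N_FEATURES)
    actor_critic_update(s, a, r, s_next, done=True)
    s = s_next

final_probs = softmax(W_pi @ s)
print("greedy action:", int(final_probs.argmax()))
```

In the full A3C algorithm, several such workers run this update in parallel threads against shared parameters, which is why the abstract compares 4, 6, and 8 computation threads.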


