Visual Navigation for Mobile Robots

Robot Vision ◽  
10.5772/9292 ◽  
2010 ◽  
Author(s):  
Nils Axel Andersen ◽ 
Jens Christian Andersen ◽ 
Enis Bayramoglu ◽  
Ole Ravn

2008 ◽  
Vol 5 (3) ◽ 
pp. 223-233 ◽  
Author(s):  
Rong Liu ◽ 
Max Q. H. Meng

Time-to-contact (TTC) provides vital information for obstacle avoidance and for the visual navigation of a robot. In this paper, we present a novel method to estimate the TTC of a moving object for monocular mobile robots. Specifically, the contour of the moving object is first extracted using an active contour model; then the height of the motion contour and its temporal derivative are evaluated to generate the desired TTC estimates. Compared with conventional techniques employing the first-order derivatives of optical flow, the proposed estimator is less prone to optical-flow errors. Experiments using real-world images are conducted, and the results demonstrate that the developed method can successfully estimate TTC with an average relative error (ARVE) of 0.039 using a single calibrated camera.
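The core relationship behind such a contour-height estimator is simple enough to sketch: for an object of fixed physical size, its image height h scales as 1/Z, so TTC = h / (dh/dt). The following is a minimal Python sketch of that relationship under these assumptions, not the authors' implementation; the function name, finite-difference scheme, and division guard are illustrative.

```python
import numpy as np

def estimate_ttc(heights, timestamps):
    """Estimate time-to-contact from the pixel height of a tracked
    contour over time.

    For an object of fixed physical size approaching the camera,
    the image height satisfies h ~ f*H/Z, which gives
        TTC = -Z / (dZ/dt) = h / (dh/dt).
    """
    h = np.asarray(heights, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    # Temporal derivative of the contour height (finite differences).
    dh_dt = np.gradient(h, t)
    # Guard against division by ~zero when the height is nearly constant.
    dh_dt = np.where(np.abs(dh_dt) < 1e-9, np.nan, dh_dt)
    return h / dh_dt  # seconds; positive while the object approaches

# Example: an object whose image height grows as it nears the camera.
t = np.linspace(0.0, 1.0, 11)
h = 100.0 / (2.0 - t)        # true TTC at time t is (2 - t) seconds
print(estimate_ttc(h, t))    # approximately [2.0, 1.9, ..., 1.0]
```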


2015 ◽  
Vol 27 (4) ◽  
pp. 392-400 ◽  
Author(s):  
Keita Kurashiki ◽  
Marcus Aguilar ◽ 
Sakon Soontornvanichkit

[Figure: Mobile robot with a stereo camera]

Autonomous mobile robots have been an active research topic in recent years. In Japan, the Tsukuba Challenge has been held annually since 2007 in order to realize autonomous mobile robots that coexist safely with human beings in society. Driven by the technological incentives of this effort, laser range finder (LRF) based navigation has improved rapidly. A remaining technical issue is reducing the amount of prior information required, since most of these techniques depend on a precise 3D model of the environment, which is poor in both maintainability and scalability. On the other hand, despite intensive studies on vision-based navigation using cameras, no robot in the Challenge has achieved full camera-based navigation. In this paper, an image-based control law to follow the road boundary is proposed. This method is part of a topological navigation scheme intended to reduce prior information and enhance the scalability of the map. Because the controller is designed from the interaction model between the robot motion and image features in the front image, the method is robust to camera calibration errors. The proposed controller is tested through several simulations and indoor/outdoor experiments to verify its performance and robustness. Finally, our results in Tsukuba Challenge 2014 using the proposed controller are presented.
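As a rough illustration of an image-based boundary-following law of this kind, the sketch below maps two image features of the detected boundary line (its lateral pixel offset and its orientation in the image) to an angular-velocity command through proportional feedback. It is a toy stand-in under assumed feature definitions and gains, not the controller derived in the paper.

```python
def boundary_following_control(rho_px, theta_rad,
                               rho_ref=120.0, theta_ref=0.0,
                               k_rho=0.004, k_theta=1.2,
                               v=0.5):
    """Toy image-based controller for road-boundary following.

    rho_px:    lateral position (pixels) of the boundary line in the image
    theta_rad: orientation of the boundary line in the image (rad)
    Returns (v, omega): forward-speed and angular-velocity commands.
    """
    # Errors are measured directly in the image plane, which is why a
    # law of this form tolerates camera calibration error.
    e_rho = rho_px - rho_ref
    e_theta = theta_rad - theta_ref
    omega = -(k_rho * e_rho + k_theta * e_theta)
    return v, omega

# The boundary appears 30 px right of the reference, slightly rotated.
v, omega = boundary_following_control(rho_px=150.0, theta_rad=0.05)
print(f"v = {v:.2f} m/s, omega = {omega:.3f} rad/s")
```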


2020 ◽  
Vol 53 (1) ◽  
pp. 1-34 ◽  
Author(s):  
Yuri D. V. Yasuda ◽  
Luiz Eduardo G. Martins ◽  
Fabio A. M. Cappabianco

2008 ◽  
Vol 53 (3) ◽  
pp. 263-296 ◽  
Author(s):  
Francisco Bonin-Font ◽  
Alberto Ortiz ◽  
Gabriel Oliver

Author(s):  
Ezebuugo Nwaonumah ◽  
Biswanath Samanta

A study is presented on applying deep reinforcement learning (DRL) to the visual navigation of wheeled mobile robots (WMRs), both in simulation and in real-time implementation under dynamic and unknown environments. The policy-gradient-based asynchronous advantage actor-critic (A3C) algorithm is considered. RGB (red, green, and blue) and depth images are used as inputs to the A3C algorithm to generate control commands for autonomous navigation of the WMR. The initial A3C network was generated and trained progressively in OpenAI Gym-Gazebo simulation environments within the robot operating system (ROS) framework for a popular target WMR, the Kobuki TurtleBot2. A pre-trained deep neural network, ResNet50, was used after further training with regrouped objects commonly found in a laboratory setting for target-driven visual navigation of the TurtleBot2 through DRL. The performance of A3C with multiple computation threads (4, 6, and 8) was simulated and compared in three simulation environments, and performance improved with the number of threads. The trained A3C model with 8 threads was implemented with online learning on an Nvidia Jetson TX2 on board the TurtleBot2 for mapless navigation in different real-life environments. Details of the methodology and the results of simulation and real-time implementation through transfer learning are presented, along with recommendations for future work.
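To make the actor-critic structure concrete, here is a minimal PyTorch sketch of a shared-backbone network with a policy head over discrete motion commands and a value head, the two outputs that A3C workers train asynchronously. The layer sizes, four-channel RGB-D input, and action count are illustrative assumptions; the paper's network builds on ResNet50 features rather than this small backbone.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCritic(nn.Module):
    """Minimal shared-backbone actor-critic head in the spirit of A3C:
    one network outputs both a policy over discrete motion commands
    and a state-value estimate."""

    def __init__(self, in_channels=4, n_actions=4):
        # in_channels=4: RGB + depth stacked as one tensor (assumption).
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.fc = nn.Linear(32 * 9 * 9, 256)   # 84x84 input -> 9x9 maps
        self.policy = nn.Linear(256, n_actions)  # actor head
        self.value = nn.Linear(256, 1)            # critic head

    def forward(self, x):
        h = self.conv(x).flatten(1)
        h = F.relu(self.fc(h))
        return F.softmax(self.policy(h), dim=-1), self.value(h)

# One 84x84 RGB-D frame -> action distribution + value estimate.
net = ActorCritic()
probs, value = net(torch.zeros(1, 4, 84, 84))
print(probs.shape, value.shape)  # torch.Size([1, 4]) torch.Size([1, 1])
```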

