An Autonomous Vision System Based Sensor-Motor Coordination for Mobile Robot Navigation in an Outdoor Environment

Author(s): Xiaochun Wang, Xiali Wang, Don Mitchell Wilkes · 2013 · Vol 3 (1) · pp. 4
Author(s): Muhammad Safwan, Muhammad Yasir Zaheen, M. Anwar Ahmed, Muhammad Shujaat Kamal, Raj Kumar

The Bio-Mimetic Vision System (BMVS) for Autonomous Mobile Robot Navigation encompasses three major fields, namely robotics, navigation, and obstacle avoidance. Bio-mimetic vision is based on stereo vision. The Sum of Absolute Differences (SAD) is applied to the images from the two cameras to generate a disparity map, which is then used to navigate and avoid obstacles. Camera calibration and SAD are implemented in Matlab. An AT89C52 microcontroller, together with Matlab, is used to control the DC motors mounted on the robot frame. Experimental results show that the developed system effectively distinguishes objects at different distances and avoids them when the path is blocked.
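As an illustration of the disparity-map step the abstract describes, here is a minimal SAD block-matching sketch in Python/NumPy. It assumes rectified grayscale images; the function name `sad_disparity` and its parameters (`block`, `max_disp`) are hypothetical, not taken from the paper, which implements this stage in Matlab.

```python
import numpy as np

def sad_disparity(left, right, block=5, max_disp=16):
    """Block-matching stereo (illustrative sketch): for each pixel in the
    left image, find the horizontal shift d that minimizes the Sum of
    Absolute Differences (SAD) against the right image. Larger disparity
    means the object is closer to the cameras."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = np.abs(ref - cand).sum()  # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

This brute-force version makes the cost computation explicit but is far too slow for real-time use; practical systems vectorize the search or use an optimized implementation such as OpenCV's `StereoBM`.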


1999 · Vol 17 (7) · pp. 1009-1016 · Author(s): Takushi Sogo, Katsumi Kimoto, Hiroshi Ishiguro, Toru Ishida

2001 · Vol 19 (2) · pp. 121-137 · Author(s): Takushi Sogo, Hiroshi Ishiguro, Toru Ishida

Measurement · 2011 · Vol 44 (4) · pp. 620-641 · Author(s): N. Nirmal Singh, Avishek Chatterjee, Amitava Chatterjee, Anjan Rakshit

Robotica · 2008 · Vol 26 (1) · pp. 99-107 · Author(s): M. Mata, J. M. Armingol, J. Fernández, A. de la Escalera

SUMMARY: Deformable models have been studied in image analysis over the last decade and used for recognition of flexible or rigid templates under diverse viewing conditions. This article addresses the question of how to define a deformable model for a real-time color vision system for mobile robot navigation. Instead of receiving the detailed model definition from the user, the algorithm extracts and learns the information from each object automatically. How well a model represents the template present in the image is measured by an energy function. Its minimum corresponds to the model that best fits the image, and it is found by a genetic algorithm that handles the model deformation. At a later stage, if there is symbolic information inside the object, it is extracted and interpreted using a neural network. The resulting perception module has been integrated successfully in a complex navigation system. Various experimental results in real environments are presented in this article, showing the effectiveness and capacity of the system.
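The search the abstract describes, minimizing an energy function with a genetic algorithm, can be sketched with a toy real-coded GA. Everything here is an illustrative assumption rather than the authors' formulation: the name `genetic_minimize`, the operators (truncation selection, one-point crossover, random-reset mutation), and the quadratic example energy standing in for the model-to-image fit.

```python
import random

def genetic_minimize(energy, bounds, pop_size=40, gens=80, mut=0.15, seed=1):
    """Toy real-coded genetic algorithm: evolve parameter vectors toward
    the minimum of an energy function. `bounds` gives (lo, hi) per
    dimension; assumes at least two dimensions. Lower energy = fitter."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=energy)                      # rank by energy (ascending)
        survivors = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, dim)           # one-point crossover
            child = a[:cut] + b[cut:]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mut:            # mutation: random reset
                    child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = survivors + children
    return min(pop, key=energy)

# Hypothetical example: the "energy" is the squared distance of a
# 3-parameter model to a target parameter vector.
target = [2.0, -1.0, 0.5]
best = genetic_minimize(lambda p: sum((x - t) ** 2 for x, t in zip(p, target)),
                        bounds=[(-5.0, 5.0)] * 3)
```

Because the survivors always include the current best individual, the best energy never increases across generations, mirroring how a GA can steadily refine a deformable-model fit without gradient information.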

