Dynamic omnidirectional vision for mobile robots

1986 ◽  
Vol 3 (1) ◽  
pp. 5-17 ◽  
Author(s):  
Zuo L. Cao ◽  
Sung J. Oh ◽  
Ernest L. Hall

2006 ◽  
Vol 54 (9) ◽  
pp. 758-765 ◽  
Author(s):  
Hashem Tamimi ◽  
Henrik Andreasson ◽  
André Treptow ◽  
Tom Duckett ◽  
Andreas Zell

2015 ◽  
Vol 45 (8) ◽  
pp. 1633-1646 ◽  
Author(s):  
Luyang Li ◽  
Yun-Hui Liu ◽  
Kai Wang ◽  
Mu Fang

2017 ◽  
Vol 2017 ◽  
pp. 1-14 ◽  
Author(s):  
Rodrigo Munguía ◽  
Carlos López-Franco ◽  
Emmanuel Nuño ◽  
Adriana López-Franco

This work presents a method for implementing a vision-based simultaneous localization and mapping (SLAM) system using omnidirectional vision data, with application to autonomous mobile robots. In SLAM, a mobile robot operates in an unknown environment, using only its on-board sensors to simultaneously build a map of its surroundings and track its own position within that map. SLAM is perhaps one of the most fundamental problems that must be solved in robotics in order to build truly autonomous mobile robots. The visual sensor used in this work is an omnidirectional vision sensor, which provides a wide field of view, an advantage for a mobile robot performing autonomous navigation tasks. Since the sensor is monocular, a method to recover the depth of the observed features is required; to estimate the unknown depth, we propose a novel stochastic triangulation technique. The proposed system can be applied in indoor or cluttered environments to perform visual navigation when GPS signals are not available. Experiments with synthetic and real data are presented in order to validate the proposal.
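The geometric core of recovering feature depth from a single moving camera can be illustrated with plain two-view triangulation from bearing measurements. The sketch below is a minimal illustration under the assumption of known camera poses; it is not the authors' stochastic formulation (which additionally maintains and propagates the depth uncertainty), and the function name `triangulate_midpoint` is hypothetical.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Estimate a feature's 3-D position from two bearing-only
    observations taken at camera centers c1 and c2.

    d1, d2: unit direction vectors (bearings) toward the feature,
    expressed in the world frame. Returns the midpoint of the common
    perpendicular between the two rays, plus the gap between the rays
    as a rough consistency check.
    """
    # Solve for the ray parameters s, t minimizing
    # || (c1 + s*d1) - (c2 + t*d2) ||^2
    A = np.column_stack((d1, -d2))          # 3x2 system matrix
    b = c2 - c1
    (s, t), *_ = np.linalg.lstsq(A, b, rcond=None)
    p1 = c1 + s * d1                        # closest point on ray 1
    p2 = c2 + t * d2                        # closest point on ray 2
    return 0.5 * (p1 + p2), np.linalg.norm(p1 - p2)

# Example: two poses 1 m apart observing a point at (3, 1, 0)
c1 = np.array([0.0, 0.0, 0.0])
c2 = np.array([1.0, 0.0, 0.0])
target = np.array([3.0, 1.0, 0.0])
d1 = (target - c1) / np.linalg.norm(target - c1)
d2 = (target - c2) / np.linalg.norm(target - c2)
print(triangulate_midpoint(c1, d1, c2, d2))
```

The baseline between the two poses is what makes the depth observable; with a monocular sensor this baseline comes from the robot's own motion, which is why the depth estimate must be treated stochastically rather than as an exact intersection.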


2012 ◽  
Vol 2012 ◽  
pp. 1-16 ◽  
Author(s):  
Jones Y. Mori ◽  
Janier Arias-Garcia ◽  
Camilo Sánchez-Ferreira ◽  
Daniel M. Muñoz ◽  
Carlos H. Llanos ◽  
...  

This work presents the development of an integrated hardware/software sensor system for moving-object detection and distance calculation, based on a background subtraction algorithm. The sensor comprises a catadioptric system composed of a camera and a convex mirror that reflects the environment toward the camera from all directions, producing a panoramic view. The sensor is used as an omnidirectional vision system, enabling localization and navigation tasks for mobile robots. Several image processing operations, such as filtering, segmentation, and morphology, are included in the processing architecture. For distance measurement, an algorithm that determines the center of mass of a detected object was implemented. The overall architecture was mapped onto a commercial low-cost FPGA device using a hardware/software co-design approach, comprising a Nios II embedded microprocessor and dedicated image processing blocks implemented in hardware. The background subtraction algorithm was also used to calibrate the system, allowing for accurate results. Synthesis results show that the system achieves a throughput of 26.6 processed frames per second, and the performance analysis showed that the overall architecture achieves a speedup factor of 13.78 over a PC-based solution running on the real-time operating system xPC Target.
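The detection pipeline described here, background subtraction followed by a center-of-mass computation, can be sketched in a few lines. The following is a minimal software illustration of that idea, not the paper's FPGA implementation; the threshold value and function name are assumptions.

```python
import numpy as np

def object_center_of_mass(frame, background, threshold=30):
    """Detect a moving object by background subtraction and return the
    image coordinates (row, col) of its center of mass.

    frame, background: grayscale images as 2-D uint8 arrays of equal
    shape. threshold: minimum absolute intensity difference (assumed
    value) for a pixel to count as foreground.
    """
    # Foreground mask: pixels that deviate from the background model
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None  # no moving object detected in this frame
    # Center of mass = mean position of the foreground pixels
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()
```

In the catadioptric geometry described above, the radial offset of this centroid from the mirror's image center can then be mapped to a metric distance on the ground plane once the sensor has been calibrated.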


2017 ◽  
Vol 2017 ◽  
pp. 1-20 ◽  
Author(s):  
L. Payá ◽  
A. Gil ◽  
O. Reinoso

Nowadays, the field of mobile robotics is evolving quickly, and a variety of autonomous vehicles is available to solve different tasks. Advances in computer vision have led to a substantial increase in the use of cameras as the main sensors of mobile robots. They can be used as the only source of information or in combination with other sensors such as odometry or laser scanners. Among vision systems, omnidirectional sensors stand out because of the richness of the information they provide the robot with, and an increasing number of works about them have been published in recent years, leading to a wide variety of frameworks. In this review, some of the most important works are analysed. One of the key problems the scientific community is currently addressing is improving the autonomy of mobile robots. To this end, building robust models of the environment (mapping), localizing the robot within them, and navigating are three important abilities that any mobile robot must have. Taking this into account, the review concentrates on these problems: how researchers have addressed them by means of omnidirectional vision, the main frameworks they have proposed, and how these have evolved in recent years.
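One family of frameworks surveyed in this line of work is appearance-based localization, where each omnidirectional image is reduced to a holistic descriptor and the robot is localized by comparing descriptors against a stored map. The following is a minimal sketch of one classical descriptor of this kind, the Fourier signature of a panoramic image; the parameter values and function names are illustrative, not taken from the review.

```python
import numpy as np

def fourier_signature(panorama, k=8):
    """Rotation-invariant holistic descriptor of an unwrapped
    omnidirectional (panoramic) image.

    A change in the robot's heading circularly shifts every row of the
    panorama, which only changes the phase of the row-wise Fourier
    coefficients; keeping the magnitudes of the first k coefficients
    therefore yields a heading-invariant signature.
    """
    spectrum = np.fft.fft(panorama.astype(np.float64), axis=1)
    return np.abs(spectrum[:, :k])

def localize(query_image, map_signatures):
    """Return the index of the stored map image whose signature is
    closest (in Euclidean distance) to the query's signature."""
    q = fourier_signature(query_image)
    distances = [np.linalg.norm(q - s) for s in map_signatures]
    return int(np.argmin(distances))
```

Compared with local-feature approaches, a holistic descriptor like this trades metric accuracy for compactness and robustness to the robot's orientation, which is precisely the property omnidirectional sensors make available.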


Author(s):  
SEYED HAMIDREZA MOHADES KASAEI ◽  
S.M. KASAEI ◽  
S.A. KASAEI ◽  
M. TAHERI ◽  
N. AHMADI ◽  
...  
