An efficient similarity metric for omnidirectional vision sensors
2006 ◽ Vol 54 (9) ◽ pp. 750-757 ◽ Author(s): Emanuele Frontoni, Primo Zingaretti

2016 ◽ Vol 2016 ◽ pp. 1-21 ◽ Author(s): L. Payá, O. Reinoso, Y. Berenguer, D. Úbeda

Nowadays, the design of fully autonomous mobile robots is a key discipline. Building a robust model of the unknown environment is an important ability the robot must develop. Using this model, the robot must be able to estimate its current position and to navigate to target points. Omnidirectional vision sensors are commonly used to solve these tasks. With this source of information, the robot must extract relevant information from the scenes both to build the model and to estimate its position. The possible frameworks include the classical approach of extracting and describing local features and working with the global appearance of the scenes, which has emerged as a conceptually simple and robust solution. While feature-based techniques have been studied extensively in the literature, appearance-based ones still require a full comparative evaluation to reveal the performance of the existing methods and to tune their parameters correctly. This work carries out a comparative evaluation of four global-appearance techniques in map building tasks, using omnidirectional visual information as the only source of data from the environment.
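As a minimal sketch of the global-appearance idea this abstract describes, the example below computes a Fourier-signature descriptor of a panoramic image (one of the descriptors commonly used in this literature) and compares two scenes with a Euclidean distance. The image sizes, parameter `k`, and function names are illustrative assumptions, not the paper's exact method; the key property shown is that a camera rotation, which is a column shift of the panorama, leaves the descriptor almost unchanged.

```python
import numpy as np

def fourier_signature(panorama, k=8):
    """Global-appearance descriptor: the magnitude of the first k
    Fourier coefficients of each row of a panoramic image.
    A rotation of the camera shifts the panorama's columns, which
    changes only the phase of the FFT, so the magnitudes are
    (numerically) rotation-invariant."""
    rows_fft = np.fft.fft(panorama, axis=1)
    return np.abs(rows_fft[:, :k])

def descriptor_distance(d1, d2):
    """Euclidean distance between two global-appearance descriptors."""
    return np.linalg.norm(d1 - d2)

# Toy panoramic 'scene' and a rotated copy of it (column shift).
rng = np.random.default_rng(0)
scene = rng.random((16, 64))
rotated = np.roll(scene, shift=10, axis=1)

d_scene = fourier_signature(scene)
d_rotated = fourier_signature(rotated)
print(descriptor_distance(d_scene, d_rotated))  # close to 0: rotation-invariant
```

Keeping only the first `k` coefficients per row also compresses the scene into a small descriptor, which is what makes exhaustive comparison against a map of stored descriptors cheap.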


2004 ◽ Vol 35 (2) ◽ pp. 79-90 ◽ Author(s): Takushi Sogo, Hiroshi Ishiguro, Mohan M. Trivedi

2017 ◽ Vol 2017 ◽ pp. 1-20 ◽ Author(s): L. Payá, A. Gil, O. Reinoso

Nowadays, the field of mobile robotics is experiencing a quick evolution, and a variety of autonomous vehicles is available to solve different tasks. Advances in computer vision have led to a substantial increase in the use of cameras as the main sensors in mobile robots. They can be used as the only source of information or in combination with other sensors such as odometry or laser. Among vision systems, omnidirectional sensors stand out because of the richness of the information they provide the robot with, and an increasing number of works about them have been published over the last few years, leading to a wide variety of frameworks. In this review, some of the most important works are analysed. One of the key problems the scientific community is currently addressing is improving the autonomy of mobile robots. To this end, building robust models of the environment, localization, and navigation are three important abilities that any mobile robot must have. Taking this into account, the review concentrates on these problems: how researchers have addressed them by means of omnidirectional vision, the main frameworks they have proposed, and how these have evolved in recent years.
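The map-building-and-localization loop the review surveys can be sketched very compactly: store one global-appearance descriptor per map image, then localize a new view by nearest-neighbour search over those descriptors. The descriptor below (a coarse, normalised down-sampling of the whole image) is an illustrative placeholder, not any specific method from the review, and all names are assumptions.

```python
import numpy as np

def global_descriptor(image, size=(4, 8)):
    """Toy global-appearance descriptor: block-average the image down
    to a coarse size-(4, 8) grid and L2-normalise the result."""
    bh, bw = image.shape[0] // size[0], image.shape[1] // size[1]
    coarse = image[:size[0] * bh, :size[1] * bw]
    coarse = coarse.reshape(size[0], bh, size[1], bw).mean(axis=(1, 3))
    v = coarse.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def localize(query, map_descriptors):
    """Nearest-neighbour localization: return the index of the map
    image whose descriptor is closest to the query's descriptor."""
    q = global_descriptor(query)
    dists = [np.linalg.norm(q - m) for m in map_descriptors]
    return int(np.argmin(dists))

# Build a toy map from three 'omnidirectional' views, then localize
# a slightly noisy revisit of the second view.
rng = np.random.default_rng(1)
views = [rng.random((16, 64)) for _ in range(3)]
map_descriptors = [global_descriptor(v) for v in views]
noisy_revisit = views[1] + 0.05 * rng.random((16, 64))
print(localize(noisy_revisit, map_descriptors))
```

In the frameworks the review compares, the down-sampling step is replaced by descriptors such as Fourier signatures, gist, or histograms of gradients, but the surrounding map/query structure stays the same.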


2003 ◽ Vol 14 (2) ◽ pp. 129-138 ◽ Author(s): T. Nakamura, M. Oohara, T. Ogasawara, H. Ishiguro
