Computer Vision based Autonomous Navigation in Controlled Environment

Author(s):  
Sarmad Shafique ◽  
Samia Abid ◽  
Faisal Riaz ◽  
Zainab Ejaz

Author(s):  
Prabha Ramasamy ◽  
Mohan Kabadi

Navigational services are among the most essential dependencies of any transport system, and at present various revolutionary approaches have contributed to their improvement. This paper reviews global positioning system (GPS) and computer vision based navigation systems and finds a large gap between the actual demand for navigation and what currently exists. The proposed study therefore discusses a novel framework for an autonomous navigation system that uses GPS as well as computer vision, taking a futuristic road traffic system as its case study. To implement this concept, an analytical model is built in which geo-referenced data from GPS are integrated with the signals captured from the visual sensors. The simulated outcomes show that the proposed approach offers enhanced accuracy as well as faster processing in contrast to existing approaches.
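The abstract does not specify the form of the analytical fusion model, so the following is a minimal sketch of one common way to integrate a geo-referenced GPS fix with a vision-derived position estimate: inverse-variance weighting, the static special case of a Kalman update. The function name, coordinate frame, and noise figures are illustrative assumptions, not details from the paper.

```python
import numpy as np

def fuse_gps_vision(gps_xy, gps_var, vo_xy, vo_var):
    """Combine a GPS fix with a vision-based position estimate by
    inverse-variance weighting; the lower-noise source dominates."""
    w_gps, w_vo = 1.0 / gps_var, 1.0 / vo_var
    fused_xy = (w_gps * gps_xy + w_vo * vo_xy) / (w_gps + w_vo)
    fused_var = 1.0 / (w_gps + w_vo)  # always below either input variance
    return fused_xy, fused_var

# Hypothetical numbers: a GPS fix with ~3 m sigma refined by a
# visual estimate with ~0.5 m sigma (metres, local ENU frame).
pos, var = fuse_gps_vision(np.array([12.0, 48.5]), 3.0**2,
                           np.array([10.8, 47.9]), 0.5**2)
print(pos, var)  # fused position sits close to the visual estimate
```

Because the weights are inverse variances, the fused estimate is pulled toward whichever sensor is currently more trustworthy, which matches the paper's motivation for pairing GPS with visual sensing.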


Author(s):  
Gabriel Araujo ◽  
Vitor Akihiro Hisano Higuti ◽  
Marcelo Becker

2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Muhammad Ilyas ◽  
Shi Yuyao ◽  
Rajesh Elara Mohan ◽  
Manojkumar Devarassu ◽  
Manivannan Kalimuthu

The mechanical, electrical, and autonomy aspects of designing a novel, modular, and reconfigurable cleaning robot, dubbed sTetro (stair Tetro), are presented. The developed robotic platform uses a vertical conveyor mechanism to reconfigure itself and is capable of navigating over flat surfaces as well as staircases, thus significantly extending automated cleaning capabilities compared to conventional home cleaning robots. The mechanical design and system architecture are introduced first, followed by a detailed description of the system modelling and controller design efforts in sTetro. An autonomy algorithm is also proposed for self-reconfiguration, locomotion, and autonomous navigation of sTetro in controlled environments, for example homes or offices with a flat floor and a straight staircase. A staircase recognition algorithm is presented to distinguish the stairs from the surrounding environment. A technique for detecting misalignment of the robot with the front staircase riser is also given, and feedback from the IMU sensor is used to apply corrective measures. The experiments performed with the sTetro robot demonstrate the efficacy and validity of the developed system models, control, and autonomy approaches.
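The abstract mentions IMU feedback for correcting misalignment with the front staircase riser but gives no controller details. A plausible minimal realisation is a saturated proportional controller on the heading error, sketched below; all names, gains, and limits are assumptions for illustration, not the authors' published design.

```python
def yaw_correction(riser_yaw_deg, imu_yaw_deg, k_p=0.8, max_rate_dps=30.0):
    """Steering rate (deg/s) that rotates the robot until its IMU heading
    matches the heading of the detected staircase riser."""
    error = riser_yaw_deg - imu_yaw_deg
    error = (error + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    rate = k_p * error                       # proportional feedback
    return max(-max_rate_dps, min(max_rate_dps, rate))

# Robot heading -12 deg, riser measured at +5 deg: command +13.6 deg/s.
print(yaw_correction(riser_yaw_deg=5.0, imu_yaw_deg=-12.0))
```

Wrapping the error keeps the robot turning the short way around, and the saturation bound prevents aggressive commands when the initial misalignment is large.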


2019 ◽  
Vol 4 (2) ◽  
pp. 1351-1356 ◽  
Author(s):  
Adrian Manzanilla ◽  
Sergio Reyes ◽  
Miguel Garcia ◽  
Diego Mercado ◽  
Rogelio Lozano

Author(s):  
Mubariz Zaffar ◽  
Sourav Garg ◽  
Michael Milford ◽  
Julian Kooij ◽  
David Flynn ◽  
...  

Visual place recognition (VPR) is the process of recognising a previously visited place using visual information, often under varying appearance conditions and viewpoint changes and with computational constraints. VPR is related to the concepts of localisation, loop closure, and image retrieval, and is a critical component of many autonomous navigation systems ranging from autonomous vehicles to drones and computer vision systems. While the concept of place recognition has been around for many years, VPR research has grown rapidly as a field over the past decade due to improving camera hardware and its potential for deep learning-based techniques, and it has become a widely studied topic in both the computer vision and robotics communities. This growth, however, has led to fragmentation and a lack of standardisation in the field, especially concerning performance evaluation. Moreover, the notion of viewpoint and illumination invariance of VPR techniques has largely been assessed qualitatively, and hence ambiguously, in the past. In this paper, we address these gaps through a new comprehensive open-source framework for assessing the performance of VPR techniques, dubbed “VPR-Bench”. VPR-Bench (open-sourced at https://github.com/MubarizZaffar/VPR-Bench) introduces two much-needed capabilities for VPR researchers: firstly, it contains a benchmark of 12 fully integrated datasets and 10 VPR techniques, and secondly, it integrates a comprehensive variation-quantified dataset for quantifying viewpoint and illumination invariance. We apply and analyse popular evaluation metrics for VPR from both the computer vision and robotics communities, and discuss how these different metrics complement and/or replace each other, depending upon the underlying applications and system requirements. Our analysis reveals that no universal state-of-the-art (SOTA) VPR technique exists, since: (a) SOTA performance is achieved by 8 of the 10 techniques on at least one dataset, and (b) the SOTA technique in one community does not necessarily yield SOTA performance in the other, given the differences in datasets and metrics. Furthermore, we identify key open challenges: (c) all 10 techniques suffer greatly in perceptually aliased and less-structured environments, (d) all techniques suffer from viewpoint variance, where lateral change has less effect than 3D change, and (e) directional illumination change has more adverse effects on matching confidence than uniform illumination change. We also present detailed meta-analyses regarding the roles of varying ground truths, platforms, application requirements, and technique parameters. Finally, VPR-Bench provides a unified implementation to deploy these VPR techniques, metrics, and datasets, and is extensible through templates.
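For readers unfamiliar with how the precision-recall metrics discussed above are computed in single-best-match VPR, here is a self-contained sketch: each query accepts its top-scoring reference when the match confidence clears a threshold, and sweeping that threshold traces the PR curve. This illustrates the metric only; it is not the VPR-Bench API, and the similarity scores and ground truth below are toy values.

```python
import numpy as np

def vpr_precision_recall(similarity, ground_truth, thresholds):
    """Precision/recall over confidence thresholds for single-best-match
    VPR: a query's top-scoring reference is accepted when its score
    exceeds the threshold, and is correct when ground_truth says so."""
    best_ref = similarity.argmax(axis=1)     # top reference per query
    best_score = similarity.max(axis=1)      # its match confidence
    correct = ground_truth[np.arange(len(best_ref)), best_ref]
    curve = []
    for t in thresholds:
        accepted = best_score >= t
        tp = np.sum(accepted & correct)
        precision = tp / max(accepted.sum(), 1)
        recall = tp / len(best_ref)          # every query has a true match
        curve.append((t, precision, recall))
    return curve

# Toy example: 3 queries x 4 references; query 2's best match is wrong.
sim = np.array([[0.9, 0.2, 0.1, 0.3],
                [0.4, 0.8, 0.2, 0.1],
                [0.3, 0.2, 0.6, 0.5]])
gt = np.array([[True, False, False, False],
               [False, True, False, False],
               [False, False, False, True]])
for t, p, r in vpr_precision_recall(sim, gt, [0.5, 0.7, 0.85]):
    print(f"t={t:.2f} precision={p:.2f} recall={r:.2f}")
```

Raising the threshold trades recall for precision, which is why, as the abstract notes, the preferred operating point (and hence the preferred metric) differs between robotics loop-closure and retrieval-style applications.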


Author(s):  
G. Garibotto ◽  
P. Bassino ◽  
M. Ilic ◽  
S. Masciangelo
