Visual Homing
Recently Published Documents


TOTAL DOCUMENTS: 72 (five years: 2)
H-INDEX: 15 (five years: 0)

Robotica, 2019, Vol. 38 (5), pp. 787-803
Author(s): D. M. Lyons, B. Barriage, L. Del Signore

Summary: Visual homing is a local navigation technique used to direct a robot to a previously seen location by comparing the image of the original location with the current visual image. Prior work has shown that exploiting depth cues such as image scale or stereo depth in homing leads to improved homing performance. While it is not unusual to use a panoramic field of view (FOV) camera in visual homing, it is unusual to have a panoramic FOV stereo-camera. So, while the availability of stereo-depth information may improve performance, the concomitantly restricted FOV may be a detriment to performance, unless specialized stereo hardware is used. In this paper, we present an investigation of the effect on homing performance of varying the FOV width in a stereo-vision-based visual homing algorithm using a common stereo-camera. We have collected six stereo-vision homing databases: three indoor and three outdoor. Based on over 350,000 homing trials, we show that while a larger FOV yields performance improvements for larger homing offset angles, the relative improvement falls off with increasing FOVs, and in fact decreases for the widest FOV tested. We conduct additional experiments to identify the cause of this fall-off in performance, which we term the 'blinder' effect, and which we predict should affect other correspondence-based visual homing algorithms.
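The correspondence-based, depth-aided homing this abstract describes can be sketched compactly. The following is a minimal illustration, not the authors' algorithm: the function name, the planar approximation, and the assumption that the two views are compass-aligned (zero relative yaw) are all ours. Features are matched between the home and current views, stereo depth places each landmark relative to both camera poses, and each correspondence votes for the displacement from the current pose to home; the FOV parameter echoes the quantity the paper varies.

# Minimal sketch of correspondence-based visual homing with stereo depth.
# Not the paper's algorithm; names and simplifications are illustrative.
import numpy as np
import cv2

def home_vector(img_home, img_curr, depth_home, depth_curr, fov_deg=60.0):
    # Match features between the stored home view and the current view.
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_home, None)
    k2, d2 = sift.detectAndCompute(img_curr, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)

    w = img_home.shape[1]
    f = w / (2.0 * np.tan(np.radians(fov_deg) / 2.0))  # focal length, pixels
    vec = np.zeros(2)
    for m in matches:
        u1, v1 = k1[m.queryIdx].pt
        u2, v2 = k2[m.trainIdx].pt
        # Bearing of the landmark in each view (radians off the optical axis).
        b1 = np.arctan2(u1 - w / 2.0, f)
        b2 = np.arctan2(u2 - w / 2.0, f)
        # Stereo depth gives the landmark's range from each pose.
        r1 = depth_home[int(v1), int(u1)]
        r2 = depth_curr[int(v2), int(u2)]
        # Planar landmark position relative to each pose; with aligned yaw,
        # (L - C) - (L - H) = H - C, the displacement from current to home.
        p1 = r1 * np.array([np.cos(b1), np.sin(b1)])
        p2 = r2 * np.array([np.cos(b2), np.sin(b2)])
        vec += p2 - p1
    n = np.linalg.norm(vec)
    return vec / n if n > 0 else vec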


2018, Vol. 66 (1), pp. 165-170
Author(s): I. J. S. Moreira, M. F. Santos, M. S. Madureira

Sensors, 2018, Vol. 18 (10), pp. 3180
Author(s): Xun Ji, Qidan Zhu, Junda Ma, Peng Lu, Tianhao Yan

Visual homing is an attractive navigation technique for autonomous mobile robots: it uses only vision sensors to guide the robot to a specified target location. Landmarks, usually represented by scale-invariant features, are the only input to visual homing approaches. However, the landmark distribution has a great impact on the robot's homing performance, since irregularly distributed landmarks significantly reduce navigation precision. In this paper, we propose three strategies to address this problem. We use scale-invariant feature transform (SIFT) features as natural landmarks, and the proposed strategies optimize the landmark distribution without over-eliminating landmarks or increasing the computational cost. Experiments on both panoramic image databases and a real mobile robot verify the effectiveness and feasibility of the proposed strategies.
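One generic way to even out landmark distribution, in the spirit of (but not reproducing) the paper's three strategies, is sketched below: divide the panorama into angular bins and cap the number of SIFT keypoints kept per bin, so dense clusters cannot dominate the home-vector computation while sparse regions keep all their landmarks. The function and parameter names are hypothetical.

import cv2

def thin_landmarks(keypoints, image_width, n_bins=36, max_per_bin=3):
    # Assign each keypoint to an angular bin; in a panoramic image the
    # x coordinate maps linearly to bearing.
    bins = [[] for _ in range(n_bins)]
    for kp in keypoints:
        b = int(kp.pt[0] / image_width * n_bins) % n_bins
        bins[b].append(kp)
    kept = []
    for contents in bins:
        # Keep only the strongest responses per bin: a per-bin cap trims
        # clusters without over-eliminating landmarks in sparse regions.
        contents.sort(key=lambda kp: kp.response, reverse=True)
        kept.extend(contents[:max_per_bin])
    return kept

# Usage:
#   kps, desc = cv2.SIFT_create().detectAndCompute(panorama, None)
#   kps = thin_landmarks(kps, panorama.shape[1])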


2018, Vol. 8 (4), pp. 20180010
Author(s): Thomas Stone, Michael Mangan, Antoine Wystrach, Barbara Webb

Visual memory is crucial to navigation in many animals, including insects. Here, we focus on the problem of visual homing, that is, using comparison of the view at a current location with a view stored at the home location to control movement towards home by a novel shortcut. Insects show several visual specializations that appear advantageous for this task, including an almost panoramic field of view and ultraviolet light sensitivity, which enhances the salience of the skyline. We discuss several proposals for subsequent processing of the image to obtain the required motion information, focusing on how each might deal with the problem of yaw rotation of the current view relative to the home view. Possible solutions include tagging of views with information from the celestial compass system, using multiple views pointing towards home, or rotation-invariant encoding of the view. We illustrate briefly how a well-known shape description method from computer vision, Zernike moments, could provide a compact and rotation-invariant representation of sky shapes to enhance visual homing. We discuss the biological plausibility of this solution, and also a fourth strategy, based on observed behaviour of insects, that involves transfer of information from visual memory matching to the compass system.
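The property that makes Zernike moments attractive here is that rotating an image about its centre multiplies each moment Z_nm by a unit-magnitude phase factor, leaving |Z_nm| unchanged. A minimal NumPy sketch of that descriptor follows; this is our illustration, not the authors' implementation, and a sky panorama would first have to be remapped onto a disc.

import numpy as np
from math import factorial

def zernike_magnitudes(img, degree=8):
    # Map pixel coordinates of a square grayscale image onto the unit disc.
    n_pix = img.shape[0]
    ys, xs = np.mgrid[0:n_pix, 0:n_pix]
    x = 2.0 * (xs - (n_pix - 1) / 2.0) / n_pix
    y = 2.0 * (ys - (n_pix - 1) / 2.0) / n_pix
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    f = img.astype(float) * (rho <= 1.0)  # zero outside the disc

    feats = []
    for n in range(degree + 1):
        for m in range(n + 1):
            if (n - m) % 2:
                continue  # Z_nm is defined only for n - m even
            # Radial polynomial R_n^m(rho).
            R = np.zeros_like(rho)
            for k in range((n - m) // 2 + 1):
                c = ((-1) ** k * factorial(n - k)
                     / (factorial(k)
                        * factorial((n + m) // 2 - k)
                        * factorial((n - m) // 2 - k)))
                R += c * rho ** (n - 2 * k)
            Z = (n + 1) / np.pi * np.sum(f * R * np.exp(-1j * m * theta))
            # Rotating the image only changes the phase of Z_nm, so the
            # magnitude is a rotation-invariant descriptor.
            feats.append(abs(Z))
    return np.array(feats)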


Author(s): Jiayi Ma, Ji Zhao, Junjun Jiang, Huabing Zhou, Yu Zhou, ...

Author(s): Hao Cai, Sipan Ye, Andrew Vardy, Minglun Gong

2018, Vol. 37 (2-3), pp. 225-248
Author(s): Annett Stelzer, Mallikarjuna Vayugundla, Elmar Mair, Michael Suppa, Wolfram Burgard

IEEE Access, 2018, Vol. 6, pp. 33666-33681
Author(s): Changmin Lee, Daeeun Kim
