Efficient Search in a Panoramic Image Database for Long-term Visual Localization

Author(s): Semih Orhan, Yalin Bastanlar
Author(s): Mathias Bürki, Marcin Dymczyk, Igor Gilitschenski, Cesar Cadena, Roland Siegwart, ...

2020, Vol. 5 (2), pp. 1492-1499
Author(s): Lee Clement, Mona Gridseth, Justin Tomasi, Jonathan Kelly

2016
Author(s): Ali Ghazizadeh, Whitney Griggs, Okihide Hikosaka

For most animals, survival depends on the rapid detection of rewarding objects, but searching for an object surrounded by many others is known to be difficult and time consuming. However, there is neuronal evidence for robust and rapid differentiation of objects based on their reward history in primates (Hikosaka et al., 2014). We hypothesized that such robust coding should support efficient search for high-value objects, similar to a pop-out mechanism. To test this hypothesis, we let subjects (n = 4, macaque monkeys) view a large number of complex objects with consistently biased rewards over variable training durations (1, 5, or >30 days). Following training, subjects searched for a high-value object (Good) among a variable number of low-value objects (Bad). Consistent with our hypothesis, we found that Good objects were accurately and quickly targeted, often by a single, direct saccade with a very short latency (<200 ms). The dependence of search time on display size decreased significantly with longer reward training, giving rise to more efficient search (from 40 ms/item to 16 ms/item). This object-finding skill showed a large capacity for value-biased objects and was maintained in long-term memory with no interference from reward learning with other objects. Such an object-finding skill, particularly its large capacity and long-term retention, would be crucial for maximizing reward and biological fitness throughout life, where many objects are experienced continuously and/or intermittently.
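The efficiency figures quoted above (40 ms/item dropping to 16 ms/item) are slopes of reaction time against display size. Below is a minimal sketch of how such a slope is estimated with a simple linear fit; the reaction-time values are made up to reproduce the quoted slopes and are not data from the study.

import numpy as np

# Hypothetical mean reaction times (ms) per display size; the numbers are
# illustrative only, chosen to match the quoted slopes, not study data.
display_sizes = np.array([2, 4, 6, 8])          # objects on screen
rt_early = np.array([420, 500, 580, 660])       # early in reward training
rt_late  = np.array([310, 342, 374, 406])       # after long reward training

# Search efficiency = slope of reaction time vs. display size (ms/item).
slope_early, _ = np.polyfit(display_sizes, rt_early, 1)   # -> 40 ms/item
slope_late, _  = np.polyfit(display_sizes, rt_late, 1)    # -> 16 ms/item
print(f"early: {slope_early:.0f} ms/item, late: {slope_late:.0f} ms/item")

A shallower slope means adding distractors costs less time per item, which is the signature of more efficient, pop-out-like search.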


2019, Vol. 12 (4), pp. 149-155

Author(s): Tomoya Kaneko, Junji Takahashi, Seiya Ito, Yoshito Tobe

Author(s): J. Meyer, D. Rettenmund, S. Nebiker

Abstract. In this paper, we present our approach to robust long-term visual localization in large-scale urban environments exploiting street-level imagery. Our approach consists of 2D image-based localization using image retrieval (NetVLAD) to select reference images, followed by 3D structure-based localization with a robust image matcher (DenseSfM) for accurate pose estimation. This visual localization approach is evaluated on the 'Sun' subset of the RobotCar Seasons dataset, which is part of the Visual Localization benchmark. As the results on the RobotCar benchmark dataset are nearly on par with the top-ranked approaches, we focused our investigations on reproducibility and performance with our own data. For this purpose, we created a dataset of street-level imagery. To obtain independent reference and query images, we used a road-based and a tram-based mapping campaign separated by four years. Approximately 90% of the images in both datasets were successfully oriented, a good indicator of the robustness of our approach. About 50% of the images could be localized with a position accuracy better than 0.25 m and a rotation accuracy better than 2°.
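A minimal sketch of this two-stage retrieve-then-estimate pipeline follows. The global-descriptor matching stands in for NetVLAD retrieval and the 2D-3D correspondences for DenseSfM output; all function names, shapes, and parameter values are illustrative assumptions, not the authors' implementation.

import numpy as np
import cv2

def retrieve_references(query_desc, ref_descs, top_k=5):
    """Rank reference images by cosine similarity of global descriptors
    (stand-in for NetVLAD retrieval)."""
    q = query_desc / np.linalg.norm(query_desc)
    refs = ref_descs / np.linalg.norm(ref_descs, axis=1, keepdims=True)
    return np.argsort(-(refs @ q))[:top_k]

def estimate_pose(points_3d, points_2d, K):
    """Estimate a 6-DoF camera pose from 2D-3D matches with PnP + RANSAC
    (stand-in for the pose step run on DenseSfM correspondences)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64), points_2d.astype(np.float64),
        K, None, iterationsCount=1000, reprojectionError=4.0)
    return (rvec, tvec, inliers) if ok else None

# Toy retrieval run with synthetic 4096-D (NetVLAD-like) descriptors.
rng = np.random.default_rng(0)
refs = rng.standard_normal((100, 4096))
query = refs[42] + 0.1 * rng.standard_normal(4096)   # query near image 42
print("top reference candidates:", retrieve_references(query, refs))

The retrieval stage narrows the map to a handful of candidate reference images, so the expensive dense matching and PnP step runs against only a small set; this coarse-to-fine split is what makes such pipelines tractable at city scale.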

