Coarse-to-Fine Visual Place Recognition

2021 ◽ pp. 28-39 ◽ Author(s): Junkun Qi, Rui Wang, Chuan Wang, Xiaochun Cao
Sensors ◽ 2020 ◽ Vol 20 (15) ◽ pp. 4177 ◽ Author(s): Yicheng Fang, Kailun Yang, Ruiqi Cheng, Lei Sun, Kaiwei Wang

Visual Place Recognition (VPR) addresses visual instance retrieval across discrepant scenes and provides precise localization. During a traverse, the captured images (query images) are traced back to already existing positions in the database images, enabling vehicles or pedestrian navigation devices to distinguish their ambient environments. Unfortunately, diverse appearance variations pose huge challenges for VPR, such as illumination changes, viewpoint variations, seasonal cycles, and disparate traverses (forward and backward). In addition, the majority of current VPR algorithms are designed for forward-facing images, which provide only a narrow Field of View (FoV) and suffer from severe viewpoint effects. In this paper, we propose a panoramic localizer based on coarse-to-fine descriptors, leveraging panoramas for omnidirectional perception and a full FoV of up to 360°. We adopt NetVLAD descriptors for coarse matching in a panorama-to-panorama manner, for their robustness in distinguishing different appearances, and Geodesc keypoint descriptors in the fine stage, for their capacity to capture detailed information, together forming powerful coarse-to-fine descriptors. A comprehensive set of experiments is conducted on several datasets, including both public benchmarks and our real-world campus scenes. Our system achieves high recall and strong generalization capacity across various appearances. The proposed panoramic localizer can be integrated into mobile navigation devices and is applicable to a variety of localization scenarios.
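As a rough illustration of this coarse-to-fine matching flow, the sketch below implements a generic two-stage retrieval loop in Python. It is not the authors' implementation: the NetVLAD and Geodesc extractors are replaced by hypothetical placeholder functions (extract_global_descriptor, extract_local_descriptors) that only fix descriptor shapes, and the fine stage uses a simple similarity-margin match count for re-ranking.

```python
# Minimal sketch of a coarse-to-fine place-recognition loop, assuming
# placeholder feature extractors. In the paper, coarse descriptors come from
# NetVLAD applied panorama-to-panorama and fine descriptors from Geodesc
# keypoints; here both are stubbed with random features to show control flow.
import numpy as np

def extract_global_descriptor(image: np.ndarray) -> np.ndarray:
    """Placeholder for a NetVLAD-style global descriptor (L2-normalized)."""
    d = np.random.rand(4096).astype(np.float32)   # hypothetical 4096-D vector
    return d / np.linalg.norm(d)

def extract_local_descriptors(image: np.ndarray) -> np.ndarray:
    """Placeholder for Geodesc-style keypoint descriptors, shape (N, 128)."""
    d = np.random.rand(500, 128).astype(np.float32)
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def coarse_candidates(query_desc: np.ndarray, db_descs: np.ndarray, top_k: int = 10):
    """Coarse stage: rank database panoramas by global-descriptor similarity."""
    sims = db_descs @ query_desc                 # cosine similarity (unit vectors)
    return np.argsort(-sims)[:top_k]

def fine_score(query_locals: np.ndarray, db_locals: np.ndarray, ratio: float = 0.8) -> int:
    """Fine stage: count keypoints whose best match clearly beats the second best
    (a similarity-space analogue of the distance ratio test)."""
    sims = query_locals @ db_locals.T
    order = np.argsort(-sims, axis=1)
    good = 0
    for i, idx in enumerate(order):
        s1, s2 = sims[i, idx[0]], sims[i, idx[1]]
        if s2 < ratio * s1:
            good += 1
    return good

def localize(query_img: np.ndarray, db_imgs: list) -> int:
    """Return the index of the best-matching database place."""
    q_g = extract_global_descriptor(query_img)
    db_g = np.stack([extract_global_descriptor(im) for im in db_imgs])
    cands = coarse_candidates(q_g, db_g)
    q_l = extract_local_descriptors(query_img)
    scores = [fine_score(q_l, extract_local_descriptors(db_imgs[i])) for i in cands]
    return int(cands[int(np.argmax(scores))])
```

The design point the sketch captures is that the cheap global-descriptor stage prunes the database to a handful of candidates, so the expensive keypoint re-ranking only runs on that short list.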


2021 ◽ Vol 11 (20) ◽ pp. 9540 ◽ Author(s): Baifan Chen, Xiaoting Song, Hongyu Shen, Tao Lu

A major challenge in place recognition is to be robust against viewpoint changes and appearance changes caused by both self- and environmental variations. Humans achieve this by recognizing objects and their relationships in the scene under different conditions. Inspired by this, we propose a hierarchical visual place recognition pipeline based on semantic aggregation and scene understanding of the images. The pipeline consists of coarse matching and fine matching. Semantic aggregation takes place as residual aggregation of visual and semantic information in coarse matching, and as semantic association of semantic edges in fine matching. Through these two processes, we realize a robust coarse-to-fine visual place recognition pipeline across viewpoint and condition variations. Experimental results on benchmark datasets show that our method outperforms several state-of-the-art methods, improving robustness against severe viewpoint and appearance changes while maintaining good matching-time performance. Moreover, we demonstrate that it is possible for a computer to realize place recognition based on scene understanding.
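The sketch below gives one minimal, assumption-laden reading of such a hierarchical pipeline in Python: the coarse stage simply concatenates an appearance feature with a semantic label histogram (a crude stand-in for the paper's learned residual semantic-aggregation), and the fine stage re-ranks candidates by the overlap of binary semantic-edge maps (a stand-in for the semantic association step). All inputs (feat, labels, edges) are hypothetical.

```python
# Minimal sketch of a semantics-aware coarse-to-fine matcher, under the stated
# assumptions. This is illustrative structure, not the authors' networks.
import numpy as np

def coarse_descriptor(appearance_feat: np.ndarray, semantic_labels: np.ndarray,
                      num_classes: int = 20) -> np.ndarray:
    """Fuse an appearance feature with a normalized semantic label histogram
    (simplified stand-in for residual semantic-aggregation)."""
    hist = np.bincount(semantic_labels.ravel(), minlength=num_classes).astype(np.float32)
    hist /= hist.sum() + 1e-8
    desc = np.concatenate([appearance_feat.astype(np.float32), hist])
    return desc / (np.linalg.norm(desc) + 1e-8)

def edge_similarity(edges_a: np.ndarray, edges_b: np.ndarray) -> float:
    """Fine stage: IoU of binary semantic-edge maps as a simple association score."""
    inter = np.logical_and(edges_a, edges_b).sum()
    union = np.logical_or(edges_a, edges_b).sum()
    return float(inter) / float(union) if union else 0.0

def recognize(query: dict, database: list, top_k: int = 5) -> int:
    """query/database entries: dicts with 'feat', 'labels', 'edges' arrays.
    Coarse ranking by fused descriptors, then fine re-ranking by edge overlap."""
    q = coarse_descriptor(query["feat"], query["labels"])
    db = np.stack([coarse_descriptor(d["feat"], d["labels"]) for d in database])
    cands = np.argsort(-(db @ q))[:top_k]
    scores = [edge_similarity(query["edges"], database[i]["edges"]) for i in cands]
    return int(cands[int(np.argmax(scores))])
```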


2015 ◽ Vol 35 (4) ◽ pp. 334-356 ◽ Author(s): Elena S. Stumm, Christopher Mei, Simon Lacroix

2021 ◽ Vol 6 (3) ◽ pp. 5976-5983 ◽ Author(s): Maria Waheed, Michael Milford, Klaus McDonald-Maier, Shoaib Ehsan

Author(s): Timothy L. Molloy, Tobias Fischer, Michael J. Milford, Girish Nair
