A Novel Image Retrieval Method for Image Based Localization in Large-Scale Environment

Author(s):  
Xiliang Yin ◽  
Lin Ma ◽  
Xuezhi Tan
IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 160082-160090 ◽  
Author(s):  
Yanyan Xu ◽  
Xiao Zhao ◽  
Jiaying Gong

2013 ◽  
Vol 321-324 ◽  
pp. 969-973
Author(s):  
Tian Qiang Peng ◽  
Xiao Feng Sun

To reduce the high memory cost of fast retrieval methods, we present a fast retrieval method based on a bucket map chain built on Exact Euclidean Locality Sensitive Hashing (E2LSH). The bucket map chain holds all the points projected from feature space in multiple buckets, each of which stores nearby points. When processing a query, the method searches the chain by the bucket index of the query point, locates the related buckets, reads the points they contain, and measures the similarity of these points to the query point. Experiments show that this method efficiently decreases the memory cost of retrieval, which is important for making large-scale information retrieval, especially image retrieval, feasible.
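The bucket lookup described above can be sketched as follows. This is a minimal, hypothetical simplification of E2LSH bucketing with a single hash function and a plain dictionary standing in for the paper's bucket map chain; the class name, parameters, and neighbour-bucket radius are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class E2LSHBucketMap:
    """Sketch of E2LSH-style bucketing: points are hashed by a random
    projection, stored in buckets, and queries only inspect nearby buckets."""

    def __init__(self, dim, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.standard_normal(dim)   # random projection direction
        self.b = rng.uniform(0, w)          # random offset
        self.w = w                          # bucket width
        self.buckets = {}                   # bucket index -> list of (id, point)

    def _bucket(self, v):
        # E2LSH hash: h(v) = floor((a·v + b) / w)
        return int(np.floor((self.a @ v + self.b) / self.w))

    def insert(self, point_id, v):
        self.buckets.setdefault(self._bucket(v), []).append((point_id, v))

    def query(self, q, radius=1):
        # Read only the query's bucket and its neighbours, then rank
        # the candidates by exact Euclidean distance to the query.
        key = self._bucket(q)
        candidates = []
        for k in range(key - radius, key + radius + 1):
            candidates.extend(self.buckets.get(k, []))
        return sorted(candidates, key=lambda item: np.linalg.norm(item[1] - q))
```

Because only the buckets adjacent to the query's index are read, memory access is limited to a small candidate set rather than the full database, which is the memory/efficiency trade-off the abstract describes.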


2021 ◽  
Vol 7 (2) ◽  
pp. 20
Author(s):  
Carlos Lassance ◽  
Yasir Latif ◽  
Ravi Garg ◽  
Vincent Gripon ◽  
Ian Reid

Vision-based localization is the problem of inferring the pose of the camera from a single image. One commonly used approach relies on image retrieval: the query input is compared against a database of localized support examples, and its pose is inferred with the help of the retrieved items. This assumes that images taken from the same place contain the same landmarks and thus have similar feature representations. These representations can be learned to be robust to variations in capture conditions such as time of day or weather. In this work, we introduce a framework that enhances the performance of such retrieval-based localization methods by taking into account additional available information, such as GPS coordinates or the temporal proximity of image acquisition. More precisely, our method constructs a graph from this additional information and uses it to improve the reliability of the retrieval process by filtering the feature representations of support and/or query images. We show that the proposed method significantly improves localization accuracy on two large-scale datasets, as well as mean average precision in classical image retrieval scenarios.
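The graph-based filtering step described above can be illustrated with a simple smoothing operation. This is a hypothetical sketch, not the authors' method: it averages each image's feature vector with those of its graph neighbours (nodes linked by side information such as GPS or temporal proximity), with `alpha` controlling the mix; the function name and parameters are assumptions for illustration.

```python
import numpy as np

def graph_filter(features, adjacency, alpha=0.5):
    """Smooth per-image feature vectors over a side-information graph.

    features : (n, d) array, one feature vector per image
    adjacency: (n, n) array, nonzero where two images are linked
    alpha    : weight given to the neighbour average vs. the original feature
    """
    A = np.asarray(adjacency, dtype=float)
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                 # avoid division by zero for isolated nodes
    P = A / deg                         # row-normalised transition matrix
    # Convex combination of each node's feature and its neighbours' mean.
    return (1 - alpha) * features + alpha * (P @ features)
```

Intuitively, images of the same place acquired close in space or time are pulled toward a common representation, which makes retrieval less sensitive to per-image capture conditions.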


2021 ◽  
Vol 13 (5) ◽  
pp. 869
Author(s):  
Zheng Zhuo ◽  
Zhong Zhou

In recent years, the amount of remote sensing imagery has grown exponentially. The ability to quickly and effectively find required images in massive remote sensing archives is key to organizing, managing, and sharing remote sensing image information. This paper proposes a high-resolution remote sensing image retrieval method with Gabor-CA-ResNet and a split-based deep feature transform network. The main contributions are twofold. (1) To handle the complex textures, diverse scales, and special viewing angles of remote sensing images, a Gabor-CA-ResNet network with ResNet as the backbone is proposed, using Gabor filters to represent the spatial-frequency structure of images and a channel attention (CA) mechanism to obtain more representative and discriminative deep features. (2) A split-based deep feature transform network is designed to divide the features extracted by the Gabor-CA-ResNet network into several segments and transform them separately, significantly reducing the dimensionality and storage space of the deep features. Experimental results on the UCM, WHU-RS, RSSCN7, and AID datasets show that, compared with state-of-the-art methods, our method achieves competitive performance, especially for remote sensing images with rare targets and complex textures.
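The split-based transform in contribution (2) can be sketched as follows. This is a hypothetical illustration, not the paper's network: the feature vector is divided into equal segments and each segment is projected to a lower dimension independently, here with fixed random projections standing in for the learned per-segment transforms; all names and parameters are assumptions.

```python
import numpy as np

def split_transform(features, n_splits=4, out_dim=8, seed=0):
    """Divide (n, d) deep features into n_splits segments and reduce
    each segment's dimensionality separately, then concatenate."""
    n, d = features.shape
    assert d % n_splits == 0, "feature dim must be divisible by n_splits"
    seg = d // n_splits
    rng = np.random.default_rng(seed)
    # One independent projection per segment (stand-in for a learned transform).
    projections = [rng.standard_normal((seg, out_dim)) / np.sqrt(seg)
                   for _ in range(n_splits)]
    parts = [features[:, i * seg:(i + 1) * seg] @ projections[i]
             for i in range(n_splits)]
    return np.concatenate(parts, axis=1)   # shape (n, n_splits * out_dim)
```

Transforming segments separately keeps each projection small, which is where the reduction in dimensionality and storage cost comes from.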

