Image Matching Based on Representative Local Descriptors

Author(s):  
Jian Hou ◽  
Naiming Qi ◽  
Jianxin Kang
2020 ◽  
Vol 2020 (10) ◽  
pp. 313-1-313-7
Author(s):  
Raffaele Imbriaco ◽  
Egor Bondarev ◽  
Peter H.N. de With

Visual place recognition using query and database images from different sources remains a challenging task in computer vision. Our method exploits global descriptors for efficient image matching and local descriptors for geometric verification. We present a novel multi-scale aggregation method for local convolutional descriptors, using memory vector construction for efficient aggregation. The method enables finding a preliminary set of candidate image matches and removing visually similar but erroneous candidates. We deploy the multi-scale aggregation for visual place recognition on three large-scale datasets. We obtain a Recall@10 larger than 94% on the Pittsburgh dataset, outperforming other popular convolutional descriptors used in image retrieval and place recognition. Additionally, we provide a comparison of these descriptors on a more challenging dataset whose query and database images are obtained from different sources, achieving over 77% Recall@10.
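A minimal sketch of sum-based memory-vector aggregation of local CNN descriptors, used here as a stand-in for the multi-scale aggregation described above; the function names, the concatenate-across-scales choice, and the dot-product candidate ranking are illustrative assumptions, not the authors' exact scheme.

```python
# Hedged sketch: sum-based memory-vector aggregation of local CNN descriptors.
# Assumes descriptors are already extracted at several scales; names are illustrative.
import numpy as np

def memory_vector(descriptors: np.ndarray) -> np.ndarray:
    """Aggregate an (N, D) array of local descriptors into one D-dim memory vector
    (sum construction with L2 normalization)."""
    d = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)
    m = d.sum(axis=0)
    return m / np.linalg.norm(m)

def multiscale_memory_vector(per_scale_descriptors: list) -> np.ndarray:
    """Concatenate descriptors from all scales before aggregation
    (one possible multi-scale scheme; the paper's exact scheme may differ)."""
    return memory_vector(np.vstack(per_scale_descriptors))

def rank_candidates(query_vec: np.ndarray, db_vecs: np.ndarray, k: int = 10):
    """Rank database images (rows of db_vecs, L2-normalized) by dot product."""
    scores = db_vecs @ query_vec
    return np.argsort(-scores)[:k], scores
```

The top-k candidates returned by `rank_candidates` would then be passed to the local-descriptor geometric-verification stage mentioned in the abstract.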


2015 ◽  
Vol 15 (3) ◽  
pp. 104-113
Author(s):  
Yingying Li ◽  
Jieqing Tan ◽  
Jinqin Zhong

Abstract: Local descriptors based on binary pattern features offer state-of-the-art distinctiveness. However, their high dimensionality prevents fast matching and makes them unsuitable for low-end devices. In this paper we propose an efficient and feasible learning method that selects discriminative binary patterns for constructing a compact local descriptor. During selection, a search tree with branch-and-bound pruning is used instead of exhaustive enumeration, in order to avoid tremendous computation in training. New local descriptors are constructed from the selected patterns. The effectiveness of the binary pattern selection is confirmed by evaluating the new local descriptors' performance in image matching and object recognition experiments.
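A hedged sketch of branch-and-bound selection of k binary patterns in place of exhaustive enumeration; the additive per-pattern score and the pairwise-correlation redundancy constraint are illustrative assumptions, not the paper's actual selection criterion.

```python
# Hedged sketch: branch-and-bound selection of k binary patterns instead of
# exhaustive subset enumeration. Scoring and redundancy constraint are assumed.
import numpy as np

def select_patterns(scores, corr, k, max_corr=0.8):
    """scores: (P,) discriminability of each candidate pattern (assumed additive).
    corr:   (P, P) pairwise correlation between pattern responses.
    Returns the highest-scoring set of k mutually non-redundant patterns."""
    order = np.argsort(-scores)            # explore high-scoring patterns first
    best = {"score": -np.inf, "set": []}

    def bound(current, start, chosen):
        # Optimistic bound: add the best remaining scores, ignoring redundancy.
        remaining = k - len(chosen)
        return current + scores[order[start:start + remaining]].sum()

    def dfs(start, chosen, current):
        if len(chosen) == k:
            if current > best["score"]:
                best["score"], best["set"] = current, list(chosen)
            return
        if start == len(order) or bound(current, start, chosen) <= best["score"]:
            return                         # prune: cannot beat the incumbent
        p = order[start]
        if all(corr[p, q] < max_corr for q in chosen):
            dfs(start + 1, chosen + [p], current + scores[p])   # include p
        dfs(start + 1, chosen, current)                         # skip p

    dfs(0, [], 0.0)
    return best["set"]
```

The bound adds the best remaining individual scores while ignoring the redundancy constraint, so it never underestimates a branch and the pruning preserves optimality.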


2012 ◽  
Vol 182-183 ◽  
pp. 1868-1872
Author(s):  
Jing Hou ◽  
Jin Xiang Pian ◽  
Ying Zhang ◽  
Ming Yue Wang

A new approach is presented for matching two images in the presence of large scale changes. The novelty of our algorithm is a hierarchical matching strategy for global region features and local descriptors, which combines the descriptive power of global features with the discriminative power of local descriptors. To predict the likely location and scale of an object, global features extracted from segmentation regions are used in the first stage for efficient region matching. This initial matching can be ambiguous due to the instability and unreliability of global region features, so in the later stage local descriptors are matched within each region pair to discard false positives, and the final matches are filtered by RANSAC. Experiments show the effectiveness and superiority of the proposed method compared to other approaches.
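A hedged sketch of the second stage only, assuming candidate region pairs and binary region masks are supplied by the global-feature stage; ORB stands in for the unspecified local descriptor, and OpenCV's RANSAC homography estimation plays the role of the final geometric filter.

```python
# Hedged sketch: verify one candidate region pair by matching local descriptors
# inside the regions and filtering the matches with RANSAC.
import cv2
import numpy as np

def verify_region_pair(img1, mask1, img2, mask2, min_inliers=10):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, mask1)   # keypoints restricted to region
    kp2, des2 = orb.detectAndCompute(img2, mask2)
    if des1 is None or des2 is None:
        return False, []

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < min_inliers:
        return False, []

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = ([m for m, keep in zip(matches, inlier_mask.ravel()) if keep]
               if H is not None else [])
    return len(inliers) >= min_inliers, inliers
```

Region pairs that fail the inlier threshold are discarded as the false positives mentioned above.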


Author(s):  
A. Olsen ◽  
J.C.H. Spence ◽  
P. Petroff

Since the point resolution of the JEOL 200CX electron microscope is u_p = 2.6 Å, it is not possible to obtain a true structure image of any of the III-V or elemental semiconductors with this machine. Since the information resolution limit set by electronic instability (1) is u_0 = (2/πλΔ)^½ = 1.4 Å for Δ = 50 Å, it is however possible to obtain, by choice of focus and thickness, clear lattice images both resembling (see figure 2(b)), and not resembling, the true crystal structure (see (2) for an example of a Fourier image which is structurally incorrect). The crucial difficulty in using the information between u_p and u_0 is the fractional accuracy with which Δf and Cs must be determined, and these accuracies Δf_F/4Δf = (2λu²Δf)⁻¹ and ΔCs/Cs = (λ³u⁴Cs)⁻¹ (for a π/4 phase change, Δf_F the Fourier image period) are strongly dependent on spatial frequency u. Note that ΔCs(u_p)/Cs ≈ 10%, independent of Cs and λ. Note also that the number n of identical high-contrast spurious Fourier images within the depth of field Δz = (αu)⁻¹ (α the beam divergence) decreases with increasing high voltage, since n = 2Δz/Δf_F = θ/α = λu/α (θ the scattering angle). Thus image matching becomes easier in semiconductors at higher voltage because there are fewer high-contrast identical images in any focal series.
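A small numeric check of the claim that n = λu/α falls as the accelerating voltage rises; the relativistic electron wavelength formula is standard, while the spatial frequency (1/1.4 Å⁻¹) and beam divergence (0.5 mrad) are assumed, illustrative values.

```python
# Hedged numeric check of n = λu/α with assumed u and α, showing that the count
# of identical Fourier images within the depth of field shrinks with voltage.
import math

H, M0, E, C = 6.626e-34, 9.109e-31, 1.602e-19, 2.998e8   # SI constants

def electron_wavelength_angstrom(kv):
    """Relativistic electron wavelength λ = h / sqrt(2 m0 eV (1 + eV / 2 m0 c^2))."""
    v = kv * 1e3
    p = math.sqrt(2 * M0 * E * v * (1 + E * v / (2 * M0 * C**2)))
    return (H / p) * 1e10

u = 1 / 1.4        # spatial frequency near the information limit, 1/Å (assumed)
alpha = 5e-4       # beam divergence in rad (assumed)
for kv in (200, 400, 1000):
    lam = electron_wavelength_angstrom(kv)
    print(f"{kv:5d} kV: lambda = {lam:.4f} Å, n = λu/α ≈ {lam * u / alpha:.0f}")
```

With these assumed parameters, n drops from roughly 36 at 200 kV to about 12 at 1000 kV, consistent with the argument that focal-series image matching becomes easier at higher voltage.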


2010 ◽  
Vol 22 (6) ◽  
pp. 1042-1049 ◽  
Author(s):  
Jinde Wang ◽  
Xiaoyan Li ◽  
Lidan Shou ◽  
Gang Chen

2013 ◽  
Vol 32 (11) ◽  
pp. 3157-3160
Author(s):  
Zhen-hua XUE ◽  
Ping WANG ◽  
Chu-han ZHANG ◽  
Si-jia CAI
