A Robust Invariant Local Feature Matching Method for Changing Scenes

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Di Wang ◽  
Hongying Zhang ◽  
Yanhua Shao

The precise estimation of camera position and orientation is an essential step in most machine vision tasks, especially visual localization. To address the shortcomings of local features in changing scenes and the difficulty of realizing a robust end-to-end network spanning feature detection to matching, an invariant local feature matching method for image pairs of changing scenes is proposed; it is a single network that integrates feature detection, descriptor construction, and feature matching. In the feature point detection and descriptor construction stage, joint training is carried out with a neural network. To obtain local features that are robust to viewpoint and illumination changes, the Vector of Locally Aggregated Descriptors based on Neural Network (NetVLAD) module is introduced to compute the degree of correlation between the description vectors of one image and its counterpart. Then, to strengthen the relationship between relevant local features of the image pair, an attentional graph neural network (AGNN) is introduced, and the Sinkhorn algorithm is used to match them; finally, the local feature matching results between the image pair are output. Experimental results show that, compared with existing algorithms, the proposed method improves the robustness of local features in changing scenes, performs better in terms of homography estimation, matching precision, and recall, and, where the visual localization system's environmental requirements are met, enables end-to-end network tasks to be realized.
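A minimal numerical sketch (not the authors' network) of the Sinkhorn step used in such matchers: an optimal-transport-style normalization that turns a descriptor similarity matrix into a soft assignment between the two feature sets. The descriptor values, matrix size, and iteration count below are illustrative only.

```python
import numpy as np

def sinkhorn(scores, n_iters=50):
    """Alternately normalize rows and columns of exp(scores) so the result
    approaches a doubly-stochastic soft-assignment matrix."""
    log_p = scores.astype(float).copy()
    for _ in range(n_iters):
        log_p -= np.log(np.exp(log_p).sum(axis=1, keepdims=True))  # row normalization
        log_p -= np.log(np.exp(log_p).sum(axis=0, keepdims=True))  # column normalization
    return np.exp(log_p)

# Toy descriptors standing in for the network's local feature descriptors
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(4, 128))              # 4 features in image A
desc_b = rng.normal(size=(4, 128))              # 4 features in image B
scores = desc_a @ desc_b.T / np.sqrt(128)       # scaled dot-product similarity
assignment = sinkhorn(scores)
matches = assignment.argmax(axis=1)             # best counterpart in B for each point in A
print(np.round(assignment, 3), matches)
```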

Author(s):  
Suresha M. ◽  
Sandeep

Local features are of great importance in computer vision, where feature detection and feature matching are two important tasks. This paper concentrates on the problem of bird recognition using local features. The investigation evaluates the local features SURF, FAST, and Harris on blurred and illumination-changed images. The FAST and Harris corner algorithms give lower accuracy on blurred images. The SURF algorithm gives the best result for blurred images because it identifies the strongest local features with low time complexity; the experimental demonstration shows that SURF is robust to blur, while FAST is better suited to images with illumination changes.
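A hedged OpenCV sketch of detecting keypoints with the three operators compared above. SURF lives in the opencv-contrib package (cv2.xfeatures2d) and may be unavailable in builds without the non-free modules; the image path and thresholds are placeholders, not values from the paper.

```python
import cv2

img = cv2.imread("bird.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image path

# FAST corner detector
fast = cv2.FastFeatureDetector_create(threshold=25)
kp_fast = fast.detect(img, None)

# Harris corners via goodFeaturesToTrack
harris_pts = cv2.goodFeaturesToTrack(img, maxCorners=500, qualityLevel=0.01,
                                     minDistance=5, useHarrisDetector=True)

# SURF detector + descriptor (requires an opencv-contrib / non-free build)
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_surf, des_surf = surf.detectAndCompute(img, None)

print(len(kp_fast),
      0 if harris_pts is None else len(harris_pts),
      len(kp_surf))
```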


Automatic image registration (IR) is a very challenging and very important problem for hyperspectral remote sensing data. An efficient autonomous IR method is needed that is precise, fast, and robust. A key operation of IR is to align multiple images in a single coordinate system so that variations between the considered images can be extracted and identified. This paper presents a feature descriptor that combines features from the Features from Accelerated Segment Test (FAST) detector and the Binary Robust Invariant Scalable Keypoints (BRISK) descriptor. The proposed hybrid invariant local features (HILF) descriptor extracts useful and similar feature sets from the reference and source images. The feature matching method then finds a precise relationship, or matching, between the two feature sets. An experimental analysis compares BRISK, FAST, and the proposed HILF in terms of the inlier ratio and repeatability evaluation metrics.
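A minimal sketch of the FAST + BRISK combination described above, under our own assumptions rather than the paper's code: FAST supplies the keypoints, BRISK computes binary descriptors on them, and a Hamming-distance brute-force matcher produces the putative correspondences. File paths and thresholds are illustrative.

```python
import cv2

ref = cv2.imread("reference_band.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
src = cv2.imread("source_band.png", cv2.IMREAD_GRAYSCALE)      # placeholder path

fast = cv2.FastFeatureDetector_create(threshold=20)
brisk = cv2.BRISK_create()

def detect_and_describe(img):
    kps = fast.detect(img, None)          # FAST keypoints
    return brisk.compute(img, kps)        # BRISK descriptors on those keypoints

kp1, des1 = detect_and_describe(ref)
kp2, des2 = detect_and_describe(src)

# Binary descriptors -> Hamming distance; cross-check enforces mutual matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative matches")
```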


Author(s):  
Hongmin Liu ◽  
Hongya Zhang ◽  
Zhiheng Wang ◽  
Yiming Zheng

For images with distortions or repetitive patterns, existing matching methods usually work well on only one of the two kinds of images. In this paper, we present a novel triangle guidance and constraints (TGC)-based feature matching method, which achieves good results on both. We first extract stably matched feature points and combine them into triangles as the initial matched triangles; triangles formed from the remaining feature points serve as candidates to be matched. Triangle guidance, based on the connection relationship through the feature point shared by a matched triangle and a candidate, is then defined to find potential matching triangles. Triangle constraints, specifically the location of a vertex relative to the inscribed circle center of the triangle, the scale represented by the ratio of corresponding side lengths of two matching triangles, and the included angles between the sides of two connected triangles, are subsequently used to verify the potential matches and retain the correct ones. Comparative experiments show that the proposed TGC increases the number of matched points with high accuracy under various image transformations, and is especially effective on images with distortions or repetitive patterns, because the triangular structure is not only stable under image transformations but also provides additional geometric constraints.
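A rough sketch (our reading of the idea, not the authors' code) of two of the triangle constraints mentioned above: the inscribed-circle centre used as a reference for vertex location, and the agreement of corresponding side-length ratios as a scale check. Coordinates and the tolerance are toy assumptions.

```python
import numpy as np

def side_lengths(tri):
    a, b, c = tri
    return np.array([np.linalg.norm(b - c),    # side opposite vertex a
                     np.linalg.norm(c - a),    # side opposite vertex b
                     np.linalg.norm(a - b)])   # side opposite vertex c

def incenter(tri):
    """Inscribed-circle centre: side-length-weighted mean of the vertices."""
    l = side_lengths(tri)
    return (tri * l[:, None]).sum(axis=0) / l.sum()

def consistent_scale(tri1, tri2, tol=0.15):
    """Corresponding side-length ratios should agree for a correct triangle match."""
    ratios = side_lengths(tri2) / side_lengths(tri1)
    return ratios.std() / ratios.mean() < tol

tri_a = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])
tri_b = tri_a * 1.5 + np.array([10.0, 5.0])    # scaled and translated copy
print(incenter(tri_a), consistent_scale(tri_a, tri_b))   # True for this toy pair
```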


10.14311/1219 ◽  
2010 ◽  
Vol 50 (4) ◽  
Author(s):  
A. Behrens ◽  
H. Röllinger

In many algorithms the registration of image pairs is done by feature point matching. After feature detection is performed, all extracted interest points are usually used for the registration process without further analysis of their spatial distribution. However, for small, sparse feature point sets of fixed size, as suited to real-time image mosaicking algorithms, a uniform spatial feature distribution across the image becomes relevant. In this paper we therefore discuss and analyze algorithms which provide different spatial point distributions from a given set of SURF features. The evaluations show that a more uniform spatial distribution of the point matches results in lower image registration errors and is thus more beneficial for fast image mosaicking algorithms.
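An illustrative sketch of one simple way to spread a fixed-size feature set uniformly over the image (not necessarily one of the algorithms evaluated in the paper): bucket the SURF keypoints into a coarse grid and keep the strongest response per cell. The grid size and image path are assumptions; SURF again requires an opencv-contrib build.

```python
import cv2

def grid_filter(keypoints, img_shape, grid=(8, 8)):
    """Keep at most one keypoint (the strongest response) per grid cell."""
    h, w = img_shape[:2]
    best = {}
    for kp in keypoints:
        row = min(int(kp.pt[1] * grid[0] / h), grid[0] - 1)
        col = min(int(kp.pt[0] * grid[1] / w), grid[1] - 1)
        cell = (row, col)
        if cell not in best or kp.response > best[cell].response:
            best[cell] = kp
    return list(best.values())

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
surf = cv2.xfeatures2d.SURF_create(400)               # needs opencv-contrib
kps = surf.detect(img, None)
uniform_kps = grid_filter(kps, img.shape)
print(len(kps), "->", len(uniform_kps))
```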


2019 ◽  
Vol 2019 ◽  
pp. 1-15
Author(s):  
Buhai Shi ◽  
Qingming Zhang ◽  
Haibo Xu

This paper presents a geometrical-information-assisted approach for matching local features. With the aid of Bayes’ theorem, it is found that the posterior confidence of matched features can be improved by introducing global geometrical information given by the distances between feature points. Based on this result, we work out an approach to obtain this geometrical information and apply it to assist feature matching. The pivotal techniques in this paper are (1) exploiting the elliptic parameters of feature descriptors to estimate transformations that map feature points in the images to points in an assumed plane; (2) projecting the feature points to the assumed plane and finding a reliable referential point in it; and (3) computing the differences of the distances between the projected points and the referential point. Our approach employs these differences to assist feature matching, reaching better performance than the nearest-neighbor-based approach in precision versus the number of matched features.
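A very rough sketch of the distance-difference idea as we read it (an assumption, not the paper's implementation): after projecting matched points into a common plane, the distance of each point to a shared referential point should change little between the two images, so large distance differences flag doubtful matches. All coordinates and the tolerance below are toy values.

```python
import numpy as np

def distance_difference_filter(pts1, pts2, ref1, ref2, tol=5.0):
    """Keep matches whose point-to-reference distances agree across the two images."""
    d1 = np.linalg.norm(pts1 - ref1, axis=1)
    d2 = np.linalg.norm(pts2 - ref2, axis=1)
    return np.abs(d1 - d2) < tol

# Toy data: projected match coordinates and a referential point in each image
pts1 = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])
pts2 = pts1 + np.array([2.0, -1.0])          # consistently shifted matches
pts2[2] += np.array([30.0, 30.0])            # one outlier match
ref1, ref2 = np.array([0.0, 0.0]), np.array([2.0, -1.0])
print(distance_difference_filter(pts1, pts2, ref1, ref2))   # [ True  True False]
```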


2011 ◽  
Vol 181-182 ◽  
pp. 37-42
Author(s):  
Xin Yu Li ◽  
Dong Yi Chen

Tracking and registration of the camera and object is one of the most important issues in Augmented Reality (AR) systems. Markerless visual tracking technologies based on image features are used in many AR applications. Feature-point-based neural network image matching methods have attracted considerable attention in recent years. This paper proposes an approach to feature point correspondence in image sequences based on transient chaotic neural networks. Rotation- and scale-invariant features are first extracted from the images, and a transient chaotic neural network is then used to perform global feature matching and the initialization phase of tracking. Experimental results demonstrate the efficiency and effectiveness of the proposed method.


2018 ◽  
Vol 246 ◽  
pp. 03011 ◽  
Author(s):  
Chengge Gu ◽  
Jianying Bao ◽  
Haonan Sang ◽  
Jinqiua Mo

Object recognition has drawn great attention in industrial applications, especially automated feeding and assembly, because it can greatly improve line flexibility and save cost. In this paper, a simple but effective method for planar object recognition is presented. The method can deal with objects under complex conditions such as occlusion and clutter. It generates object pose hypotheses from the prediction agreements of different local features on the object. The method contains two stages, an offline stage and an online stage. At the offline stage, representative parts of the object are chosen as its local features and the recognition template is built. At the online stage, matches of the different local features are found in the input image, and prediction agreements are then searched among them to generate the final object pose hypothesis. A thin planar object recognition experiment conducted under occluded conditions shows improved results compared with the traditional overall matching method.
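A hedged sketch of the "prediction agreement" idea as we understand it (not the paper's code): each local feature stores, at the offline stage, its offset to the object centre; at the online stage, every matched feature votes for a centre location, and a cluster of agreeing votes yields the pose hypothesis. Names, tolerances, and the toy data are illustrative.

```python
import numpy as np

def vote_for_center(matched_pts, stored_offsets, radius=8.0, min_votes=3):
    """matched_pts[i] is the image position of local feature i; stored_offsets[i]
    is that feature's offset to the object centre learned offline."""
    votes = matched_pts + stored_offsets
    best_center, best_support = None, 0
    for v in votes:
        support = np.sum(np.linalg.norm(votes - v, axis=1) < radius)
        if support > best_support:
            best_center, best_support = v, support
    return (best_center, best_support) if best_support >= min_votes else (None, 0)

# Toy example: four agreeing features and one bad match (e.g. from an occluded region)
pts = np.array([[100., 50.], [120., 60.], [90., 80.], [110., 75.], [300., 300.]])
offs = np.array([[10., 30.], [-10., 20.], [20., 0.], [0., 5.], [-50., -50.]])
print(vote_for_center(pts, offs))   # centre near [110, 80] with 4 supporting votes
```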


2021 ◽  
Author(s):  
Aikui Tian ◽  
Kangtao Wang ◽  
Liye Zhang ◽  
Bingcai Wei

Abstract: Traditional image matching methods suffer from inaccurate feature point extraction, low robustness, and difficulty in identifying feature points in regions with poor texture. This paper proposes a new local image feature matching method that replaces the traditional sequential steps of feature detection, description, and matching. First, coarse features at 1/8 of the original resolution are extracted from the image, flattened into a one-dimensional vector, and combined with a positional encoding; they are fed to the self-attention and cross-attention layers of a Transformer module and passed through a differentiable matching layer to produce a confidence matrix. After applying a confidence threshold and the mutual-nearest criterion, a coarse-level matching prediction is obtained. Second, the matches are refined at the fine level. Once fine-level matches are established, the overlapping image regions are aligned by transforming them into a unified coordinate system, and the images are fused by a weighted fusion algorithm to achieve a seamless mosaic. The self-attention and cross-attention layers of the Transformer thus provide the feature descriptors of the image. Experiments show that, for feature point extraction, the LoFTR algorithm is more accurate than the traditional SIFT algorithm in both low-texture and texture-rich regions; at the same time, the image mosaic obtained by this method is more accurate than those of traditional classic algorithms.
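A sketch of the coarse-level selection step described above, under our own assumptions: from a confidence matrix between the two coarse feature grids, keep only pairs that are mutual nearest neighbours and exceed a confidence threshold. The matrix here is a toy stand-in for the Transformer output, and the threshold value is an assumption.

```python
import numpy as np

def coarse_matches(conf, threshold=0.2):
    """Return (i, j) index pairs that are both row-wise and column-wise argmax
    ("mutual closest") and whose confidence exceeds the threshold."""
    row_best = conf.argmax(axis=1)   # best column j for each row i
    col_best = conf.argmax(axis=0)   # best row i for each column j
    pairs = []
    for i, j in enumerate(row_best):
        if col_best[j] == i and conf[i, j] > threshold:
            pairs.append((i, j))
    return pairs

conf = np.array([[0.70, 0.10, 0.05],
                 [0.05, 0.60, 0.10],
                 [0.10, 0.55, 0.30]])   # rows: image A cells, columns: image B cells
print(coarse_matches(conf))             # [(0, 0), (1, 1)]
```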

