Modelling of feature matching performance on correlated speckle images

Author(s):  
Victor Wang ◽  
Michael Hayes
Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1839


Author(s):  
Yutong Zhang ◽  
Jianmei Song ◽  
Yan Ding ◽  
Yating Yuan ◽  
Hua-Liang Wei

Fisheye images, with their much larger field of view (FOV), suffer from severe radial distortion, so the associated feature matching process cannot achieve its best performance when traditional feature descriptors are used. To address this challenge, this paper reports a novel distorted Binary Robust Independent Elementary Features (BRIEF) descriptor for fisheye images based on a spherical perspective model. Firstly, the 3D gray centroid of each feature point is computed, and the position and direction of the feature points on the spherical image are described by a constructed feature-point attitude matrix. Then, based on this attitude matrix, the coordinate mapping between the BRIEF descriptor template and the fisheye image is established, which allows the distorted BRIEF descriptor to be computed. Four experiments test and verify the invariance and matching performance of the proposed descriptor on fisheye images. The experimental results show that the proposed descriptor exhibits good distortion invariance and significantly improves matching performance on fisheye images.
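To make the coordinate-mapping idea concrete, here is a minimal sketch (not the paper's implementation) of carrying one BRIEF sampling offset from the tangent plane at a keypoint back into fisheye pixel coordinates. The equidistant model r = f·θ and all function names are illustrative assumptions; the paper uses a full spherical perspective model with an attitude matrix.

```python
import math

def equidistant_project(theta, phi, f, cx, cy):
    """Project a ray at polar angle theta, azimuth phi onto the fisheye
    image using the equidistant model r = f * theta (an assumption here;
    the paper works with a spherical perspective model)."""
    r = f * theta
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

def distorted_brief_pair(u, v, dx, dy, f, cx, cy):
    """Map one BRIEF test offset (dx, dy), defined in the tangent plane
    at keypoint (u, v), back into fisheye pixel coordinates."""
    # Lift the keypoint to viewing-sphere angles (inverse equidistant).
    r = math.hypot(u - cx, v - cy)
    theta = r / f
    phi = math.atan2(v - cy, u - cx)
    # Perturb on the sphere: dx moves radially, dy tangentially.
    theta2 = theta + dx / f
    phi2 = phi + (dy / f) / max(math.sin(theta), 1e-6)
    # Re-project the perturbed ray into the distorted image.
    return equidistant_project(theta2, phi2, f, cx, cy)
```

Sampling the image at these mapped positions, rather than at the planar BRIEF offsets, is what makes the binary tests follow the distortion.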


Author(s):  
L. Chen ◽  
F. Rottensteiner ◽  
C. Heipke

Abstract. Matching images with large changes in viewpoint and viewing direction, which result in large perspective differences, is still a very challenging problem. Affine shape estimation, orientation assignment and feature description algorithms based on detected hand-crafted features have been shown to be error-prone. In this paper, affine shape estimation, orientation assignment and description of local features are achieved through deep learning. These three modules are trained with loss functions that optimize the matching performance of input patch pairs. The trained descriptors are first evaluated on the Brown dataset (Brown et al., 2011), a standard descriptor performance benchmark. The whole pipeline is then tested on images of small blocks acquired with an aerial penta camera to compute image orientation. The results show that the learned features perform significantly better than alternatives based on hand-crafted features.
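Losses of this kind typically score descriptors by relative distance between matching and non-matching patch pairs. A framework-free sketch of one common choice, a triplet margin loss, is shown below purely as an illustration; the paper defines its own loss per module.

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin loss used (in spirit) to train matching descriptors:
    the matching pair (anchor, positive) should be closer than the
    non-matching pair (anchor, negative) by at least `margin`.
    Descriptors are plain lists of floats; the margin is illustrative."""
    def d2(u, v):
        # Squared Euclidean distance between two descriptor vectors.
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return max(0.0, d2(anchor, positive) - d2(anchor, negative) + margin)
```

When the negative is already farther than the positive by the margin, the loss is zero and the triplet contributes no gradient.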


2017 ◽  
Vol 70 (5) ◽  
pp. 1133-1152
Author(s):  
Zeyu Li ◽  
Jinling Wang ◽  
Kai Chen ◽  
Yu Sun

Vision navigation using environmental features has been widely applied where satellite signals are unavailable. However, the matching performance of traditional environmental features such as keypoints degrades significantly in weakly textured areas, deteriorating navigation performance. Further, the user needs to evaluate and assure feature matching quality. In this paper, a new feature, named the Line Segment Intersection Feature (LSIF), is proposed to solve this availability problem in weakly textured regions. A combined descriptor involving global structure and local gradient is then designed for similarity comparison. To achieve reliable point-to-point matching, a coarse-to-fine matching algorithm is developed, which improves the performance of the point-set matching algorithm. Finally, a matching quality evaluation framework is proposed to assure matching performance. Comparative experiments demonstrate that the proposed feature has superior overall performance, especially in the number of correctly matched keypoints and in matching correctness. Also, using real image sets with weak texture, it is shown that the proposed LSIF achieves improved navigation solutions with high continuity and accuracy.
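The core geometric step, turning detected line segments into point features, can be sketched as follows. This is a simplified reading of the construction: intersections of the supporting lines are collected as candidate keypoints, while the paper additionally filters them and attaches its combined global/local descriptor.

```python
def segment_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through segments (p1, p2)
    and (p3, p4); returns None when the lines are parallel."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-12:
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def lsif_keypoints(segments):
    """All pairwise intersections of detected segments, used as
    candidate Line Segment Intersection Features."""
    pts = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            p = segment_intersection(*segments[i], *segments[j])
            if p is not None:
                pts.append(p)
    return pts
```

Because straight edges survive even where texture is weak, such intersection points remain detectable where corner-like keypoints vanish.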


Author(s):  
Vijay Chandrasekhar ◽  
Gabriel Takacs ◽  
David M. Chen ◽  
Sam S. Tsai ◽  
Mina Makar ◽  
...  

2013 ◽  
Vol 373-375 ◽  
pp. 536-540 ◽  
Author(s):  
Jing Li ◽  
Tao Yang

Robust and efficient indistinctive-feature matching and outlier removal is an essential problem in many computer vision applications. In this paper we present a simple and fast algorithm named LDGTH (Local Descriptor Generalized Hough Transform) to handle this problem. The main characteristics of the proposed method are: (1) A novel Local Descriptor Generalized Hough Transform framework is presented, in which the local geometric characteristics of invariant feature descriptors are fused together as a global constraint for feature correspondence verification. (2) Unlike the standard Generalized Hough Transform, our approach greatly reduces the computational and storage requirements of the parameter space by taking advantage of the invariant feature correspondences. (3) The proposed algorithm can be seamlessly embedded into existing image matching frameworks and significantly improves image matching performance in both speed and robustness under challenging conditions. In the experiments we use both synthetic and real-world image data with high outlier ratios and severe changes in viewpoint, scale, illumination, image blur, compression and noise to evaluate the proposed method, and the results demonstrate that our approach achieves faster and better matching performance than traditional algorithms.
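The general flavor of Hough-based correspondence verification can be sketched as below: each putative match, carrying keypoint position, scale and orientation in both images, votes for one bin of a coarse similarity-transform space, and the dominant bin is kept as the inlier set. This is a generic sketch of the technique, not LDGTH itself; bin widths and the data layout are illustrative assumptions.

```python
import math
from collections import defaultdict

def ght_filter(matches, pos_bin=40.0, scale_bin=0.5,
               ang_bin=math.radians(30)):
    """Hough-style verification of putative matches.  Each match is
    ((x1, y1, s1, a1), (x2, y2, s2, a2)): keypoint position, scale and
    orientation in the two images.  A match votes for one bin in a
    coarse (dx, dy, log-scale, rotation) space; the largest bin is
    returned as the consistent (inlier) set."""
    bins = defaultdict(list)
    for m in matches:
        (x1, y1, s1, a1), (x2, y2, s2, a2) = m
        # Quantize the similarity transform implied by this single match.
        # (Angle wrap-around is ignored in this sketch.)
        key = (round((x2 - x1) / pos_bin),
               round((y2 - y1) / pos_bin),
               round(math.log2(s2 / s1) / scale_bin),
               round((a2 - a1) / ang_bin))
        bins[key].append(m)
    return max(bins.values(), key=len)
```

Because each invariant feature already fixes all four transform parameters, a single correspondence casts a single vote, which is what keeps the parameter space small compared to the classical Generalized Hough Transform.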


Robotics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 85
Author(s):  
Annika Hoffmann

The detection and description of features is one basic technique for many visual robot navigation systems in both indoor and outdoor environments. Matched features from two or more images are used to solve navigation problems, e.g., by establishing spatial relationships between different poses in which the robot captured the images. Feature detection and description is particularly challenging in outdoor environments, and widely used grayscale methods lead to high numbers of outliers. In this paper, we analyze the use of color information for keypoint detection and description. We consider grayscale and color-based detectors and descriptors, as well as combinations of them, and evaluate their matching performance. We demonstrate that the use of color information for feature detection and description markedly increases the matching performance.
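One simple way to fold color into a descriptor, shown here only as an illustration and not as the paper's evaluated methods, is to concatenate per-channel statistics of the keypoint patch:

```python
def color_descriptor(patch, bins=8):
    """Concatenate per-channel intensity histograms of a patch
    (a list of (r, g, b) tuples with values 0-255), normalized by
    patch size.  A minimal sketch of adding color information to a
    descriptor; real color descriptors are considerably richer."""
    hist = [0] * (3 * bins)
    for px in patch:
        for ch, v in enumerate(px):
            # Clamp 255 into the last bin.
            hist[ch * bins + min(v * bins // 256, bins - 1)] += 1
    n = float(len(patch))
    return [h / n for h in hist]
```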


2020 ◽  
Vol 8 (6) ◽  
pp. 449
Author(s):  
Iman Abaspur Kazerouni ◽  
Gerard Dooly ◽  
Daniel Toal

Feature extraction and matching is a key component in image stitching and a critical step in advancing image reconstruction, machine vision and robotic perception algorithms. This paper presents a fast and robust underwater image mosaicking system based on (2D)2PCA, A-KAZE keypoint extraction and optimal seam-line methods. The system uses image enhancement as a preprocessing step to improve image quality and thereby keyframe extraction and matching performance, leading to better-quality mosaicking. The application focus of this paper is underwater imaging, and it demonstrates the suitability of the developed system for advanced underwater reconstruction. The results show that the proposed method can address the noise, mismatching and quality issues typically found in underwater image datasets. They also demonstrate that the proposed method is scale-invariant and show improvements in processing speed and system robustness over other methods in the literature.
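The seam-line step in such mosaicking pipelines is often a dynamic program over a cost map of the overlap region. The sketch below shows that generic idea under the assumption of a minimal-cost vertical seam with 8-connected steps; it is not the paper's specific seam-line method.

```python
def optimal_seam(cost):
    """Minimal-cost vertical seam through an overlap-region cost map
    (a rows x cols list of lists), found by dynamic programming.
    Returns one column index per row."""
    rows, cols = len(cost), len(cost[0])
    acc = [row[:] for row in cost]
    # Forward pass: accumulate the cheapest path cost into each cell,
    # allowing steps to the three cells above (left, center, right).
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(c - 1, 0), min(c + 2, cols)
            acc[r][c] += min(acc[r - 1][lo:hi])
    # Backtrack from the cheapest bottom cell.
    seam = [min(range(cols), key=lambda c: acc[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        seam.append(min(range(lo, hi), key=lambda cc: acc[r][cc]))
    return seam[::-1]
```

Blending the two images along such a seam, rather than along a straight cut, hides exposure differences and small misalignments in the overlap.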

