Fast and Adaptive Local Consensus Verification for Robust Feature Correspondence

Author(s): Liang Shen, Tian Jin, Xiaotao Huang, Qin Xin, Shaodi Ge, ...
Sensors, 2021, Vol 21 (14), pp. 4719
Author(s): Huei-Yung Lin, Yuan-Chi Chung, Ming-Liang Wang

This paper presents a novel self-localization technique for mobile robots using a central catadioptric camera. A unified sphere model for the image projection is derived through catadioptric camera calibration. The geometric properties of the camera projection model are used to obtain the intersections of vertical lines with the ground plane in the scene. Unlike conventional stereo vision techniques, the feature points are projected onto a known planar surface, and the plane equation is used for depth computation. The 3D coordinates of the base points on the ground are calculated from consecutive image frames. The motion trajectory is then derived by computing the rotation and translation between robot positions. We develop an algorithm for feature correspondence matching based on the invariance of structure in 3D space. Experimental results obtained from real scene images demonstrate the feasibility of the proposed method for mobile robot localization.
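The trajectory step above recovers the rotation and translation between robot positions from matched ground points. The abstract does not specify the estimator; a common choice for this step is a least-squares rigid alignment (Kabsch/Procrustes), sketched here in 2D for ground-plane points. This is an illustration of the general technique, not the paper's implementation:

```python
import numpy as np

def estimate_rigid_motion(src, dst):
    """Least-squares rotation R and translation t with dst ≈ src @ R.T + t.

    src, dst: (N, 2) arrays of matched ground-plane points from two
    consecutive frames. Illustrative Kabsch/Procrustes solution; the
    paper's exact estimator may differ.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: points rotated by 30 degrees and shifted
# recover the same motion.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts = np.random.default_rng(0).random((10, 2))
moved = pts @ R_true.T + np.array([0.5, -1.0])
R, t = estimate_rigid_motion(pts, moved)
assert np.allclose(R, R_true) and np.allclose(t, [0.5, -1.0])
```

Chaining the per-frame (R, t) pairs yields the robot's motion trajectory described in the abstract.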


2012, Vol 11 (1), pp. 25-32
Author(s): Yaqiong Liu, Seah Hock Soon, Ying He, Juncong Lin, Jiazhi Xia

The establishment of a good correspondence mapping is a key issue in planar animations such as image morphing and deformation. In this paper, we present a novel mapping framework for the animation of complex shapes. We first let the user extract the outlines of the object of interest and the target area of interest from the input images and specify some optional feature lines, and then generate a sparse Delaunay triangulation taking the outlines and feature lines of the source shape as constraints. We then copy the topology from the source shape to the target shape to construct a valid triangulation of the target shape. After that, each triangle of this mesh is further subdivided into a dense mesh patch, and each patch is parameterized onto a unit circle domain. With such a parameterization, we can easily construct a correspondence mapping between the source patches and the corresponding target patches. Our framework works well for various applications such as shape deformation and morphing, as demonstrated by the pleasing results it generates.
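The topology-copying step can be illustrated with a toy example: since the source and target outline vertices are matched one-to-one, a triangulation of the source shape (stored as vertex-index triples) can be reused verbatim on the target vertices. The points and triangulation below are hypothetical, and validity is checked here only by triangle orientation; the paper uses a constrained Delaunay triangulation and a fuller validity construction:

```python
import numpy as np

# Hypothetical matched outline vertices (same index = same feature).
src_pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tgt_pts = np.array([[0.0, 0.0], [2.0, 0.0], [2.2, 1.1], [0.0, 1.0]])

# A triangulation of the source shape, as vertex-index triples.
triangles = np.array([[0, 1, 2], [0, 2, 3]])

def signed_areas(points, tris):
    """Signed area of each triangle; positive = counter-clockwise."""
    a, b, c = points[tris[:, 0]], points[tris[:, 1]], points[tris[:, 2]]
    u, v = b - a, c - a
    return 0.5 * (u[:, 0] * v[:, 1] - u[:, 1] * v[:, 0])

# "Copying the topology": reuse the same index triples on the target
# vertices, then verify no triangle flipped (signed area stays positive).
assert np.all(signed_areas(src_pts, triangles) > 0)
assert np.all(signed_areas(tgt_pts, triangles) > 0)
```

A flipped (negative-area) triangle would indicate that the copied connectivity does not yield a valid mesh for that target shape and needs repair.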


Sensors, 2019, Vol 19 (23), pp. 5310
Author(s): Lai Kang, Yingmei Wei, Jie Jiang, Yuxiang Xie

Cylindrical panorama stitching can generate high-resolution images of a scene with a wide field of view (FOV), making it a useful scene representation for applications such as environmental sensing and robot localization. Traditional image stitching methods based on hand-crafted features are effective for constructing a cylindrical panorama from a sequence of images when the scene contains sufficient reliable features. However, these methods cannot handle low-texture environments where no reliable feature correspondence can be established. This paper proposes a novel two-step image alignment method based on deep learning and iterative optimization to address this issue. In particular, a lightweight end-to-end trainable convolutional neural network (CNN) architecture called ShiftNet is proposed to estimate the initial shifts between images, which are further optimized in a sub-pixel refinement procedure based on a specified camera motion model. Extensive experiments on a synthetic dataset, rendered photo-realistic images, and real images were carried out to evaluate the performance of the proposed method. Both qualitative and quantitative results demonstrate that cylindrical panorama stitching based on the proposed image alignment method yields significant improvements over traditional feature-based methods and recent deep learning based methods in challenging low-texture environments.
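For context on the initial-shift step: the paper predicts shifts with a CNN (ShiftNet), but the classical baseline for estimating a pure translation between two images is phase correlation. The sketch below shows that baseline, not ShiftNet, and assumes circular (wrap-around) shifts:

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate integer (dy, dx) with img_a ≈ np.roll(img_b, (dy, dx)).

    Classical phase correlation baseline; the paper instead predicts the
    initial shift with a CNN before sub-pixel refinement.
    """
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (FFT wrap-around).
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = np.roll(a, shift=(5, -3), axis=(0, 1))   # b is a shifted by (5, -3)
assert phase_correlation_shift(b, a) == (5, -3)
```

This baseline fails exactly where the paper's motivation lies: with low-texture images the correlation peak becomes unreliable, which is what the learned shift estimator is meant to remedy.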

