Epipolar Constraints
Recently Published Documents


TOTAL DOCUMENTS: 22 (five years: 6)

H-INDEX: 5 (five years: 0)

2021 · Vol. 13 (10) · pp. 2017
Author(s): Anbang Liang, Qingquan Li, Zhipeng Chen, Dejin Zhang, Jiasong Zhu, ...

Fisheye cameras are widely used in visual localization because of their wide field of view. However, the severe distortion in fisheye images leads to feature matching difficulties. This paper proposes an IMU-assisted fisheye image matching method called spherically optimized random sample consensus (So-RANSAC). We converted the putative correspondences into fisheye spherical coordinates and then used an inertial measurement unit (IMU) to provide relative rotation angles that assist the fisheye epipolar constraints and improve the accuracy of pose estimation and mismatch removal. To verify the performance of So-RANSAC, experiments were performed on fisheye images of urban drainage pipes and on public data sets. The results showed that So-RANSAC effectively improves mismatch removal accuracy and outperforms commonly used fisheye image matching methods across the experimental scenarios.
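As a rough illustration of the idea, and not the authors' exact So-RANSAC implementation, the sketch below shows a 2-point RANSAC for the translation direction when the relative rotation is already supplied by an IMU. The function name, thresholds, and iteration count are assumptions for this example, and correspondences are assumed to be given as unit bearing vectors on the fisheye viewing sphere.

```python
# Hedged sketch (not the authors' exact So-RANSAC): 2-point RANSAC for the
# translation direction t when the relative rotation R_imu is known from an IMU.
import numpy as np

def two_point_ransac_translation(x1, x2, R_imu, iters=500, thresh=1e-3, rng=None):
    """x1, x2: (N, 3) unit rays on the viewing sphere; R_imu: 3x3 relative rotation.
    Returns (t_hat, inlier_mask) for the epipolar constraint x2^T [t]_x R x1 = 0."""
    rng = np.random.default_rng() if rng is None else rng
    v = (R_imu @ x1.T).T                  # rays of view 1 rotated into view 2's frame
    a = np.cross(v, x2)                   # each constraint row a_i must be orthogonal to t
    best_t, best_inliers = None, np.zeros(len(x1), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(x1), size=2, replace=False)
        t = np.cross(a[i], a[j])          # direction orthogonal to both sampled constraints
        norm = np.linalg.norm(t)
        if norm < 1e-12:
            continue                      # degenerate sample, skip
        t /= norm
        inliers = np.abs(a @ t) < thresh  # algebraic epipolar residual per correspondence
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```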


Robotics · 2021 · Vol. 10 (1) · pp. 23
Author(s): Bismaya Sahoo, Mohammad Biglarbegian, William Melek

In this paper, we present a novel method for visual-inertial odometry for land vehicles. Our technique is robust to the unintended but unavoidable bumps encountered when an off-road land vehicle traverses potholes, speed bumps, or general changes in terrain. In contrast to tightly-coupled methods for visual-inertial odometry, we split the joint visual and inertial residuals into two separate steps and perform the inertial optimization after the direct visual alignment step. We utilize all visual and geometric information encoded in a keyframe by including the inverse-depth variances in our optimization objective, making our method a direct approach. The primary contribution of our work is the use of epipolar constraints, computed from a direct image alignment, to correct the pose prediction obtained by integrating IMU measurements, while simultaneously building a semi-dense map of the environment in real time. Through experiments, both indoor and outdoor, we show that our method is robust to sudden spikes in inertial measurements while achieving better accuracy than the state-of-the-art direct, tightly-coupled visual-inertial fusion method.
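The correction signal described above can be illustrated with a small, generic sketch: given an IMU-predicted relative pose (R, t) and tracked, normalized image points, the Sampson epipolar error measures how inconsistent the prediction is with the image data. This is not the paper's actual optimization; the function names and inputs are assumptions.

```python
# Generic sketch, not the paper's optimization: Sampson epipolar error of tracked,
# normalized points under an IMU-predicted relative pose (R, t).
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def sampson_errors(R, t, p1, p2):
    """p1, p2: (N, 2) normalized (calibrated) image coordinates of tracked points."""
    E = skew(t) @ R                              # essential matrix of the predicted motion
    x1 = np.hstack([p1, np.ones((len(p1), 1))])
    x2 = np.hstack([p2, np.ones((len(p2), 1))])
    Ex1 = (E @ x1.T).T
    Etx2 = (E.T @ x2.T).T
    num = np.sum(x2 * Ex1, axis=1) ** 2          # (x2^T E x1)^2
    den = Ex1[:, 0]**2 + Ex1[:, 1]**2 + Etx2[:, 0]**2 + Etx2[:, 1]**2
    return num / den                             # small values = consistent with the pose
```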


2020 · Vol. 10 (24) · pp. 8851
Author(s): Diana-Margarita Córdova-Esparza, Juan Terven, Julio-Alejandro Romero-González, Alfonso Ramírez-Pedraza

In this work, we present a panoramic 3D stereo reconstruction system composed of two catadioptric cameras. Each one consists of a CCD camera and a parabolic convex mirror that allows the acquisition of catadioptric images. We describe the calibration approach and propose improving existing deep feature matching methods with epipolar constraints. We show that the improved matching algorithm covers more of the scene than classic feature detectors, yielding broader and denser reconstructions of outdoor environments. Our system can also generate accurate measurements in the wild without the large amounts of data required by deep learning-based systems. We demonstrate the system’s feasibility and effectiveness as a practical stereo sensor with real experiments in indoor and outdoor environments.
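A minimal, generic sketch of the epipolar filtering step is given below: putative deep feature matches are pruned by their symmetric distance to the epipolar lines of a precomputed fundamental matrix. The pinhole fundamental matrix F and the pixel threshold are illustrative assumptions; the paper's catadioptric geometry would require the corresponding epipolar model instead.

```python
# Generic pinhole sketch, not the paper's catadioptric model: prune putative matches
# by their symmetric distance to the epipolar lines of a known fundamental matrix F.
import numpy as np

def epipolar_filter(F, pts1, pts2, max_dist=1.5):
    """pts1, pts2: (N, 2) pixel coordinates of putative matches; returns a boolean mask."""
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    l2 = (F @ x1.T).T                            # epipolar lines in image 2
    l1 = (F.T @ x2.T).T                          # epipolar lines in image 1
    d2 = np.abs(np.sum(x2 * l2, axis=1)) / np.hypot(l2[:, 0], l2[:, 1])
    d1 = np.abs(np.sum(x1 * l1, axis=1)) / np.hypot(l1[:, 0], l1[:, 1])
    return np.maximum(d1, d2) < max_dist         # symmetric point-to-line distance test
```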


2020 · Vol. 32 (15) · pp. 937-940
Author(s): Shuo Peng, Shaohui Zhang, Yao Hu, Qun Hao, Xuemin Cheng

Author(s): X. Huang, R. Qin, M. Chen

Stereo dense matching has become one of the dominant tools in 3D reconstruction of urban regions due to its low cost and high flexibility in generating 3D points. However, the image-derived 3D points are often inaccurate around building edges, which limits their use in several vision tasks (e.g., building modelling). To generate 3D point clouds or digital surface models (DSM) with sharp boundaries, this paper integrates robustly matched lines to improve dense matching and proposes a non-local disparity refinement of building edges through an iterative least-squares plane adjustment. In our method, we first extract and match straight lines in the images using epipolar constraints, then detect building edges from these lines by comparing matching results on both sides of each line, and finally refine disparities through an iterative least-squares plane adjustment constrained by the matched lines, yielding sharper and more accurate edges. Experiments conducted on both satellite and aerial data demonstrate that our proposed method generates more accurate DSMs with sharper object boundaries.
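To make the plane-adjustment step concrete, here is a hedged sketch of fitting a disparity plane d ≈ a·x + b·y + c by iteratively reweighted least squares; the functional form, weights, and parameter names are assumptions, not the paper's exact algorithm. Disparities near a detected building edge could then be replaced by the plane evaluated at their pixel coordinates, which is the spirit of the refinement described above.

```python
# Hedged sketch of a disparity-plane fit d ~ a*x + b*y + c via iteratively
# reweighted least squares; parameter names and weights are assumptions.
import numpy as np

def fit_disparity_plane(xy, d, iters=5, scale=1.0):
    """xy: (N, 2) pixel coordinates, d: (N,) disparities; returns (a, b, c)."""
    A = np.hstack([xy, np.ones((len(xy), 1))])
    w = np.ones(len(d))
    for _ in range(iters):
        sw = np.sqrt(w)[:, None]
        params, *_ = np.linalg.lstsq(sw * A, sw[:, 0] * d, rcond=None)
        r = A @ params - d                       # plane residuals
        w = 1.0 / (1.0 + (r / scale) ** 2)       # Cauchy-style reweighting of outliers
    return params
```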


Author(s): H. M. Mohammed, N. El-Sheimy

Preliminary matching of image features is based on the distance between their descriptors. Matches are then filtered using RANSAC or a similar method that fits the matches to a model, usually the fundamental matrix, and rejects matches that do not belong to that model. There are a few issues with this scheme. First, mismatches are no longer considered after RANSAC rejection. Second, RANSAC might fail to find an accurate model if the number of outliers is significant. Third, a fundamental matrix model can be degenerate even if the matches are all inliers. To address these issues, a new method is proposed that relies on prior knowledge of the images’ geometry, which can be obtained from orientation sensors or from a set of initial matches. Using a set of initial matches, a fundamental matrix and a global homography can be estimated. These two entities are then used with a detect-and-match strategy to obtain more accurate matches. Features are detected in one image, and the locations of their correspondences in the other image are predicted using the epipolar constraints and the global homography. The feature correspondences are then corrected with template matching. Since a global homography is only valid for a plane-to-plane mapping, discrepancy vectors are introduced as an alternative to local homographies. The method was tested on Unmanned Aerial Vehicle (UAV) images, where the images are usually taken successively and differences in scale and orientation are not an issue. The method promises to find a well-distributed set of matches over the scene structure, especially in scenes with multiple depths. Furthermore, the number of outliers is reduced, encouraging the use of a least-squares adjustment instead of RANSAC to fit a non-degenerate model.
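The detect-and-match idea can be sketched as follows: predict a feature's location in the second image with the global homography, snap the prediction onto its epipolar line from the fundamental matrix, and refine the result by template matching in a small search window. The patch and search sizes, the grayscale uint8 inputs, and the function name are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the detect-and-match strategy; H, F, patch/search sizes are
# illustrative assumptions, and grayscale uint8 images are assumed.
import cv2
import numpy as np

def predict_and_refine(img1, img2, pt1, H, F, patch=15, search=40):
    """Predict pt1's correspondence in img2 via H, snap it to the epipolar line
    from F, then refine by normalized cross-correlation template matching."""
    x1 = np.array([pt1[0], pt1[1], 1.0])
    p = H @ x1
    p = p[:2] / p[2]                             # homography prediction in image 2
    l = F @ x1                                   # epipolar line a*x + b*y + c = 0 in image 2
    n = np.array([l[0], l[1]])
    p = p - n * (n @ p + l[2]) / (n @ n)         # orthogonal projection onto the line
    r = patch // 2
    y0, x0 = int(round(pt1[1])), int(round(pt1[0]))
    templ = img1[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1]
    yc, xc = int(round(p[1])), int(round(p[0]))
    win = img2[yc - search:yc + search + 1, xc - search:xc + search + 1]
    if templ.shape != (patch, patch) or win.shape[0] < patch or win.shape[1] < patch:
        return None                              # prediction too close to the image border
    res = cv2.matchTemplate(win, templ, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    refined = np.array([xc - search + loc[0] + r, yc - search + loc[1] + r], float)
    return refined, score                        # corrected correspondence and its NCC score
```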


Author(s): Zuxun Zhang, Jia’nan He, Shan Huang, Yansong Duan

Dense image matching is a fundamental task in photogrammetry and computer vision. In this paper, we present a method derived from the seed-and-grow approach. Its basic procedure is as follows: first, seed and feature points are extracted, and in the first expansion step the feature points around every seed point are found and their correspondence information is determined. In the second expansion step, the seed points around each feature point are found and used to estimate the likely matching patch. Finally, the matching results are refined with a traditional correlation-based method. Our method operates on two frames without geometric constraints, specifically epipolar constraints. It (1) handles frame, line-array, natural-scene, and even synthetic aperture radar (SAR) images, and (2) remains computationally efficient thanks to the seed-and-grow concept and the efficiency of the correlation-based method.
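A minimal sketch of the correlation-based refinement step is shown below: normalized cross-correlation (NCC) between a patch around a point in one image and candidate patches in the other. The window size and helper names are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of correlation-based refinement: normalized cross-correlation
# between grayscale patches; window size and helper names are assumptions.
import numpy as np

def ncc(patch1, patch2):
    """Normalized cross-correlation of two equally sized grayscale patches."""
    a = patch1.astype(float) - patch1.mean()
    b = patch2.astype(float) - patch2.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom < 1e-12 else float((a * b).sum() / denom)

def best_match(img1, img2, pt, candidates, half=7):
    """Pick the candidate pixel in img2 whose patch best correlates with the patch at pt in img1."""
    x, y = int(pt[0]), int(pt[1])
    ref = img1[y - half:y + half + 1, x - half:x + half + 1]
    scores = [ncc(ref, img2[cy - half:cy + half + 1, cx - half:cx + half + 1])
              for cx, cy in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]
```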

