Camera parameter estimation and 3D reconstruction with bundle adjustment

Author(s):  
Jianying Yuan ◽  
Xianyong Liu ◽  
JianCai Hu
2011 ◽  
Vol 383-390 ◽  
pp. 5193-5199 ◽  
Author(s):  
Jian Ying Yuan ◽  
Xian Yong Liu ◽  
Zhi Qiang Qiu

In optical measuring systems with a handheld digital camera, image point matching is very important for three-dimensional (3D) reconstruction. Traditional matching algorithms are usually based on epipolar geometry or multiple baselines. Mistaken matching points cannot be eliminated by epipolar geometry alone, and many matching points are lost by multi-baseline methods. In this paper, a robust algorithm is presented to eliminate mistaken matching feature points in the process of 3D reconstruction from multiple images. The algorithm includes three steps: (1) pre-matching the feature points using constraints from epipolar geometry and image topological structure; (2) eliminating mistaken matching points by the principle of triangulation across multiple images; (3) refining camera external parameters by bundle adjustment. After the external parameters of every image are refined, steps (1) to (3) are repeated until all the feature points have been matched. Comparative experiments with real image data have shown that mistaken matching feature points can be effectively eliminated and nearly no matching points are lost, a better performance than traditional matching algorithms achieve.
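Step (2) above, rejecting a multi-image correspondence whose triangulated point does not reproject consistently into every view, can be sketched as follows. This is a minimal illustration with hypothetical helper names and a standard DLT triangulation, not the paper's implementation:

```python
import numpy as np

def triangulate_dlt(P_list, pts_2d):
    """Linear (DLT) triangulation of one 3D point seen in several images.
    P_list: list of 3x4 projection matrices; pts_2d: matching (x, y) pairs."""
    A = []
    for P, (x, y) in zip(P_list, pts_2d):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X / X[3]  # homogeneous -> Euclidean scale

def reprojection_errors(P_list, pts_2d, X):
    """Pixel distance between each observation and the reprojected point."""
    errs = []
    for P, (x, y) in zip(P_list, pts_2d):
        p = P @ X
        errs.append(np.hypot(p[0] / p[2] - x, p[1] / p[2] - y))
    return np.array(errs)

def is_valid_match(P_list, pts_2d, thresh_px=2.0):
    """A multi-image match is kept only if the triangulated point
    reprojects into every view within thresh_px pixels."""
    X = triangulate_dlt(P_list, pts_2d)
    return bool(np.all(reprojection_errors(P_list, pts_2d, X) < thresh_px))
```

A correspondence that survives this test in all overlapping views is accepted; the rest are discarded before the bundle-adjustment step.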


2018 ◽  
Vol 30 (4) ◽  
pp. 660-670 ◽  
Author(s):  
Akira Shibata ◽  
Yukari Okumura ◽  
Hiromitsu Fujii ◽  
Atsushi Yamashita ◽  
Hajime Asama ◽  
...  

Structure from motion is a three-dimensional (3D) reconstruction method that uses one camera. However, the absolute scale of objects cannot be recovered by the conventional structure from motion method. In our previous studies, to solve this problem by using refraction, we proposed a scale-reconstructible structure from motion method. In our measurement system, a refractive plate is fixed in front of a camera and images are captured through this plate. To overcome the geometrical constraints, we derived an extended essential equation by theoretically considering the effect of refraction. By applying this formula to 3D measurements, the absolute scale of an object could be obtained. However, this method was verified only by a simulation under ideal conditions, that is, without taking into account real phenomena such as noise or occlusion, which inevitably occur in actual measurements. In this study, to robustly apply this method to actual measurements with real images, we introduce a novel bundle adjustment method based on the refraction effect. This optimization technique can reduce the 3D reconstruction errors caused by measurement noise in actual scenes. In particular, we propose a new error function that accounts for the effect of refraction. By minimizing the value of this error function, accurate 3D reconstruction results can be obtained. To evaluate the effectiveness of the proposed method, experiments using both simulations and real images were conducted. The simulation results show that the proposed method is theoretically accurate, and the experiments on real images show that it is effective for real 3D measurements.
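The optimization at the core of bundle adjustment, minimizing a reprojection-error function over camera parameters, can be illustrated with a toy sketch. This assumes a pinhole camera with identity rotation and refines only the translation; the paper's refraction-aware error term is not reproduced here:

```python
import numpy as np
from scipy.optimize import least_squares

def project(K, t, X):
    """Pinhole projection with identity rotation (toy setup)."""
    p = K @ (X + t)
    return p[:2] / p[2]

def residuals(t, K, pts_3d, pts_2d):
    # Stacked reprojection residuals over all observed points.
    return np.concatenate([project(K, t, X) - u
                           for X, u in zip(pts_3d, pts_2d)])

def refine_translation(K, pts_3d, pts_2d, t0):
    """Toy bundle-adjustment step: refine one camera translation by
    minimizing the sum of squared reprojection errors."""
    return least_squares(residuals, t0, args=(K, pts_3d, pts_2d)).x
```

A full bundle adjustment optimizes rotations, translations, and the 3D points jointly, and the proposed method additionally models the ray bending at the refractive plate inside the error function.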


Author(s):  
Lei Zhou ◽  
Zixin Luo ◽  
Mingmin Zhen ◽  
Tianwei Shen ◽  
Shiwei Li ◽  
...  

2020 ◽  
Vol 12 (3) ◽  
pp. 351 ◽  
Author(s):  
Seyyed Meghdad Hasheminasab ◽  
Tian Zhou ◽  
Ayman Habib

Imagery acquired by unmanned aerial vehicles (UAVs) has been widely used for three-dimensional (3D) reconstruction/modeling in various digital agriculture applications, such as phenotyping, crop monitoring, and yield prediction. 3D reconstruction from well-textured UAV-based images has matured, and the user community has access to several commercial and open-source tools that provide accurate products at a high level of automation. However, in some applications, such as digital agriculture, these approaches are not always able to produce reliable/complete products due to repetitive image patterns. The main limitation of these techniques is their inability to establish a sufficient number of correctly matched features among overlapping images, causing incomplete and/or inaccurate 3D reconstruction. This paper provides two structure from motion (SfM) strategies, which use trajectory information provided by an onboard survey-grade global navigation satellite system/inertial navigation system (GNSS/INS) and system calibration parameters. The main difference between the proposed strategies is that the first one, denoted as partially GNSS/INS-assisted SfM, implements the four stages of an automated triangulation procedure, namely, image matching, relative orientation parameters (ROPs) estimation, exterior orientation parameters (EOPs) recovery, and bundle adjustment (BA). The second strategy, denoted as fully GNSS/INS-assisted SfM, removes the EOPs estimation step while introducing a random sample consensus (RANSAC)-based strategy for removing matching outliers before the BA stage. Both strategies modify the image matching by restricting the search space for conjugate points. They also implement a linear procedure for ROP refinement. Finally, they use the GNSS/INS information in modified collinearity equations for a simpler BA procedure that could be used for refining system calibration parameters.
Eight datasets over six agricultural fields are used to evaluate the performance of the developed strategies. In comparison with a traditional SfM framework and Pix4D Mapper Pro, the proposed strategies are able to generate denser and more accurate 3D point clouds as well as orthophotos without any gaps.
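The RANSAC stage of the fully GNSS/INS-assisted strategy, rejecting matches that disagree with the consensus motion, can be sketched generically. The translation-only motion model below is a simplified stand-in for the paper's trajectory-informed outlier test, and all names are illustrative:

```python
import numpy as np

def ransac_translation(src, dst, thresh=3.0, iters=200, seed=0):
    """Generic RANSAC sketch: model a pure 2D translation between matched
    keypoints and flag matches that disagree with the consensus as outliers."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        i = rng.integers(len(src))           # minimal sample: one match
        t = dst[i] - src[i]                  # candidate translation
        err = np.linalg.norm(dst - (src + t), axis=1)
        inliers = err < thresh               # consensus set for this model
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Only matches flagged as inliers are passed on to the bundle-adjustment stage; a real implementation would use a motion model consistent with the GNSS/INS-derived geometry rather than a pure translation.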


2016 ◽  
Vol 11 (2) ◽  
pp. 51-59 ◽  
Author(s):  
Young-Sik Shin ◽  
Yeong-jun Lee ◽  
Hyun-Taek Choi ◽  
Ayoung Kim

2012 ◽  
Vol 229-231 ◽  
pp. 2294-2297
Author(s):  
Zi Ming Xiong ◽  
Gang Wan

In this paper, we propose an approach to automatic large-scene 3D reconstruction based on UAV image sequences. In this method, Harris feature points and SIFT feature vectors are used to extract image features and achieve image matching; a quasi-perspective projection model and factorization are employed to calibrate the uncalibrated image sequences automatically; efficient suboptimal solutions to the optimal triangulation problem are applied to obtain the coordinates of the 3D points; a quasi-dense diffusion algorithm is used to densify the 3D points; bundle adjustment is applied to improve the precision of the 3D points; and Poisson surface reconstruction is used to mesh the 3D points. This paper introduces the theory and technology of computer vision into large-scene 3D reconstruction, provides a new way to construct 3D scenes, and offers a new direction for the application of UAV image sequences.
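The matching step that follows SIFT extraction is commonly implemented with Lowe's ratio test: a descriptor's nearest neighbour in the other image is accepted only if it is clearly better than the second-nearest. A minimal pure-numpy sketch, not the paper's implementation:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Ratio-test matching sketch: for each descriptor in image 1, accept
    the nearest neighbour in image 2 only if its distance is below `ratio`
    times the second-nearest distance (rejects ambiguous matches)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dists)[:2]         # nearest and second-nearest
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches
```

The accepted pairs then feed the calibration and triangulation stages of the pipeline.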


Author(s):  
M. Vlachos ◽  
L. Berger ◽  
R. Mathelier ◽  
P. Agrafiotis ◽  
D. Skarlatos

<p><strong>Abstract.</strong> This paper investigates whether and how the selection of SfM-MVS software affects the 3D reconstruction of submerged archaeological sites. Specifically, Agisoft Photoscan, VisualSFM, SURE, 3D Zephyr and Reality Capture were used and evaluated according to their performance in 3D reconstruction, using specific metrics over the reconstructed underwater scenes. It must be clarified that the scope of this study is not to evaluate the specific algorithms or steps that the various software packages use, but to evaluate the final results, specifically the generated 3D point clouds. To address the above research questions, a dataset from an ancient shipwreck, lying 45 meters below sea level, is used. The dataset is composed of 19 images with a very small camera-to-object distance (1 meter) and 42 images with a larger camera-to-object distance (3 meters). Using a common bundle adjustment for all 61 images, a reference point cloud generated from the close-range dataset is compared with the point clouds of the far-range dataset generated by the different photogrammetric packages. Following that, the number of total points, cloud-to-cloud distances, surface roughness, surface density and a combined 3D metric were compared to evaluate which package performed best.</p>
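The cloud-to-cloud distance used to score each package's point cloud against the reference is typically a nearest-neighbour distance. A minimal sketch with a k-d tree (illustrative only; dedicated tools such as CloudCompare compute more refined variants):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(reference, evaluated):
    """Cloud-to-cloud comparison sketch: for every point of the evaluated
    cloud, find the distance to its nearest neighbour in the reference
    cloud, and summarize with the mean and maximum distance."""
    d, _ = cKDTree(reference).query(evaluated)
    return d.mean(), d.max()
```

Lower summary distances indicate a point cloud closer to the reference reconstruction.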

