An Approach to Automatic Great-Scene 3D Reconstruction Based on UAV Sequence Images

2012 ◽  
Vol 229-231 ◽  
pp. 2294-2297
Author(s):  
Zi Ming Xiong ◽  
Gang Wan

In this paper, we propose an approach to automatic great-scene 3D reconstruction based on UAV sequence images. In this method, Harris feature points and SIFT feature vectors are used to extract image features and match the images; a quasi-perspective projection model and factorization are employed to calibrate the uncalibrated image sequence automatically; efficient suboptimal solutions to the optimal triangulation problem are applied to obtain the coordinates of the 3D points; a quasi-dense diffusion algorithm is used to densify the 3D points; bundle adjustment is applied to improve the precision of the 3D points; and Poisson surface reconstruction is used to mesh the 3D points. This paper introduces the theory and technology of computer vision into great-scene 3D reconstruction, provides a new way to construct 3D scenes, and offers a new way of thinking about the application of UAV sequence images.
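A minimal sketch of the feature extraction and matching stage, assuming OpenCV is available. The paper pairs Harris feature points with SIFT descriptors; for brevity this sketch lets OpenCV's SIFT detector supply both keypoints and descriptors, and uses Lowe's ratio test to reject ambiguous matches. Function and parameter names are illustrative only.

```python
# Sketch of pairwise feature matching between two UAV frames (assumed file paths).
import cv2

def match_pair(path_a, path_b, ratio=0.75):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    # Two nearest neighbours per descriptor; keep a match only if it is clearly
    # better than the second-best candidate (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < ratio * n.distance]
    pts_a = [kp_a[m.queryIdx].pt for m in good]
    pts_b = [kp_b[m.trainIdx].pt for m in good]
    return pts_a, pts_b
```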

10.14311/1225 ◽  
2010 ◽  
Vol 50 (4) ◽  
Author(s):  
P. Faltin ◽  
A. Behrens

The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed with a rigid endoscope, which is usually guided close to the bladder wall. This results in a very limited field of view; the difficulty of navigation is aggravated by the use of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, the paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC algorithm. These matched point pairs are then used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. These points are used to generate a projective 3D reconstruction of the scene, and provide the first step towards further metric reconstructions.
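An illustrative sketch of the epipolar-geometry stabilisation step, not the authors' modified RANSAC: given matched point lists (for example from a SURF-based matcher, which in OpenCV requires the contrib build), the fundamental matrix is estimated with plain RANSAC and outlier correspondences are discarded.

```python
# Sketch: estimate the epipolar geometry of an image pair and keep only inliers.
import cv2
import numpy as np

def epipolar_filter(pts_a, pts_b, threshold=1.0):
    pts_a = np.float32(pts_a)
    pts_b = np.float32(pts_b)
    # RANSAC fits the fundamental matrix and flags correspondences inconsistent with it.
    F, inlier_mask = cv2.findFundamentalMat(
        pts_a, pts_b, cv2.FM_RANSAC,
        ransacReprojThreshold=threshold, confidence=0.99)
    if F is None:                      # estimation can fail with too few matches
        return None, pts_a, pts_b
    keep = inlier_mask.ravel().astype(bool)
    return F, pts_a[keep], pts_b[keep]
```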


Author(s):  
M. Peng ◽  
W. Wan ◽  
Y. Xing ◽  
Y. Wang ◽  
Z. Liu ◽  
...  

An RGB-D camera allows the capture of depth and color information at high data rates, which makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data and visual texture data is established based on the imaging principle of the RGB-D camera; then an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, once the interior and exterior orientation elements of the RGB image sequence are available, dense matching is performed with the CMPMVS tool. Finally, using the registration parameters obtained from ICP, the 3D scene from the RGB images can be registered well to the 3D scene from the depth images, and the fused point cloud can be obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrate the feasibility of the proposed method.
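A minimal sketch of the projection relationship between depth pixels and 3D space, assuming a pinhole model with intrinsics (fx, fy, cx, cy) and a colour image already registered to the depth frame; this is an illustration of the principle, not the authors' implementation.

```python
# Sketch: back-project an RGB-D frame to a coloured point cloud.
import numpy as np

def depth_to_colored_cloud(depth, rgb, fx, fy, cx, cy):
    """depth: HxW array in metres; rgb: HxWx3 array registered to the depth frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # back-project each pixel along its viewing ray
    y = (v - cy) * z / fy
    valid = z > 0                  # drop pixels without a depth return
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = rgb[valid]
    return points, colors
```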


2012 ◽  
Vol 38 (9) ◽  
pp. 1428 ◽  
Author(s):  
Xin LIU ◽  
Feng-Mei SUN ◽  
Zhan-Yi HU

Open Physics ◽  
2018 ◽  
Vol 16 (1) ◽  
pp. 1033-1045
Author(s):  
Guodong Zhou ◽  
Huailiang Zhang ◽  
Raquel Martínez Lucas

Abstract Given the excellent ability of the SURF operator to describe local image features, but its weakness in describing global features, a compressed sensing image restoration algorithm based on an improved SURF operator is proposed. The SURF feature vector set of the image is extracted, the vector set is reduced to a single high-dimensional feature vector using a histogram algorithm, and the HSV color histogram of the image is then extracted. An MSA image decomposition algorithm is used to obtain a sparse representation of the image feature vectors. A total variation curvature diffusion method and a Bayesian weighting method perform image restoration on the data-smoothing feature and the local similarity feature of the texture part, respectively. A compressed sensing image restoration model is obtained using the Schatten-p norm, and color supplementation is performed on the model. The compressed sensing image is solved iteratively by an alternating optimization method, and the image is thereby restored. The experimental results show that the proposed algorithm has good restoration performance: the restored image has finer edge and texture structure and better visual quality.
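A hedged sketch of two of the feature-description steps: pooling a set of SURF descriptors into one fixed-length vector with a histogram, and extracting an HSV colour histogram with OpenCV. The binning scheme and value ranges here are illustrative assumptions, not the paper's.

```python
# Sketch: global descriptors assembled from local SURF features and HSV colour.
import cv2
import numpy as np

def descriptor_histogram(descriptors, bins=32):
    # Pool all descriptor components into a single normalised histogram
    # (SURF descriptors are unit-normalised, so components lie roughly in [-1, 1]).
    hist, _ = np.histogram(descriptors.ravel(), bins=bins, range=(-1.0, 1.0))
    return hist / max(hist.sum(), 1)

def hsv_histogram(bgr_image, bins=(8, 8, 8)):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()
```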


Author(s):  
J. Unger ◽  
F. Rottensteiner ◽  
C. Heipke

A hybrid bundle adjustment is presented that allows for the integration of a generalised building model into the pose estimation of image sequences. These images are captured by an Unmanned Aerial System (UAS) equipped with a camera flying between the buildings. The relation between the building model and the images is described by distances between the object coordinates of the tie points and the building model planes. Relations are found by a simple 3D distance criterion and are modelled as fictitious observations in a Gauss-Markov adjustment. The coordinates of the model vertices are part of the adjustment as directly observed unknowns, which allows for changes in the model. Results of first experiments using a synthetic and a real image sequence demonstrate improvements of the image orientation in comparison to an adjustment without the building model, but also reveal limitations of the current state of the method.
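A minimal sketch, under assumed notation, of how such tie-point/plane relations can be formed: a simple 3D distance criterion assigns tie points to nearby model planes, and the signed point-to-plane distance is the residual of the corresponding fictitious observation (with an observed value of zero). Names and thresholds are hypothetical.

```python
# Sketch: point-to-plane relations as fictitious observations.
import numpy as np

def point_plane_residual(point, plane_normal, plane_point):
    """Signed distance of a tie point to a building-model plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return float(np.dot(n, point - plane_point))

def assign_planes(tie_points, planes, max_dist=0.5):
    # Simple 3D distance criterion: keep (point, plane) pairs closer than max_dist.
    relations = []
    for i, p in enumerate(tie_points):
        for j, (normal, anchor) in enumerate(planes):
            d = point_plane_residual(p, normal, anchor)
            if abs(d) < max_dist:
                relations.append((i, j, d))
    return relations
```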


2011 ◽  
Vol 383-390 ◽  
pp. 5193-5199 ◽  
Author(s):  
Jian Ying Yuan ◽  
Xian Yong Liu ◽  
Zhi Qiang Qiu

In an optical measuring system with a handheld digital camera, image point matching is very important for 3-dimensional (3D) reconstruction. Traditional matching algorithms are usually based on epipolar geometry or multiple base lines: mistaken matching points cannot be eliminated by epipolar geometry alone, and many matching points are lost with multi-base-line methods. In this paper, a robust algorithm is presented to eliminate mistakenly matched feature points in the process of 3D reconstruction from multiple images. The algorithm includes three steps: (1) pre-matching the feature points using constraints of epipolar geometry and image topological structure; (2) eliminating the mistaken matching points by the principle of triangulation over multiple images; (3) refining the camera external parameters by bundle adjustment. After the external parameters of every image are refined, steps (1) to (3) are repeated until all the feature points have been matched. Comparative experiments with real image data show that mistaken matching feature points can be effectively eliminated while nearly no matching points are lost, a better performance than that of traditional matching algorithms.
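An illustrative sketch of the epipolar constraint used in pre-matching (step 1): a candidate correspondence is kept only if the point in the second image lies close to the epipolar line induced by the fundamental matrix F, which is assumed to be known or estimated beforehand. The tolerance and helper names are assumptions for illustration.

```python
# Sketch: filter candidate correspondences by distance to the epipolar line.
import numpy as np

def epipolar_distance(F, pt_a, pt_b):
    """Distance of pt_b from the epipolar line of pt_a (points given as (x, y))."""
    xa = np.array([pt_a[0], pt_a[1], 1.0])
    xb = np.array([pt_b[0], pt_b[1], 1.0])
    line = F @ xa                              # epipolar line in the second image
    return abs(line @ xb) / np.hypot(line[0], line[1])

def prematch(F, candidate_pairs, tol=1.5):
    return [(a, b) for a, b in candidate_pairs if epipolar_distance(F, a, b) < tol]
```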

