3D Reconstruction with Multi-view Texture Mapping

Author(s):  
Xiaodan Ye ◽  
Lianghao Wang ◽  
Dongxiao Li ◽  
Ming Zhang
2021 ◽  
Vol 12 (1) ◽  
pp. 206-218
Author(s):  
Victor Gouveia de M. Lyra ◽  
Adam H. M. Pinto ◽  
Gustavo C. R. Lima ◽  
João Paulo Lima ◽  
Veronica Teichrieb ◽  
...  

With growing access to faster computers and more powerful cameras, the 3D reconstruction of objects has become a major topic of research and demand. It is widely applied in creating virtual environments, building object models, and other activities. One technique for obtaining 3D geometry is photogrammetry, which maps objects and scenes using only images. However, this process is computationally costly and can be very time-consuming for large datasets. This paper proposes a robust, efficient reconstruction pipeline with a low batch-processing runtime and permissively licensed code, which can even be commercialized without the need to keep the code open. We combine an improved structure-from-motion algorithm with a recurrent multi-view stereo reconstruction, and use the Point Cloud Library for normal estimation, surface reconstruction, and texture mapping. We compare our results with state-of-the-art techniques on public benchmarks and on our own datasets. The results show a 69.4% decrease in average execution time with high reconstruction quality, at the cost of requiring more images to achieve a complete reconstruction.
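The abstract names the Point Cloud Library for normal estimation and surface reconstruction. As a rough illustration of those two steps (not the authors' implementation), here is a minimal Python sketch using Open3D as a stand-in; the file name and parameters are assumptions.

```python
import open3d as o3d

# Illustrative only: Open3D stands in for the PCL steps described in the abstract.
pcd = o3d.io.read_point_cloud("dense_cloud.ply")   # dense MVS point cloud (assumed input)

# Normal estimation from local neighbourhoods, then consistent orientation
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=30)

# Surface reconstruction (Poisson) from the oriented point cloud
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("surface_mesh.ply", mesh)
```

The resulting mesh is what a texture mapping stage would then wrap with the source images.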


Author(s):  
J. Xiong ◽  
S. Zhong ◽  
L. Zheng

This paper presents an automatic three-dimensional reconstruction method based on multi-view stereo vision for the Mogao Grottoes. 3D digitization techniques have been used in cultural heritage conservation and replication over the past decade, especially methods based on binocular stereo vision. However, mismatched points are inevitable in traditional binocular stereo matching because binocular images contain repeated or similar features. To greatly reduce the probability of mismatching and improve measurement precision, a portable four-camera photographic measurement system is used for 3D modelling of a scene. The four cameras of the measurement system form six binocular systems with baselines of different lengths, adding extra matching constraints and offering multiple measurements. A matching error based on the epipolar constraint is introduced to remove mismatched points. Finally, an accurate point cloud is generated by multi-image matching and sub-pixel interpolation, and Delaunay triangulation and texture mapping are performed to obtain the 3D model of the scene. The method has been tested on the 3D reconstruction of several scenes of the Mogao Grottoes, and the good results verify its effectiveness.
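The epipolar-constraint matching error the abstract refers to can be illustrated as the distance from a candidate match to the epipolar line induced by the fundamental matrix between two of the cameras. A minimal NumPy sketch under assumed variable names (the paper's exact error definition and threshold are not reproduced):

```python
import numpy as np

def epipolar_errors(pts1, pts2, F):
    """Point-to-epipolar-line distance for candidate matches.

    pts1, pts2: (N, 2) arrays of corresponding pixels in the two views.
    F: 3x3 fundamental matrix relating the views.
    """
    h1 = np.hstack([pts1, np.ones((len(pts1), 1))])   # homogeneous points, view 1
    h2 = np.hstack([pts2, np.ones((len(pts2), 1))])   # homogeneous points, view 2
    lines = (F @ h1.T).T                               # epipolar lines in image 2
    num = np.abs(np.sum(lines * h2, axis=1))
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)
    return num / den                                   # distance in pixels

def filter_matches(pts1, pts2, F, max_err=1.0):
    """Keep only matches whose epipolar error is below a pixel threshold (assumed)."""
    keep = epipolar_errors(pts1, pts2, F) < max_err
    return pts1[keep], pts2[keep]
```

With six camera pairs, a match would have to stay consistent across several such constraints, which is what makes the four-camera arrangement more robust than a single binocular pair.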


2021 ◽  
Vol 24 (3) ◽  
pp. 485-504
Author(s):  
Alexander Sergeevich Tarasov ◽  
Vlada Vladimirovna Kugurakova

This article focuses on improving the 3D reconstruction of a human model from a single image using the pixel-aligned implicit function (PIFu) presented by Facebook Research. The drawbacks of the method, associated with limitations on the quality of the input image, are identified; recommendations are given to avoid its incorrect operation; and approaches to improving the original model are proposed that increase the identity of the resulting model by a factor of 1.33. We also develop a strategy for subsequent texture mapping and for implementing a set of animations.


Author(s):  
Yanping Fu ◽  
Qingan Yan ◽  
Long Yang ◽  
Jie Liao ◽  
Chunxia Xiao

2021 ◽  
Vol 11 (17) ◽  
pp. 7961
Author(s):  
Ning Lv ◽  
Chengyu Wang ◽  
Yujing Qiao ◽  
Yongde Zhang

The 3D printing process lacks real-time inspection: it remains an open-loop manufacturing process, and its molding accuracy is low. To meet the applicability requirements of 3D printing process detection, a matching fusion method based on the 3D reconstruction theory of machine vision is proposed. The fast nearest neighbor (FNN) method is used to search for matching point pairs. The matching point information of the FFT-SIFT algorithm, based on the fast Fourier transform, is superimposed on the matching point information of the AKAZE algorithm and fused to obtain denser feature point matches and richer edge feature information. Combining the incremental SfM algorithm with the global SfM algorithm, an integrated SfM sparse point cloud reconstruction method is developed. The dense point cloud is reconstructed by the PMVS algorithm, the point cloud model is meshed by Delaunay triangulation, and the accurate 3D reconstruction model is then obtained by texture mapping. The experimental results show that, compared with the classical SIFT algorithm, the speed of feature extraction increases by 25.0%, the number of feature matches increases by 72%, and the relative error of the 3D reconstruction results is about 0.014%, which is close to the theoretical error.
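As an illustration of fusing the correspondences of two detectors, a minimal OpenCV sketch that merges Lowe-ratio-filtered SIFT and AKAZE matches is given below; the FFT acceleration and the paper's exact fusion rule are not shown, and the function and parameter names are assumptions.

```python
import cv2
import numpy as np

def fused_matches(img1, img2, ratio=0.75):
    """Merge SIFT and AKAZE correspondences between two grayscale images."""
    # SIFT (float descriptors) matched with a FLANN KD-tree
    sift = cv2.SIFT_create()
    k1s, d1s = sift.detectAndCompute(img1, None)
    k2s, d2s = sift.detectAndCompute(img2, None)
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    sift_knn = flann.knnMatch(d1s, d2s, k=2)

    # AKAZE (binary descriptors) matched with Hamming distance
    akaze = cv2.AKAZE_create()
    k1a, d1a = akaze.detectAndCompute(img1, None)
    k2a, d2a = akaze.detectAndCompute(img2, None)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    akaze_knn = bf.knnMatch(d1a, d2a, k=2)

    pts1, pts2 = [], []
    for knn, kp1, kp2 in ((sift_knn, k1s, k2s), (akaze_knn, k1a, k2a)):
        for m, n in knn:
            if m.distance < ratio * n.distance:   # Lowe ratio test
                pts1.append(kp1[m.queryIdx].pt)
                pts2.append(kp2[m.trainIdx].pt)
    return np.float32(pts1), np.float32(pts2)
```

The merged point lists can then feed the SfM stage exactly as a single detector's matches would, only denser.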


2010 ◽  
Vol 20-23 ◽  
pp. 487-492 ◽  
Author(s):  
Ze Tao Jiang ◽  
Qing Hui Xiao ◽  
Ling Hong Zhu

A new feature point extraction method is presented which treats pixels as hexagonal. The method effectively increases the density of image pixels, expands the dynamic range of feature point extraction, increases the number of features, and resolves the reconstruction deformation caused by a lack of feature points. First, the method is applied to the SIFT operator for feature extraction, and a dense stereo matching method is used to find the matching points of the image sequence. Second, RANSAC is used to eliminate mismatches, and the three-dimensional coordinates of the corresponding points are calculated from the camera matrices. Finally, the 3D model is established through a partition-merging triangulation method and texture mapping. Experimental results show that this method obtains more accurate matched pairs and achieves a satisfactory 3D reconstruction.
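The RANSAC filtering and triangulation steps described here can be sketched with standard OpenCV calls; this is a generic illustration under assumed inputs, not the paper's partition-merging method.

```python
import cv2
import numpy as np

def ransac_and_triangulate(pts1, pts2, P1, P2):
    """pts1, pts2: (N, 2) float32 putative matches between two views.
    P1, P2: 3x4 camera projection matrices (assumed known from calibration)."""
    # RANSAC on the fundamental matrix rejects mismatched pairs
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    inl = inlier_mask.ravel().astype(bool)
    p1, p2 = pts1[inl], pts2[inl]

    # Linear triangulation of the surviving correspondences
    X_h = cv2.triangulatePoints(P1, P2, p1.T, p2.T)   # 4 x N homogeneous
    X = (X_h[:3] / X_h[3]).T                           # N x 3 Euclidean points
    return X, p1, p2
```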


Author(s):  
Dimitrios S. Alexiadis ◽  
Dimitrios Zarpalas ◽  
Petros Daras

Author(s):  
Peng Li ◽  
Ming Tang ◽  
Ke Ding ◽  
Xiaojun Wu ◽  
Yunhui Liu

In minimally invasive surgery, the primary surgeon requires an assistant to hold an endoscope to obtain visual information from the body cavity. However, the two-dimensional images acquired by endoscopy lack depth information, and future automated robotic surgeries will need three-dimensional information about the target area. This paper presents a method to reconstruct a 3D model of soft tissue from image sequences acquired by a robotic camera holder. In this algorithm, a sparse reconstruction module based on SIFT and SURF features is designed, and a multilevel feature matching strategy is proposed to improve efficiency. To recover a realistic soft-tissue model, a complete 3D reconstruction algorithm is implemented, including densification, meshing of the point cloud, and texture mapping; during the texture reconstruction stage, a mathematical model is proposed to repair texture seams. To verify the feasibility of the proposed method, a collaborative manipulator (AUBO i5) with a mounted camera is used to mimic an assistant surgeon holding an endoscope. To satisfy the pivotal constraint imposed by the remote center of motion (RCM), a kinematic algorithm for the manipulator is implemented, and the primary surgeon is provided with a voice-control interface to steer the camera. An experiment shows a 3D reconstruction of soft tissue obtained with the proposed method and the manipulator, indicating that the manipulator can work as a robotic assistant that holds a camera and provides abundant visual information during surgery.
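The remote-center-of-motion constraint mentioned above fixes the endoscope shaft so that it always passes through the incision point. A minimal geometric sketch of that idea follows; it is not the authors' kinematic algorithm, and the pivot, direction, and depth are assumed inputs.

```python
import numpy as np

def camera_pose_through_pivot(pivot, direction, depth):
    """Pose of an endoscope tip whose shaft passes through a fixed pivot (RCM).

    pivot:     3-vector, incision point the shaft must always intersect.
    direction: 3-vector, desired shaft direction (pointing into the cavity).
    depth:     insertion depth of the tip beyond the pivot, same units as pivot.
    Returns (tip_position, R), where the third column of R is the shaft/optical axis.
    """
    z = np.asarray(direction, float)
    z /= np.linalg.norm(z)                       # shaft (optical) axis
    tip = np.asarray(pivot, float) + depth * z   # tip stays on the line through the pivot

    # Build an orthonormal camera frame around the shaft axis
    ref = np.array([0.0, 0.0, 1.0]) if abs(z[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    x = np.cross(ref, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.column_stack([x, y, z])
    return tip, R
```

A manipulator controller would solve inverse kinematics for this tip pose at each voice command, keeping the shaft on the pivot while the viewing direction changes.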


2020 ◽  
Vol 12 (16) ◽  
pp. 2521
Author(s):  
Shirui Hu ◽  
Zhiyuan Li ◽  
Shaohua Wang ◽  
Mingyao Ai ◽  
Qingwu Hu

3D reconstruction of cultural artifacts has great potential in digital heritage documentation and protection. Choosing the proper images for texture mapping from multi-view images is a major challenge for high-precision, high-quality 3D reconstruction of cultural artifacts. In this study, a texture selection approach for the 3D reconstruction of cultural artifacts using multi-view dense matching is proposed that considers both geometric and radiometric quality. First, a Markov random field (MRF) method is presented to select the images with the best viewing angle from the texture image sets. Then, an image radiation quality evaluation model based on a multiscale Tenengrad definition measure and brightness detection is proposed to eliminate fuzzy and overexposed textures. Finally, the selected textures are mapped to the 3D model using the mapping parameters of the multi-view dense matching, and semi-automatic texture mapping is performed on the 3DMax MudBox platform. Experimental results with two typical cultural artifact data sets (bronze wares and porcelain) show that the proposed method can reduce abnormal exposure and fuzzy images, yielding high-quality 3D models of cultural artifacts.
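The Tenengrad definition measure and brightness detection mentioned above can be illustrated at a single scale with OpenCV; the multiscale weighting and the paper's actual thresholds are not reproduced, so the cutoff values below are assumptions.

```python
import cv2
import numpy as np

def tenengrad_sharpness(gray):
    """Tenengrad focus measure: mean squared Sobel gradient magnitude."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return np.mean(gx ** 2 + gy ** 2)

def is_usable_texture(img_bgr, sharp_thresh=100.0, bright_range=(40, 220)):
    """Flag fuzzy or badly exposed candidate texture images (thresholds assumed)."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    sharp = tenengrad_sharpness(gray)
    brightness = gray.mean()
    return sharp >= sharp_thresh and bright_range[0] <= brightness <= bright_range[1]
```

Images failing this test would simply be excluded from the MRF view-selection stage, so only sharp, well-exposed photographs compete to texture each face.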


2021 ◽  
Vol 13 (17) ◽  
pp. 3458
Author(s):  
Chong Yang ◽  
Fan Zhang ◽  
Yunlong Gao ◽  
Zhu Mao ◽  
Liang Li ◽  
...  

With the progress of photogrammetry and computer vision technology, three-dimensional (3D) reconstruction using aerial oblique images has been widely applied in urban modelling and smart city applications. However, state-of-the-art image-based automatic 3D reconstruction methods cannot effectively handle the geometric deformation and incorrect texture mapping caused by moving cars in a city. This paper proposes a method to prevent the influence of moving cars on 3D modelling by recognizing them and combining the recognition results with a photogrammetric 3D modelling procedure. Through car detection with a deep learning method and multiview geometry constraints, the movement state of each car is analysed, and appropriate preprocessing is applied to the geometric model generation and texture mapping steps of the 3D reconstruction pipeline. First, the standard Mask R-CNN object detection method is applied to detect cars in the oblique images. Then, a detected car and its corresponding image patches, located by the geometry constraints in the other views, are used to identify whether the car is moving. Finally, the geometry and texture information corresponding to a moving car is processed according to its movement state. Experiments on three different urban datasets demonstrate that the proposed method effectively recognizes and removes moving cars and can repair the geometric deformation and erroneous texture mapping they cause. In addition, the method can be applied to eliminate other moving objects in 3D modelling applications.
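One simple way to picture the multiview geometry check is to project a car's approximate 3D position into the other views and test whether a detection reappears near the prediction; a static car should, a moving one generally will not. The sketch below is a hypothetical simplification under assumed inputs, not the paper's actual criterion.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X (3-vector) with a 3x4 camera matrix P to pixels."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def looks_static(X_car, detections_by_view, proj_matrices, tol_px=30.0):
    """Crude movement test across views.

    X_car:              approximate 3D position of the detected car.
    detections_by_view: list of (M_i, 2) arrays of car-detection centres per view.
    proj_matrices:      list of 3x4 projection matrices, same order as detections.
    """
    for P, dets in zip(proj_matrices, detections_by_view):
        if len(dets) == 0:
            return False                      # no car seen where one was predicted
        pred = project(P, X_car)
        if np.min(np.linalg.norm(dets - pred, axis=1)) > tol_px:
            return False                      # nearest detection too far from prediction
    return True
```

Cars flagged as moving would then be masked out of both the geometry generation and the texture mapping stages.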

