Visual enhancement of single-view 3D point cloud reconstruction

Author(s):  
Guiju Ping ◽  
Mahdi Abolfazli Esfahani ◽  
Jiaying Chen ◽  
Han Wang
2021 ◽  
Vol 34 (1) ◽  
Author(s):  
Peng Jin ◽  
Shaoli Liu ◽  
Jianhua Liu ◽  
Hao Huang ◽  
Linlin Yang ◽  
...  

Abstract: In recent years, addressing ill-posed problems by leveraging prior knowledge contained in databases via learning techniques has gained much attention. In this paper, we focus on complete three-dimensional (3D) point cloud reconstruction based on a single red-green-blue (RGB) image, a task that cannot be approached using classical reconstruction techniques. For this purpose, we use an encoder-decoder framework to encode the RGB information in latent space and to predict the 3D structure of the considered object from different viewpoints. The individual predictions are combined to yield a common representation that is used in a module combining camera pose estimation and rendering, thereby achieving differentiability with respect to the imaging process and the camera pose, and enabling optimization of the two-dimensional prediction error for novel viewpoints. Thus, our method allows end-to-end training and does not require supervision based on additional ground-truth (GT) mask annotations or ground-truth camera pose annotations. Our evaluation on synthetic and real-world data demonstrates the robustness of our approach to appearance changes and self-occlusions: it outperforms current state-of-the-art methods in terms of accuracy, density, and model completeness.
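The core of the self-supervision described above is a 2D reprojection error: the predicted 3D points are projected into a novel viewpoint using an estimated camera pose, and the discrepancy with the observed 2D positions is minimized. A minimal sketch of that error term is shown below, assuming a standard pinhole camera model; all names, shapes, and parameters are illustrative and not the authors' implementation.

```python
import numpy as np

def reprojection_error(points_3d, R, t, K, observed_2d):
    """Mean 2D error between projected predictions and observations.

    points_3d:   (N, 3) predicted point cloud (hypothetical output
                 of the encoder-decoder)
    R, t:        estimated camera rotation (3, 3) and translation (3,)
    K:           (3, 3) camera intrinsics
    observed_2d: (N, 2) corresponding pixel positions in the novel view
    """
    cam = points_3d @ R.T + t          # world frame -> camera frame
    proj = cam @ K.T                   # apply intrinsics
    uv = proj[:, :2] / proj[:, 2:3]    # perspective divide -> pixels
    return np.mean(np.linalg.norm(uv - observed_2d, axis=1))
```

Because every step (rigid transform, matrix multiply, perspective divide) is differentiable, gradients of this error can flow back to both the predicted geometry and the estimated pose, which is what makes end-to-end training without GT pose or mask annotations possible.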


2021 ◽  
Vol 12 (1) ◽  
pp. 395
Author(s):  
Ying Wang ◽  
Ki-Young Koo

The 3D point cloud reconstruction from photos taken by an unmanned aerial vehicle (UAV) is a promising tool for monitoring and managing risks of cut-slopes. However, surface changes on cut-slopes are likely to be hidden by seasonal vegetation variations. This paper proposes a vegetation removal method for 3D reconstructed point clouds using (1) a 2D image segmentation deep learning model and (2) projection matrices available from photogrammetry. Each 3D point of a given point cloud is reprojected into image coordinates by the projection matrices, and the 2D image segmentation model determines whether it belongs to vegetation. The 3D points belonging to vegetation in the 2D images are deleted from the point cloud. The effort to build the 2D image segmentation model was significantly reduced by using U-Net with a dataset prepared by the colour index method complemented by manual trimming. The proposed method was applied to a cut-slope at Doam Dam in South Korea, and vegetation was removed successfully from the two point clouds of the cut-slope captured in winter and in summer. The M3C2 distance between the two vegetation-removed point clouds showed the feasibility of the proposed method as a tool to reveal actual change of cut-slopes without the effect of vegetation.
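The filtering step described above reduces to: project each 3D point through the photogrammetric projection matrix, look up the resulting pixel in the binary vegetation mask produced by the segmentation model, and drop the point if the pixel is labelled vegetation. A minimal sketch follows, assuming a single view with a 3×4 projection matrix; the function name and array shapes are illustrative, not the paper's code.

```python
import numpy as np

def remove_vegetation(points, P, veg_mask):
    """Drop 3D points whose reprojection falls on vegetation pixels.

    points:   (N, 3) array of 3D point coordinates
    P:        (3, 4) camera projection matrix from photogrammetry
    veg_mask: (H, W) boolean array, True where the 2D segmentation
              model labelled the pixel as vegetation
    """
    h, w = veg_mask.shape
    # Homogeneous coordinates, then perspective projection to pixels
    homo = np.hstack([points, np.ones((len(points), 1))])   # (N, 4)
    proj = homo @ P.T                                       # (N, 3)
    uv = proj[:, :2] / proj[:, 2:3]                         # pixel coords
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    # Points projecting outside the image are kept by default
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    is_veg = np.zeros(len(points), dtype=bool)
    is_veg[inside] = veg_mask[v[inside], u[inside]]
    return points[~is_veg]
```

With multiple photos, the same test can be repeated per view (each with its own projection matrix and mask) and the per-view decisions combined, e.g. by majority vote, before deleting points.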


2016 ◽  
Vol 8 (1) ◽  
pp. 26-31 ◽  
Author(s):  
Francesca Murgia ◽  
Cristian Perra ◽  
Daniele Giusto

2013 ◽  
Vol 64 (9) ◽  
pp. 1099-1114 ◽  
Author(s):  
Thomas Hoegg ◽  
Damien Lefloch ◽  
Andreas Kolb
