3D Point Cloud Reconstruction
Recently Published Documents


TOTAL DOCUMENTS: 35 (five years: 27)

H-INDEX: 5 (five years: 4)

Author(s): Guiju Ping, Mahdi Abolfazli Esfahani, Jiaying Chen, Han Wang

2021, Vol 12 (1), pp. 395
Author(s): Ying Wang, Ki-Young Koo

3D point cloud reconstruction from photos taken by an unmanned aerial vehicle (UAV) is a promising tool for monitoring and managing the risks of cut-slopes. However, surface changes on cut-slopes are likely to be hidden by seasonal vegetation variations. This paper proposes a vegetation removal method for 3D reconstructed point clouds that uses (1) a 2D image segmentation deep learning model and (2) the projection matrices available from photogrammetry. Each 3D point of a given point cloud is reprojected into image coordinates by the projection matrices, and the 2D segmentation model determines whether it belongs to vegetation. The 3D points classified as vegetation in the 2D images are deleted from the point cloud. The effort to build the 2D segmentation model was significantly reduced by using U-Net with a training dataset prepared by the colour-index method and complemented by manual trimming. The proposed method was applied to a cut-slope at Doam Dam in South Korea, where vegetation was successfully removed from two point clouds of the cut-slope captured in winter and in summer. The M3C2 distance between the two vegetation-removed point clouds demonstrated the feasibility of the proposed method as a tool for revealing actual changes of cut-slopes without the effect of vegetation.
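As an illustration, here is a minimal sketch of the reprojection-and-filter step described above, assuming a 3x4 projection matrix per photo and a binary vegetation mask per photo produced by the segmentation model; the function name and array layout are illustrative, not the authors' code.

import numpy as np

def remove_vegetation(points, projections, masks):
    """Drop 3D points that any photo's segmentation labels as vegetation.

    points:      (N, 3) array of 3D point coordinates
    projections: list of 3x4 camera projection matrices (one per photo)
    masks:       list of 2D boolean arrays, True where a pixel is vegetation
    """
    homog = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    keep = np.ones(len(points), dtype=bool)
    for P, mask in zip(projections, masks):
        uvw = homog @ P.T                     # project into the image plane
        in_front = uvw[:, 2] > 0              # ignore points behind the camera
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = mask.shape
        visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        idx = np.flatnonzero(visible)
        keep[idx[mask[v[idx], u[idx]]]] = False  # pixel flagged as vegetation
    return points[keep]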


2021, Vol 34 (1)
Author(s): Peng Jin, Shaoli Liu, Jianhua Liu, Hao Huang, Linlin Yang, ...

In recent years, addressing ill-posed problems by leveraging prior knowledge contained in databases through learning techniques has gained much attention. In this paper, we focus on complete three-dimensional (3D) point cloud reconstruction from a single red-green-blue (RGB) image, a task that cannot be approached using classical reconstruction techniques. For this purpose, we use an encoder-decoder framework to encode the RGB information in a latent space and to predict the 3D structure of the considered object from different viewpoints. The individual predictions are combined into a common representation that is used in a module combining camera pose estimation and rendering, thereby achieving differentiability with respect to the imaging process and the camera pose, and enabling optimization of the two-dimensional prediction error for novel viewpoints. Thus, our method allows end-to-end training and requires neither ground-truth (GT) mask annotations nor ground-truth camera pose annotations. Our evaluation on synthetic and real-world data demonstrates the robustness of our approach to appearance changes and self-occlusions, outperforming current state-of-the-art methods in terms of accuracy, density, and model completeness.
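To make the pipeline concrete, the following is a minimal PyTorch-style sketch of the encoder-decoder stage only: an image is encoded to a latent code, per-viewpoint decoders each predict a point set, and the sets are merged into the common representation, with a pose head standing in for the pose-estimation module. All layer sizes, names, and the six-number pose parameterization are assumptions, and the differentiable renderer that drives the 2D prediction loss is omitted.

import torch
import torch.nn as nn

class SingleImagePointCloudNet(nn.Module):
    # Encode one RGB image; decode one point set per viewpoint; merge them.
    def __init__(self, latent_dim=512, n_views=4, pts_per_view=256):
        super().__init__()
        self.pts_per_view = pts_per_view
        self.encoder = nn.Sequential(              # toy CNN encoder (assumed)
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        self.decoders = nn.ModuleList([            # one decoder per viewpoint
            nn.Sequential(nn.Linear(latent_dim, 1024), nn.ReLU(),
                          nn.Linear(1024, pts_per_view * 3))
            for _ in range(n_views)
        ])
        self.pose_head = nn.Linear(latent_dim, 6)  # axis-angle + translation (assumed)

    def forward(self, img):                        # img: (batch, 3, H, W)
        z = self.encoder(img)
        views = [d(z).view(-1, self.pts_per_view, 3) for d in self.decoders]
        points = torch.cat(views, dim=1)           # common 3D representation
        pose = self.pose_head(z)
        return points, pose

In the full method, the merged points would be rendered differentiably from estimated camera poses and compared with the 2D predictions at novel viewpoints to form the training loss.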


2021, Vol 13 (17), pp. 3534
Author(s): Shanshan Feng, Yun Lin, Yanping Wang, Fei Teng, Wen Hong

3D reconstruction has raised much interest in the field of circular SAR (CSAR). However, three-dimensional imaging results from single-pass CSAR data reveal that the 3D resolution of the system is poor for anisotropic scatterers. According to the imaging mechanism of CSAR, different targets located on the same iso-range line in the zero-Doppler plane fall into the same cell, while the imaging point of a single target falls at different positions at different aspect angles. In this paper, we propose a method for 3D point cloud reconstruction using projections on 2D sub-aperture images. The target and background in the sub-aperture images are separated and binarized. For each projection point of a target, given a series of offsets, the projection point is mapped back to the 3D mesh along its iso-range line, yielding candidate points of the target. The intersection of iso-range lines can be regarded as a voting process: the more iso-range lines intersect at a candidate point, the more votes it receives, and candidates with enough votes are retained. This fully exploits the information contained in the angle dimension of CSAR. The proposed approach is verified on the Gotcha Volumetric SAR Data Set.
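A rough sketch of the voting step follows, under the assumption that the CSAR geometry is wrapped in a user-supplied backproject function that maps a binarized pixel plus a trial offset to a voxel on that pixel's iso-range line; the offset range, vote threshold, and all names are illustrative, not the authors' implementation.

import numpy as np

def vote_3d(sub_images, backproject, grid_shape, min_votes):
    """Accumulate votes in a 3D grid from binarized sub-aperture images.

    sub_images:  list of (image, aspect_angle) pairs; image is a binary array
    backproject: function mapping (row, col, offset, angle) to a voxel index
                 along the pixel's iso-range line (geometry assumed given)
    """
    votes = np.zeros(grid_shape, dtype=np.int32)
    offsets = np.linspace(-10.0, 10.0, 41)       # trial offsets (assumption)
    for image, angle in sub_images:
        for r, c in zip(*np.nonzero(image)):     # each binarized target pixel
            for off in offsets:
                i, j, k = backproject(r, c, off, angle)
                if (0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]
                        and 0 <= k < grid_shape[2]):
                    votes[i, j, k] += 1          # iso-range lines intersecting here
    return np.argwhere(votes >= min_votes)       # surviving candidate points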


2021
Author(s): Dongyi Yao, Fengqi Li, Yi Wang, Hong Yang, Xiuyun Li

Electronics, 2020, Vol 9 (11), pp. 1763
Author(s): Minsung Sung, Jason Kim, Hyeonwoo Cho, Meungsuk Lee, Son-Cheol Yu

This paper proposes a sonar-based underwater object classification method for autonomous underwater vehicles (AUVs) that reconstructs an object's three-dimensional (3D) geometry. A point cloud of the object is generated from sonar images captured while the AUV passes over it, and a neural network then predicts the object's class from the generated point cloud. Because the 3D shape of the object is reconstructed explicitly, the proposed method can classify objects accurately with a straightforward training process. We verified the proposed method in simulations and field experiments.
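The paper does not detail its classifier, but a minimal PointNet-style network of the kind commonly used for point cloud classification could look as follows; all layer sizes and names are assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class PointCloudClassifier(nn.Module):
    # Minimal PointNet-style classifier: shared per-point MLP + max-pooling.
    def __init__(self, n_classes):
        super().__init__()
        self.point_mlp = nn.Sequential(          # applied to every point alike
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, pts):                      # pts: (batch, n_points, 3)
        feats = self.point_mlp(pts)              # per-point features
        global_feat = feats.max(dim=1).values    # order-invariant pooling
        return self.head(global_feat)            # class logits

# Example: score a batch of eight 1024-point clouds over five classes.
# logits = PointCloudClassifier(n_classes=5)(torch.randn(8, 1024, 3))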

