3D Point Cloud Reconstruction Using Inversely Mapping and Voting from Single Pass CSAR Images

2021 ◽  
Vol 13 (17) ◽  
pp. 3534
Author(s):  
Shanshan Feng ◽  
Yun Lin ◽  
Yanping Wang ◽  
Fei Teng ◽  
Wen Hong

3D reconstruction has raised much interest in the field of CSAR. However, three-dimensional imaging with single-pass CSAR data shows that the 3D resolution of the system is poor for anisotropic scatterers. According to the imaging mechanism of CSAR, different targets located on the same iso-range line in the zero-Doppler plane fall into the same resolution cell, while the same target point is imaged at different positions at different aspect angles. In this paper, we propose a method for 3D point cloud reconstruction using projections on 2D sub-aperture images. The target and background in the sub-aperture images are separated and binarized. Given a series of offsets, each projection point of the target is mapped inversely to a 3D mesh along its iso-range line, yielding candidate points of the target. The intersection of iso-range lines can be regarded as a voting process: the more intersections a candidate receives, the higher its number of votes, and candidates with enough votes are retained. This fully exploits the information contained in the angle dimension of CSAR. The proposed approach is verified on the Gotcha Volumetric SAR Data Set.
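The voting step can be sketched compactly: every binarized target pixel in each sub-aperture image is inversely mapped, for a series of offsets, to candidate voxels along its iso-range line, and voxels hit from many aspect angles are kept. The following Python sketch assumes the inverse mapping has already produced integer voxel indices per sub-aperture; function and parameter names such as `vote_candidates` and `vote_threshold` are illustrative, not from the paper:

```python
import numpy as np

def vote_candidates(candidate_voxels, grid_shape, vote_threshold):
    """candidate_voxels: iterable over sub-aperture images, each an (N, 3)
    integer array of voxel indices obtained by mapping every binarized
    target pixel along its iso-range line for a series of offsets."""
    votes = np.zeros(grid_shape, dtype=np.int32)
    for vox in candidate_voxels:
        # clip candidates to the grid and cast one vote per voxel hit
        keep = np.all((vox >= 0) & (vox < np.array(grid_shape)), axis=1)
        v = vox[keep]
        np.add.at(votes, (v[:, 0], v[:, 1], v[:, 2]), 1)
    # voxels intersected by iso-range lines from enough aspect angles survive
    return np.argwhere(votes >= vote_threshold)
```

Thresholding the accumulated vote count implements the "more intersections, more votes" rule described in the abstract.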

2021 ◽  
Vol 34 (1) ◽  
Author(s):  
Peng Jin ◽  
Shaoli Liu ◽  
Jianhua Liu ◽  
Hao Huang ◽  
Linlin Yang ◽  
...  

In recent years, addressing ill-posed problems by leveraging prior knowledge contained in databases through learning techniques has gained much attention. In this paper, we focus on complete three-dimensional (3D) point cloud reconstruction from a single red-green-blue (RGB) image, a task that cannot be approached using classical reconstruction techniques. For this purpose, we use an encoder-decoder framework to encode the RGB information in latent space and to predict the 3D structure of the considered object from different viewpoints. The individual predictions are combined into a common representation that feeds a module combining camera pose estimation and rendering, thereby achieving differentiability with respect to the imaging process and the camera pose, and enabling optimization of the two-dimensional prediction error for novel viewpoints. Thus, our method allows end-to-end training and does not require supervision based on additional ground-truth (GT) mask or camera pose annotations. Our evaluation on synthetic and real-world data demonstrates the robustness of our approach to appearance changes and self-occlusions, outperforming current state-of-the-art methods in terms of accuracy, density, and model completeness.
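As a rough illustration of the encoder-decoder core only (the pose estimation and differentiable rendering modules are omitted, and all layer sizes are assumptions rather than the authors' configuration), a single-view point cloud predictor might look like:

```python
import torch
import torch.nn as nn

class SingleViewPointCloudNet(nn.Module):
    """Sketch of an encoder-decoder mapping one RGB image to an (N, 3)
    point set; all layer sizes are illustrative, not the authors' values."""
    def __init__(self, num_points=1024, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(                      # RGB -> latent code
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(                      # latent -> 3D points
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),
        )
        self.num_points = num_points

    def forward(self, rgb):                                # rgb: (B, 3, H, W)
        z = self.encoder(rgb)
        return self.decoder(z).view(-1, self.num_points, 3)
```

In the paper's pipeline, the predicted points would then be rendered from novel viewpoints so that a 2D prediction error can supervise the network end to end.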


Agronomy ◽  
2019 ◽  
Vol 9 (10) ◽  
pp. 596 ◽  
Author(s):  
Guoxiang Sun ◽  
Xiaochan Wang

Plant morphological data are an important basis for precision agriculture and plant phenomics. The three-dimensional (3D) geometric shape of plants is complex, and the 3D morphology of a plant changes significantly over the full growth cycle. High-throughput measurement of the 3D morphological data of greenhouse plants therefore requires frequent adjustment of the relative position between the sensor and the plant, and consequently frequent recalibration of the Kinect sensor, which makes the multiview 3D point cloud reconstruction process considerably more tedious. A high-throughput, rapid 3D greenhouse plant point cloud reconstruction method based on autonomous Kinect v2 sensor position calibration is proposed for 3D phenotyping of greenhouse plants. Two red-green-blue-depth (RGB-D) images of the turntable surface are acquired by the Kinect v2 sensor, and the central point and normal vector of the turntable's axis of rotation are calculated automatically. The coordinate systems of RGB-D images captured at various view angles are unified based on this central point and normal vector to achieve coarse registration. The iterative closest point (ICP) algorithm is then used for precise multiview point cloud registration, thereby achieving rapid 3D point cloud reconstruction of the greenhouse plant. Greenhouse tomato plants were selected as measurement objects in this study. The results show that the proposed 3D point cloud reconstruction method is accurate and stable, and can be used to reconstruct 3D point clouds for high-throughput plant phenotyping analysis and to extract the morphological parameters of plants.
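A minimal sketch of the coarse-then-fine registration pipeline, assuming Open3D point clouds, a known capture angle per view, and a turntable axis (center point and normal vector) already recovered from the RGB-D images; the ICP threshold and all names are illustrative, not the paper's implementation:

```python
import numpy as np
import open3d as o3d

def register_views(clouds, angles_deg, center, axis, icp_dist=0.005):
    """Coarse-then-fine registration sketch: rotate each view about the
    turntable axis (center, axis) by its capture angle, then refine with
    point-to-point ICP against the accumulated model."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    model = clouds[0]                          # first view defines the frame
    for cloud, ang in zip(clouds[1:], angles_deg[1:]):
        # coarse: rotation about the turntable axis through its center point
        R = o3d.geometry.get_rotation_matrix_from_axis_angle(axis * np.deg2rad(ang))
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = np.asarray(center) - R @ np.asarray(center)
        # fine: ICP refines the coarse alignment
        result = o3d.pipelines.registration.registration_icp(
            cloud, model, icp_dist, T,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        model += cloud.transform(result.transformation)
    return model
```

Because the turntable geometry supplies a good initial transformation, ICP only has to correct small residual misalignments, which is what makes the reconstruction fast.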


Author(s):  
Romina Dastoorian ◽  
Ahmad E. Elhabashy ◽  
Wenmeng Tian ◽  
Lee J. Wells ◽  
Jaime A. Camelio

With the latest advancements in three-dimensional (3D) measurement technologies, obtaining 3D point cloud data for inspection purposes in manufacturing is becoming more common. While 3D point cloud data allow for better inspection capabilities, their analysis is typically challenging. Especially with unstructured 3D point cloud data, containing coordinates at random locations, the challenges increase with higher levels of noise and larger volumes of data. Hence, the objective of this paper is to extend the previously developed Adaptive Generalized Likelihood Ratio (AGLR) approach to handle unstructured 3D point cloud data for automated surface defect inspection in manufacturing. More specifically, the AGLR approach was implemented in a practical case study to inspect twenty-seven samples, each with a unique fault; the faults were designed to cover three different sizes, three different magnitudes, and three different locations. The results show that the AGLR approach can differentiate between non-faulty surfaces and a varying range of faulty surfaces while pinpointing the fault location. This work also validates the previously developed AGLR approach in a practical scenario.
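The abstract does not spell out the AGLR formulation, but a simplified generalized likelihood ratio scan over point-wise deviations from the nominal surface conveys the idea: for Gaussian noise with known variance, the GLR statistic for a mean shift within a window reduces to a closed form. The windowing scheme and all names below are assumptions, not the authors' method:

```python
import numpy as np

def glr_scan(deviations, window_ids, sigma):
    """Simplified generalized likelihood ratio (GLR) scan, not the full
    AGLR: 'deviations' are point-wise distances from the nominal surface,
    'window_ids' groups points into candidate fault regions. Under Gaussian
    noise with known sigma, the GLR statistic for a mean shift in window W
    reduces to (sum of deviations in W)^2 / (|W| * sigma^2)."""
    stats = {}
    for w in np.unique(window_ids):
        d = deviations[window_ids == w]
        stats[w] = d.sum() ** 2 / (d.size * sigma ** 2)
    # the window with the largest statistic localizes the suspected fault
    return max(stats, key=stats.get), stats
```

Large statistics flag windows whose deviations are unlikely under noise alone, which is how such a test can both detect a fault and pinpoint its location.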


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1763
Author(s):  
Minsung Sung ◽  
Jason Kim ◽  
Hyeonwoo Cho ◽  
Meungsuk Lee ◽  
Son-Cheol Yu

This paper proposes a sonar-based underwater object classification method for autonomous underwater vehicles (AUVs) by reconstructing an object’s three-dimensional (3D) geometry. The point cloud of underwater objects can be generated from sonar images captured while the AUV passes over the object. Then, a neural network can predict the class given the generated point cloud. By reconstructing the 3D shape of the object, the proposed method can classify the object accurately through a straightforward training process. We verified the proposed method by performing simulations and field experiments.
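The abstract does not describe the network itself, but a minimal PointNet-style classifier over the reconstructed point cloud illustrates the idea; layer sizes and the class count are assumptions:

```python
import torch
import torch.nn as nn

class PointCloudClassifier(nn.Module):
    """PointNet-style sketch of a classifier over reconstructed point
    clouds; layer sizes and num_classes are illustrative only."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.point_mlp = nn.Sequential(        # shared per-point features
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(             # global feature -> class scores
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, pts):                    # pts: (B, 3, N)
        feat = self.point_mlp(pts)
        global_feat = feat.max(dim=2).values   # order-invariant pooling
        return self.head(global_feat)
```

Classifying the reconstructed 3D geometry rather than raw sonar images makes the classifier largely independent of the viewing angle at which the AUV passed over the object.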


Author(s):  
J. Wolf ◽  
S. Discher ◽  
L. Masopust ◽  
S. Schulz ◽  
R. Richter ◽  
...  

Ground-penetrating 2D radar scans are captured in road environments to examine pavement condition and below-ground variations such as lowerings and developing potholes. 3D point clouds captured above ground provide a precise digital representation of the road's surface and the surrounding environment. If both data sources are captured for the same area, a combined visualization is a valuable tool for infrastructure maintenance tasks. This paper presents visualization techniques developed for the combined visual exploration of data captured in road environments. The main challenges are positioning the ground radar data within the 3D environment and reducing occlusion between the individual data sets. By projecting the measured ground radar data onto the precise trajectory of the scan, it can be displayed within the context of the 3D point cloud representation of the road environment. We show that customizable overlay, filtering, and cropping techniques enable insightful data exploration. A 3D renderer combines both data sources. To enable inspection of areas of interest, ground radar data can be elevated above ground level for better visibility, and an interactive lens approach makes it possible to visualize data sources that are currently occluded by others. The visualization techniques prove to be a valuable tool for ground-layer anomaly inspection and were evaluated on a real-world data set. The combination of 2D ground radar scans with 3D point cloud data improves data interpretation by providing context information (e.g., about manholes in the street) that can be accessed directly during evaluation.
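A minimal sketch of positioning the radar data in the 3D scene, assuming each radargram column is associated with a 3D trajectory point and its samples are extruded along depth; the optional lift parameter mimics elevating the data above ground for occlusion-free viewing, and all names and units are illustrative:

```python
import numpy as np

def radargram_to_3d(radargram, trajectory, depth_step, lift=0.0):
    """Place a 2D ground radar scan in 3D: column i of the radargram is
    anchored at trajectory point i and extruded downward along depth; a
    positive 'lift' raises the whole slice above the surface."""
    n_depth, n_traces = radargram.shape
    assert trajectory.shape == (n_traces, 3)
    points, values = [], []
    for i in range(n_traces):
        x, y, z = trajectory[i]
        for j in range(n_depth):
            # each sample sits below (or, when lifted, above) the surface point
            points.append((x, y, z - j * depth_step + lift))
            values.append(radargram[j, i])
    return np.asarray(points), np.asarray(values)
```

The returned points and sample values can then be rendered alongside the above-ground point cloud, so below-ground anomalies appear in their real-world spatial context.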

