Parallel Point Clouds: Hybrid Point Cloud Generation and 3D Model Enhancement via Virtual–Real Integration

2021 ◽  
Vol 13 (15) ◽  
pp. 2868
Author(s):  
Yonglin Tian ◽  
Xiao Wang ◽  
Yu Shen ◽  
Zhongzheng Guo ◽  
Zilei Wang ◽  
...  

Three-dimensional information perception from point clouds is of vital importance for improving the ability of machines to understand the world, especially for autonomous driving and unmanned aerial vehicles. Data annotation for point clouds is one of the most challenging and costly tasks. In this paper, we propose a closed-loop, virtual–real interactive point cloud generation and model-upgrading framework called Parallel Point Clouds (PPCs). To the best of our knowledge, this is the first time that the training pipeline has been changed from an open-loop to a closed-loop mechanism. The feedback from the evaluation results is used to update the training dataset, benefiting from the flexibility of artificial scenes. Under this framework, a point-based LiDAR simulation model is proposed, which greatly simplifies the scanning operation. In addition, a group-based placement method is put forward to integrate hybrid point clouds by locating candidate positions for virtual objects in real scenes. Taking advantage of CAD models and mobile LiDAR devices, two hybrid point cloud datasets, i.e., ShapeKITTI and MobilePointClouds, are built for 3D detection tasks. With almost zero labor cost for data annotation of newly added objects, the models (PointPillars) trained with ShapeKITTI and MobilePointClouds achieved 78.6% and 60.0%, respectively, of the average precision of the model trained with real data on 3D detection.
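
As a rough illustration of the point-based LiDAR simulation idea (not the authors' simulator), the sketch below casts rays from a virtual spinning scanner and keeps their intersections with a simple analytic object standing in for a CAD model; the scanner settings and the sphere stand-in are assumptions.

```python
# A minimal sketch of point-based LiDAR simulation: rays are cast from a virtual
# spinning scanner at the origin and intersected with a sphere that stands in
# for a CAD model surface. Illustrative only; not the PPC simulation model.
import numpy as np

def simulate_lidar_sphere(center, radius, n_azimuth=360, elevations_deg=(-5, 0, 5)):
    """Return the hit points of a virtual LiDAR scan on a sphere."""
    hits = []
    for elev in np.deg2rad(elevations_deg):            # vertical channels
        for az in np.linspace(0, 2 * np.pi, n_azimuth, endpoint=False):
            d = np.array([np.cos(elev) * np.cos(az),    # unit ray direction
                          np.cos(elev) * np.sin(az),
                          np.sin(elev)])
            # Ray-sphere intersection: |t*d - c|^2 = r^2 with the scanner at the origin.
            b = -2.0 * d.dot(center)
            c = center.dot(center) - radius ** 2
            disc = b * b - 4.0 * c
            if disc >= 0:
                t = (-b - np.sqrt(disc)) / 2.0          # nearest intersection along the ray
                if t > 0:
                    hits.append(t * d)
    return np.asarray(hits)

cloud = simulate_lidar_sphere(center=np.array([10.0, 0.0, 0.0]), radius=1.0)
print(cloud.shape)  # points sampled on the visible side of the sphere
```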

GigaScience ◽  
2021 ◽  
Vol 10 (5) ◽  
Author(s):  
Teng Miao ◽  
Weiliang Wen ◽  
Yinglun Li ◽  
Sheng Wu ◽  
Chao Zhu ◽  
...  

Abstract Background The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable; thus, the high-throughput segmentation of many shoots is challenging. Although deep learning can feasibly solve this issue, software tools for 3D point cloud annotation to construct the training dataset are lacking. Results We propose a top-down point cloud segmentation algorithm using optimal transportation distance for maize shoots. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations, including stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot and consumes 10–20% of the total time if only coarse segmentation is required. Fine segmentation is more detailed than coarse segmentation, especially at the organ connection regions. The accuracy of coarse segmentation can reach 97.2% of that of fine segmentation. Conclusion Label3DMaize integrates point cloud segmentation algorithms and manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
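
The optimal-transportation distance at the core of the segmentation can be illustrated with a short, hedged sketch: an entropy-regularized (Sinkhorn) cost between two point sets with uniform weights, which is only a stand-in for the algorithm used in Label3DMaize.

```python
# A minimal sketch of an optimal-transportation distance between two point sets,
# assuming uniform point weights and entropic (Sinkhorn) regularization.
import numpy as np

def sinkhorn_distance(X, Y, reg=0.1, n_iter=200):
    """Entropy-regularized OT cost between point sets X (n,3) and Y (m,3)."""
    n, m = len(X), len(Y)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)                  # uniform weights
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2) ** 2   # squared-distance cost
    K = np.exp(-C / reg)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):                                          # Sinkhorn iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                                  # transport plan
    return float(np.sum(P * C))

X = np.random.rand(100, 3)
Y = np.random.rand(120, 3) + 0.1
print(sinkhorn_distance(X, Y))
```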


Author(s):  
T. O. Chan ◽  
D. D. Lichti

Lamp poles are among the most abundant highway and community components in modern cities. Their supporting parts are primarily tapered octagonal cones specifically designed for wind resistance. The geometry and positions of lamp poles are important information for various applications: for example, monitoring the deformation of aged lamp poles, maintaining an efficient highway GIS system, and facilitating possible feature-based calibration of mobile LiDAR systems. In this paper, we present a novel geometric model for octagonal lamp poles. The model consists of seven parameters, including a rotation about the z-axis, and the points are constrained by the trigonometric properties of 2D octagons after the rotations are applied. For the geometric fitting of a lamp pole point cloud captured by a terrestrial LiDAR, accurate initial parameter values are essential. They can be estimated by first fitting the points to a circular cone model, followed by some basic point cloud processing techniques. The model was verified by fitting both simulated and real data. The real data include several lamp pole point clouds captured by (1) a Faro Focus 3D and (2) a Velodyne HDL-32E. The fitting results using the proposed model are promising, and an improvement in fitting accuracy of up to 2.9 mm was realized for the real lamp pole point clouds compared with the conventional circular cone model. The overall result suggests that the proposed model is appropriate and rigorous.
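
A simplified sketch of the octagon constraint follows; it assumes a vertical axis and a linearly tapered apothem, so it has fewer parameters than the paper's seven-parameter model, but it shows how the polar equation of a regular octagon yields radial residuals for fitting.

```python
# A minimal sketch (not the paper's model) of the octagon constraint: the radial
# distance from the pole axis to the octagon boundary follows
# r(theta) = apothem / cos(theta folded into one 45-degree sector).
import numpy as np

def octagon_cone_residuals(points, cx, cy, apothem0, taper, kappa):
    """Signed radial residuals of points (N,3) w.r.t. a vertical tapered octagonal cone.

    cx, cy   : axis position in x-y
    apothem0 : apothem (centre-to-edge distance) at z = 0
    taper    : apothem decrease per unit height
    kappa    : rotation of the octagon about the z-axis
    """
    dx, dy, z = points[:, 0] - cx, points[:, 1] - cy, points[:, 2]
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx) - kappa
    sector = np.pi / 8.0                                 # half of a 45-degree sector
    theta_f = (theta + sector) % (np.pi / 4.0) - sector  # fold angle into one sector
    apothem_z = apothem0 - taper * z                     # linear taper along height
    boundary_r = apothem_z / np.cos(theta_f)             # octagon polar equation
    return r - boundary_r                                # signed radial misfit

pts = np.random.rand(50, 3) * [0.2, 0.2, 6.0]            # toy points near a vertical axis
print(octagon_cone_residuals(pts, 0.1, 0.1, 0.12, 0.01, 0.0)[:5])
```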


2020 ◽  
Vol 12 (18) ◽  
pp. 2923
Author(s):  
Tengfei Zhou ◽  
Xiaojun Cheng ◽  
Peng Lin ◽  
Zhenlun Wu ◽  
Ensheng Liu

Due to environmental and human factors, as well as the instrument itself, there are many uncertainties in point clouds, which directly affect data quality and the accuracy of subsequent processing, such as point cloud segmentation, 3D modeling, etc. In this paper, to address this problem, stochastic information of the point cloud coordinates is taken into account, and on the basis of the scanner observation principle within the Gauss–Helmert model, a novel general point-based self-calibration method is developed for terrestrial laser scanners, incorporating both five additional parameters and six exterior orientation parameters. For cases where the instrument accuracy differs from the nominal values, a variance component estimation algorithm is implemented to reweight the outliers after the residual errors of the observations are obtained. Considering that the proposed method is essentially a nonlinear model, the Gauss–Newton iteration method is applied to derive the solutions for the additional parameters and exterior orientation parameters. We conducted experiments using simulated and real data and compared the results with two existing methods. The experimental results showed that the proposed method could improve the point accuracy from 10⁻⁴ to 10⁻⁸ (a priori known) and 10⁻⁷ (a priori unknown), and reduced the correlation among the parameters (by approximately 60% in terms of volume). However, it is undeniable that some correlations increased instead, which is a limitation of the general method.
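
The Gauss–Newton iteration mentioned above can be sketched generically as follows; the residual function here is a toy placeholder, not the Gauss–Helmert scanner model or the paper's parameterization.

```python
# A minimal sketch of the Gauss-Newton iteration for nonlinear least squares,
# using a numerically differentiated Jacobian and a toy exponential model.
import numpy as np

def gauss_newton(residual_fn, x0, n_iter=20, tol=1e-10):
    """Minimize ||residual_fn(x)||^2 starting from x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual_fn(x)
        J = np.empty((r.size, x.size))
        eps = 1e-7
        for j in range(x.size):                          # forward-difference Jacobian
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual_fn(xp) - r) / eps
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]       # Gauss-Newton step
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy example: fit y = a * exp(b * t) to noisy data.
t = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * np.random.randn(t.size)
fit = gauss_newton(lambda p: p[0] * np.exp(p[1] * t) - y, x0=[1.0, -1.0])
print(fit)   # approximately [2.0, -1.5]
```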


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1573 ◽  
Author(s):  
Haojie Liu ◽  
Kang Liao ◽  
Chunyu Lin ◽  
Yao Zhao ◽  
Meiqin Liu

LiDAR sensors can provide dependable 3D spatial information at a low frequency (around 10 Hz) and have been widely applied in the fields of autonomous driving and unmanned aerial vehicles (UAVs). However, the higher frame rate of the camera (around 20 Hz) has to be decreased to match the LiDAR in a multi-sensor system. In this paper, we propose a novel Pseudo-LiDAR interpolation network (PLIN) to increase the frequency of LiDAR sensor data. PLIN can generate temporally and spatially high-quality point cloud sequences to match the high frequency of cameras. To achieve this goal, we design a coarse interpolation stage guided by consecutive sparse depth maps and the motion relationship, and a refined interpolation stage guided by the realistic scene. Using this coarse-to-fine cascade structure, our method can progressively perceive multi-modal information and generate accurate intermediate point clouds. To the best of our knowledge, this is the first deep framework for Pseudo-LiDAR point cloud interpolation, which shows appealing applications in navigation systems equipped with both LiDAR and cameras. Experimental results demonstrate that PLIN achieves promising performance on the KITTI dataset, significantly outperforming the traditional interpolation method and the state-of-the-art video interpolation technique.
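
For context, the sketch below shows the kind of naive baseline such a network improves upon: linearly blending two consecutive sparse depth maps to obtain an intermediate frame; the data layout (zeros marking empty pixels) and the blend rule are assumptions, not PLIN.

```python
# A minimal sketch of naive temporal interpolation between two sparse LiDAR depth
# maps (values in metres, zeros marking empty pixels). Purely illustrative.
import numpy as np

def interpolate_depth(prev_depth, next_depth, alpha=0.5):
    """Blend two sparse depth maps at time fraction alpha in (0, 1)."""
    both = (prev_depth > 0) & (next_depth > 0)        # pixels valid in both frames
    only_prev = (prev_depth > 0) & ~both
    only_next = (next_depth > 0) & ~both
    mid = np.zeros_like(prev_depth)
    mid[both] = (1 - alpha) * prev_depth[both] + alpha * next_depth[both]
    mid[only_prev] = prev_depth[only_prev]            # fall back to the single valid frame
    mid[only_next] = next_depth[only_next]
    return mid

d0 = np.zeros((4, 6)); d0[1, 2] = 10.0; d0[2, 3] = 8.0
d1 = np.zeros((4, 6)); d1[1, 2] = 12.0; d1[3, 1] = 5.0
print(interpolate_depth(d0, d1))
```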


Author(s):  
P. Polewski ◽  
A. Erickson ◽  
W. Yao ◽  
N. Coops ◽  
P. Krzystek ◽  
...  

Airborne Laser Scanning (ALS) and terrestrial photogrammetry are methods applicable for mapping forested environments. While ground-based techniques provide valuable information about the forest understory, the measured point clouds are normally expressed in a local coordinate system, whose transformation into a georeferenced system requires additional effort. In contrast, ALS point clouds are usually georeferenced, yet the point density near the ground may be poor under dense overstory conditions. In this work, we propose to combine the strengths of the two data sources by co-registering the respective point clouds, thus enriching the georeferenced ALS point cloud with detailed understory information in a fully automatic manner. Due to markedly different sensor characteristics, co-registration methods that expect a high geometric similarity between keypoints are not suitable in this setting. Instead, our method operates at the object (tree stem) level. We first calculate approximate stem positions in the terrestrial and ALS point clouds and construct, for each stem, a descriptor which quantifies the 2D and vertical distances to other stem centers (at ground height). Then, the similarities between all descriptor pairs from the two point clouds are calculated, and standard graph maximum matching techniques are employed to compute corresponding stem pairs (tiepoints). Finally, the tiepoint subset yielding the optimal rigid transformation between the terrestrial and ALS coordinate systems is determined. We test our method on simulated tree positions and a plot situated in the northern interior of the Coast Range in western Oregon, USA, using ALS data (76 × 121 m²) and a photogrammetric point cloud (33 × 35 m²) derived from terrestrial photographs taken with a handheld camera. Results on both simulated and real data show that the proposed stem descriptors are discriminative enough to derive good correspondences. Specifically, for the real plot data, 24 corresponding stems were co-registered with an average 2D position deviation of 66 cm.
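
A hedged sketch of the stem-level matching idea is given below; it uses sorted 2D inter-stem distances as descriptors, the Hungarian assignment as a stand-in for graph maximum matching, and a 2D Kabsch fit for the rigid transform, none of which is the authors' exact procedure.

```python
# A minimal sketch: describe each stem by its sorted 2D distances to neighbouring
# stems, match descriptors across the two clouds, and estimate a 2D rigid transform.
import numpy as np
from scipy.optimize import linear_sum_assignment

def stem_descriptors(stems, k=5):
    """Sorted distances from each stem to its k nearest neighbours (2D positions)."""
    d = np.linalg.norm(stems[:, None, :] - stems[None, :, :], axis=2)
    return np.sort(d, axis=1)[:, 1:k + 1]               # drop the zero self-distance

def match_and_align(stems_als, stems_terr):
    cost = np.linalg.norm(stem_descriptors(stems_als)[:, None, :]
                          - stem_descriptors(stems_terr)[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)             # descriptor correspondences
    A, B = stems_als[rows], stems_terr[cols]
    # Kabsch/Procrustes: rigid 2D transform mapping terrestrial stems onto ALS stems.
    A0, B0 = A - A.mean(0), B - B.mean(0)
    U, _, Vt = np.linalg.svd(B0.T @ A0)
    R = U @ Vt
    if np.linalg.det(R) < 0:                             # keep a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    t = A.mean(0) - B.mean(0) @ R
    return R, t

als = np.random.rand(12, 2) * 50
angle = np.deg2rad(20)
Rt = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
terr = (als - als.mean(0)) @ Rt.T + als.mean(0) + [3.0, -2.0]
R, t = match_and_align(als, terr)
print(np.round(terr @ R + t - als, 6))                   # near zero after alignment
```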


2021 ◽  
Vol 13 (24) ◽  
pp. 5071
Author(s):  
Jing Zhang ◽  
Jiajun Wang ◽  
Da Xu ◽  
Yunsong Li

The use of LiDAR point clouds for accurate three-dimensional perception is crucial for realizing high-level autonomous driving systems. Considering the drawbacks of current point cloud object-detection algorithms, this paper proposes HCNet, an algorithm that combines an attention mechanism with adaptive adjustment, starting from feature fusion, to overcome the sparse and uneven distribution of point clouds. Inspired by the basic idea of the attention mechanism, a feature-fusion HC module with height attention and channel attention weighted in parallel is proposed to fuse features from multiple pseudo-images. The use of several weighting mechanisms enhances the expressiveness of the feature information. Additionally, we designed an adaptively adjusted detection head that also addresses the sparsity of the point cloud from the perspective of original information fusion and reduces the interference caused by the uneven distribution of the point cloud through adaptive adjustment. The results show that HCNet achieves better accuracy than other one-stage networks and even two-stage R-CNNs under some of the evaluation metrics, while running at a detection rate of 30 FPS. Especially for hard samples, the proposed algorithm has better detection performance than many existing algorithms.
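
The channel-attention branch of the HC module can be illustrated with a minimal squeeze-and-excitation style block; the layer sizes and pseudo-image resolution below are assumptions, not the HCNet configuration.

```python
# A minimal sketch (PyTorch) of a squeeze-and-excitation style channel attention
# block applied to pseudo-image features; hyper-parameters are assumed.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # per-channel weights in (0, 1)
        )

    def forward(self, x):                              # x: (B, C, H, W) pseudo-image
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # excite: reweight the channels

features = torch.randn(2, 64, 128, 128)                # a toy BEV pseudo-image batch
print(ChannelAttention(64)(features).shape)            # torch.Size([2, 64, 128, 128])
```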


2018 ◽  
Vol 42 (3) ◽  
pp. 457-467 ◽  
Author(s):  
A. N. Kamaev ◽  
D. A. Karmanov

The task of autonomous underwater vehicle (AUV) navigation is considered in this paper. The images obtained from an onboard stereo camera are used to build point clouds attached to particular AUV positions. Quantized SIFT descriptors of the points are stored in a metric tree to organize an efficient search procedure using a best-bin-first approach. Correspondences for a new point cloud are searched in a compact group of point clouds that have the largest number of similar descriptors stored in the tree. The new point cloud can thus be positioned relative to the other clouds without any prior information about the AUV position or the uncertainty of this position. This approach increases the reliability of the AUV navigation system and makes it insensitive to data losses, textureless seafloor regions, and long passes without trajectory intersections. Several algorithms are described in the paper: an algorithm for point cloud computation, an algorithm for establishing point cloud correspondences, and an algorithm for building groups of potentially linked point clouds to speed up the global search for correspondences. The general navigation algorithm, consisting of three parallel subroutines (image adding, search tree updating, and global optimization), is also presented. The proposed navigation system is tested on real and synthetic data. Tests on real data showed that the trajectory can be built even for an image sequence with 60% data losses, where successive images have either small or zero overlap. Tests on synthetic data showed that the constructed trajectory is close to the true one even for long missions. The average speed of image processing by the proposed navigation system is about 3 frames per second on a mid-range desktop CPU.
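
The descriptor-indexing step can be sketched as follows, with a KD-tree standing in for the metric tree with best-bin-first search and random vectors standing in for quantized SIFT descriptors.

```python
# A minimal sketch of descriptor indexing for candidate retrieval: descriptors of
# previously stored point clouds are kept in one spatial index, and a new cloud
# votes for the stored clouds owning its nearest descriptors. Illustrative only.
import numpy as np
from collections import Counter
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
stored = {cid: rng.normal(size=(200, 128)) for cid in range(5)}   # cloud id -> descriptors
all_desc = np.vstack(list(stored.values()))
owner = np.repeat(list(stored.keys()), [len(d) for d in stored.values()])
tree = cKDTree(all_desc)

def candidate_clouds(new_desc, k=1, top=2):
    """Return the stored cloud ids with the most nearest-descriptor votes."""
    _, idx = tree.query(new_desc, k=k)                 # nearest stored descriptors
    votes = Counter(owner[idx.ravel()])
    return [cid for cid, _ in votes.most_common(top)]

# A new cloud resembling stored cloud 3 (its descriptors plus noise) should vote for it.
query = stored[3][:50] + 0.05 * rng.normal(size=(50, 128))
print(candidate_clouds(query))
```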


2021 ◽  
Vol 13 (22) ◽  
pp. 4497
Author(s):  
Jianjun Zou ◽  
Zhenxin Zhang ◽  
Dong Chen ◽  
Qinghua Li ◽  
Lan Sun ◽  
...  

Point cloud registration is the foundation and a key step for many vital applications, such as digital cities, autonomous driving, passive positioning, and navigation. The differences among spatial objects and the structural complexity of object surfaces are the main challenges for the registration problem. In this paper, we propose a graph attention capsule model (named GACM) for the efficient registration of terrestrial laser scanning (TLS) point clouds in urban scenes, which fuses graph attention convolution and a three-dimensional (3D) capsule network to extract local point cloud features and obtain 3D feature descriptors. These descriptors can take into account the differences in spatial structure and point density between objects and make the spatial features of ground objects more prominent. During training, we used both matched and non-matched points to train the model. During registration testing, the points in the neighborhood of each keypoint are sent to the trained network to obtain feature descriptors, and the rotation and translation matrices are calculated after constructing a K-dimensional (KD) tree and applying the random sample consensus (RANSAC) algorithm. Experiments show that the proposed method achieves more efficient registration results and higher robustness than other state-of-the-art registration methods in the pairwise registration of point clouds.
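
The final RANSAC stage of such a pipeline can be sketched independently of the learned descriptors: given putative keypoint correspondences (e.g. from nearest-neighbour matching), RANSAC fits a rigid transform to random 3-point samples and refits on the inliers; the thresholds and toy data below are assumptions, not the GACM implementation.

```python
# A minimal sketch of RANSAC-based rigid registration from putative 3D correspondences.
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                           # enforce a proper rotation
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cd - R @ cs

def ransac_registration(src, dst, n_iter=500, thresh=0.05, rng=np.random.default_rng(0)):
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        sample = rng.choice(len(src), size=3, replace=False)
        R, t = rigid_from_pairs(src[sample], dst[sample])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rigid_from_pairs(src[best_inliers], dst[best_inliers])   # refit on all inliers

# Toy check: 80 true correspondences plus 20 gross outliers.
src = np.random.rand(100, 3)
Rt = np.linalg.qr(np.random.randn(3, 3))[0]
if np.linalg.det(Rt) < 0:
    Rt = -Rt                                           # make the ground truth a rotation
dst = src @ Rt.T + np.array([0.3, -0.2, 0.1])
dst[80:] += np.random.rand(20, 3) + 0.1                # corrupt some matches
R, t = ransac_registration(src, dst)
print(np.allclose(R, Rt, atol=1e-6), np.round(t, 3))
```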


Author(s):  
Y. Xia ◽  
W. Liu ◽  
Z. Luo ◽  
Y. Xu ◽  
U. Stilla

Abstract. Completing the 3D shape of vehicles from real scan data, which aims to estimate the complete geometry of vehicles from partial inputs, plays an important role in the fields of remote sensing and autonomous driving. With the recent popularity of deep learning, plenty of data-driven methods have been proposed. However, most of them usually require additional information as prior knowledge for the input, for example, semantic labels and symmetry assumptions. In this paper, we design a novel end-to-end network, termed S2U-Net, to complete the 3D shapes of vehicles from partial and sparse point clouds. Our network includes two modules: an encoder and a generator. The encoder is designed to extract the global feature of the incomplete and sparse point cloud, while the generator is designed to produce a fine-grained and dense completion. In particular, we adopt an upsampling strategy to output a more uniform point cloud. Experimental results on the KITTI dataset illustrate that our method achieves better performance than the state of the art in terms of distribution uniformity and completion quality. Specifically, we improve translation accuracy by 50.8% and rotation accuracy by 40.6% when evaluating the completed results in a point cloud registration task.
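
A minimal PointNet-style encoder shows how a single global feature can be pooled from a partial point cloud, as the encoder module is described to do; the layer widths are assumptions and this is not the S2U-Net architecture.

```python
# A minimal sketch (PyTorch) of a global-feature encoder: a shared per-point MLP
# followed by max pooling yields one order-invariant feature vector per cloud.
import torch
import torch.nn as nn

class GlobalEncoder(nn.Module):
    def __init__(self, feat_dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared MLP applied per point
            nn.Conv1d(3, 128, 1), nn.ReLU(inplace=True),
            nn.Conv1d(128, 256, 1), nn.ReLU(inplace=True),
            nn.Conv1d(256, feat_dim, 1),
        )

    def forward(self, points):                         # points: (B, N, 3)
        x = self.mlp(points.transpose(1, 2))           # (B, feat_dim, N)
        return torch.max(x, dim=2).values              # max-pool over points

partial = torch.randn(4, 512, 3)                       # a batch of partial vehicle scans
print(GlobalEncoder()(partial).shape)                  # torch.Size([4, 1024])
```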


2004 ◽  
Vol 43 (03) ◽  
pp. 296-301 ◽  
Author(s):  
R. Takalo ◽  
J. P. Saul ◽  
I. Korhonen

Summary Objectives: Both open- and closed-loop models of beat-to-beat cardiovascular control have been suggested. We tested whether the two modelling approaches yield different results with real data when assessing cardiopulmonary and baroreflex gains. Methods: Two autoregressive models are described to resolve causal relationships between systolic blood pressure (SBP), RR interval (RRI), and instantaneous lung volume (ILV): a closed-loop model, which takes into account both the RRI changes induced by changes in SBP and the SBP changes mediated by changes in RRI, and an open-loop model, which does not have a link from RRI to SBP. The performance of the models was compared in 14 healthy men in supine and standing positions under control conditions and under β-sympathetic and parasympathetic pharmacological blockades. Transfer function gains were computed from ILV to RRI (cardiopulmonary gain) and from SBP to RRI (baroreflex gain). The measurements were made under controlled random-interval breathing. Results: The gains identified by the open-loop model tended to be higher than those from the closed-loop model, but the differences did not reach statistical significance. Importantly, the two models discriminated the changes in transfer gains between different interventions equally well. Conclusions: Because the interactions between SBP and RRI physiologically occur in a closed loop, the closed-loop model provides a theoretical advantage over the open-loop model. However, in practice, there seems to be little reason to select one over the other, given the methodological errors incurred when estimating cardiopulmonary or baroreflex transfer gains.
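
For the open-loop case, estimating a transfer gain from an autoregressive model can be sketched on synthetic beat series; the model orders, sampling assumptions, and evaluation frequency below are illustrative, not those of the study.

```python
# A minimal sketch of an open-loop ARX estimate of the SBP-to-RRI transfer gain,
# fitted by least squares on synthetic beat-to-beat series.
import numpy as np

def arx_fit(u, y, na=4, nb=4):
    """Least-squares ARX fit: y[n] = sum a_k*y[n-k] + sum b_k*u[n-k] + e[n]."""
    n0 = max(na, nb)
    rows = []
    for n in range(n0, len(y)):
        rows.append(np.concatenate([y[n - na:n][::-1], u[n - nb:n][::-1]]))
    theta, *_ = np.linalg.lstsq(np.asarray(rows), y[n0:], rcond=None)
    return theta[:na], theta[na:]                      # AR coefficients, input coefficients

def transfer_gain(a, b, freq_hz, fs=1.0):
    """Gain |B(e^{-jw}) / (1 - A(e^{-jw}))| of the fitted ARX model."""
    w = 2 * np.pi * freq_hz / fs
    za = np.exp(-1j * w * np.arange(1, len(a) + 1))
    zb = np.exp(-1j * w * np.arange(1, len(b) + 1))
    return np.abs(np.sum(b * zb) / (1 - np.sum(a * za)))

# Synthetic beat series: RRI responds to SBP through a known first-order relation.
rng = np.random.default_rng(1)
sbp = rng.normal(120, 5, size=2000)
rri = np.zeros_like(sbp)
for n in range(1, len(sbp)):
    rri[n] = 0.5 * rri[n - 1] + 8.0 * (sbp[n - 1] - 120) + rng.normal(0, 1)
a, b = arx_fit(sbp - 120, rri, na=2, nb=2)
print(transfer_gain(a, b, freq_hz=0.1))                # gain of the fitted model at 0.1 Hz
```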

