Automated Extraction of 3D Trees from Mobile LiDAR Point Clouds

Author(s):  
Y. Yu ◽  
J. Li ◽  
H. Guan ◽  
D. Zai ◽  
C. Wang

This paper presents an automated algorithm for extracting 3D trees directly from 3D mobile light detection and ranging (LiDAR) data. To reduce both computational and spatial complexity, ground points are first filtered out of the raw 3D point cloud via block-based elevation filtering. Off-ground points are then grouped into clusters representing individual objects through Euclidean distance clustering and voxel-based normalized cut segmentation. Finally, a model-driven method is proposed to extract 3D trees based on a pairwise 3D shape descriptor. The proposed algorithm is tested on a set of mobile LiDAR point clouds acquired by a RIEGL VMX-450 system. The results demonstrate the feasibility and effectiveness of the proposed algorithm.
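The abstract does not give implementation details for the block-based elevation filter; a minimal sketch, assuming square XY blocks and a fixed above-minimum height threshold (both parameter values hypothetical), might look like this:

```python
import math

def block_based_ground_filter(points, block_size=2.0, height_thresh=0.3):
    """Partition the XY plane into square blocks; within each block, points
    whose elevation lies within height_thresh of the block's minimum z are
    labelled ground, the rest off-ground. Returns two lists of point indices."""
    blocks = {}
    for i, (x, y, z) in enumerate(points):
        key = (math.floor(x / block_size), math.floor(y / block_size))
        blocks.setdefault(key, []).append(i)
    ground, off_ground = [], []
    for idx in blocks.values():
        z_min = min(points[i][2] for i in idx)
        for i in idx:
            (ground if points[i][2] - z_min <= height_thresh else off_ground).append(i)
    return ground, off_ground
```

Filtering per block rather than globally is what lets the method tolerate sloped terrain: each block carries its own local ground elevation.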

2020 ◽  
Vol 12 (10) ◽  
pp. 1677 ◽  
Author(s):  
Ana Novo ◽  
Noelia Fariñas-Álvarez ◽  
Joaquin Martínez-Sánchez ◽  
Higinio González-Jorge ◽  
Henrique Lorenzo

The optimization of forest management in the surroundings of roads is a necessary task in terms of wildfire prevention and the mitigation of fire effects. One of the reasons a forest fire spreads is the presence of contiguous flammable material, both horizontally and vertically; thus, vegetation management becomes essential in preventive actions. This work presents a methodology to detect the continuity of vegetation based on aerial Light Detection and Ranging (LiDAR) point clouds, in combination with point cloud processing techniques. Horizontal continuity is determined by calculating the Cover Canopy Fraction (CCF). The results show 50% shrub presence and 33% tree presence in the selected case study, with an error of 5.71%. Regarding vertical continuity, a forest structure composed of a single stratum represents 81% of the zone. In addition, the vegetation located in areas around the roads was mapped, taking into consideration the distances established in the applicable law. Analyses show that risky areas range from 0.12 ha in a 2 m buffer to 0.48 ha in a 10 m buffer, representing 2.4% and 9.5% of the total study area, respectively.
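The Cover Canopy Fraction computation is not spelled out in the abstract; one common way to estimate it from a height-normalised point cloud is the share of occupied grid cells containing at least one return above a canopy height threshold. The cell size and threshold below are illustrative assumptions, not values from the paper:

```python
import math

def cover_canopy_fraction(points, cell_size=1.0, canopy_height=0.5):
    """Cover Canopy Fraction over a gridded area: fraction of occupied grid
    cells that contain at least one return at or above canopy_height.
    Point heights are assumed to be normalised (above-ground)."""
    occupied, canopy = set(), set()
    for x, y, h in points:
        cell = (math.floor(x / cell_size), math.floor(y / cell_size))
        occupied.add(cell)
        if h >= canopy_height:
            canopy.add(cell)
    return len(canopy) / len(occupied) if occupied else 0.0
```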


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Ruizhen Gao ◽  
Xiaohui Li ◽  
Jingjun Zhang

With the emergence of new intelligent sensing technologies such as 3D scanners and stereo vision, high-quality point clouds have become convenient and inexpensive to acquire, and research on 3D object recognition based on point clouds has received widespread attention. Point clouds are an important type of geometric data structure. Because of their irregular format, many researchers convert the data into regular three-dimensional voxel grids or image collections; however, this can make the data unnecessarily voluminous and introduce problems of its own. In this paper, we consider the problem of recognizing objects in realistic scenes. We first use a Euclidean distance clustering method to segment the objects in a scene, and then use a deep learning network structure that directly extracts features from the point cloud data to recognize them. In experiments, the network achieves an accuracy of 98.8% on the training set and 89.7% on the test set. The experimental results show that the proposed network structure can accurately identify and classify point cloud objects in realistic scenes and maintains its accuracy when the number of points is small, demonstrating strong robustness.
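The Euclidean distance clustering used in the segmentation step groups points that are linked by chains of close neighbours. A brute-force sketch (the distance threshold is an illustrative assumption; real pipelines use a k-d tree for the neighbour search) could be:

```python
def euclidean_cluster(points, radius=0.5, min_size=1):
    """Cluster 3D points: two points share a cluster when connected by a
    chain of neighbours closer than `radius`. O(n^2) neighbour search,
    for illustration only; returns lists of point indices."""
    n = len(points)
    r2 = radius * radius
    visited = [False] * n
    clusters = []
    for seed in range(n):
        if visited[seed]:
            continue
        queue, cluster = [seed], []
        visited[seed] = True
        while queue:
            i = queue.pop()
            cluster.append(i)
            xi, yi, zi = points[i]
            for j in range(n):
                if not visited[j]:
                    xj, yj, zj = points[j]
                    if (xi - xj)**2 + (yi - yj)**2 + (zi - zj)**2 <= r2:
                        visited[j] = True
                        queue.append(j)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters
```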


Author(s):  
B. Xiong ◽  
S. Oude Elberink ◽  
G. Vosselman

Multi-View Stereo (MVS) technology has improved significantly in the last decade, providing much denser and more accurate point clouds than before. The MVS point cloud has become valuable data for modelling LOD2 buildings. However, it is still not accurate enough to replace the lidar point cloud: its relatively high level of noise prevents the accurate interpretation of roof faces, e.g. a single planar roof face has an uneven surface of points and is therefore segmented into many parts. The derived roof topology graphs are quite erroneous and cannot be used to model the buildings with current methods based on roof topology graphs. We propose a parameter-free algorithm to robustly and precisely derive roof structures and building models. The points connecting roof segments are searched and grouped into structure points and structure boundaries, representing the roof corners and boundaries. Their geometries are computed from the plane equations of their attached roof segments. Where data are available, the algorithm guarantees complete building structures in noisy point clouds while achieving globally optimized models. Experiments show that, compared to roof topology graph based methods, the novel algorithm achieves consistent quality for both lidar and photogrammetric point clouds. Moreover, the new method is fully automatic and is a good alternative to model-driven methods when processing time is important.
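Computing a structure point from the plane equations of its attached roof segments amounts to intersecting three planes. A self-contained sketch using Cramer's rule (the abstract does not specify the solver; any 3x3 linear solve works):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def plane_intersection(p1, p2, p3):
    """Roof corner (structure point) shared by three roof planes, each given
    as (a, b, c, d) with ax + by + cz + d = 0, solved via Cramer's rule.
    Returns None when the planes have no unique intersection point."""
    A = [[p[0], p[1], p[2]] for p in (p1, p2, p3)]
    rhs = [-p[3] for p in (p1, p2, p3)]
    d = det3(A)
    if abs(d) < 1e-12:
        return None  # parallel or degenerate configuration
    coords = []
    for c in range(3):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][c] = rhs[r]  # replace column c with the right-hand side
        coords.append(det3(m) / d)
    return tuple(coords)
```

Near-parallel roof planes make `d` small and the corner unstable, which is one reason the noisy MVS topology graphs are hard to use directly.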


Author(s):  
Shenman Zhang ◽  
Pengjie Tao

Recent advances in open data initiatives give us free access to a vast amount of open LiDAR data in many cities. However, most of these open LiDAR data over cities are acquired by airborne scanning, where the points on façades are sparse or even completely missing due to the viewpoint and object occlusions in the urban environment. Integrating other sources of data, such as ground images, to complete the missing parts is an effective and practical solution. This paper presents an approach for improving open LiDAR data coverage on building façades by using point clouds generated from ground images. A coarse-to-fine strategy is proposed to fuse these two different sources of data. Firstly, the façade point cloud generated from terrestrial images is initially geolocated by matching the SfM camera positions to their GPS meta-information. Next, an improved Coherent Point Drift algorithm with normal consistency is proposed to accurately align building façades to the open LiDAR data. The significance of the work resides in the use of 2D overlapping points on the outlines of buildings, instead of the limited 3D overlap between the two point clouds, and in the achievement of reliable and precise registration under possibly incomplete coverage and ambiguous correspondence. Experiments show that the proposed approach can significantly improve the façade details of buildings in open LiDAR data and improve registration accuracy from up to 10 meters to less than half a meter compared to classic registration methods.
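The initial geolocation step pairs SfM camera positions with their GPS metadata. In the simplest case, assuming scale and orientation have already been resolved (the paper's fine alignment uses the Coherent Point Drift variant, which this sketch does not attempt), the coarse step reduces to a least-squares translation:

```python
def coarse_geolocate(sfm_centers, gps_positions):
    """Least-squares translation mapping SfM camera centres onto their GPS
    positions (one-to-one matched lists of (x, y, z) tuples). This is only
    the coarse step; scale/rotation resolution and CPD refinement omitted."""
    n = len(sfm_centers)
    return tuple(
        sum(g[k] - s[k] for s, g in zip(sfm_centers, gps_positions)) / n
        for k in range(3))

def apply_translation(points, t):
    """Shift every point of a cloud by translation vector t."""
    return [(x + t[0], y + t[1], z + t[2]) for x, y, z in points]
```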


2020 ◽  
Author(s):  
Joanna Stanisz ◽  
Konrad Lis ◽  
Tomasz Kryjak ◽  
Marek Gorgon

In this paper we present our research on the optimisation of a deep neural network for 3D object detection in a point cloud. Techniques like quantisation and pruning, available in the Brevitas and PyTorch tools, were used. We performed the experiments for the PointPillars network, which offers a reasonable compromise between detection accuracy and computational complexity. The aim of this work was to propose a variant of the network which we will ultimately implement in an FPGA device. This will allow for real-time LiDAR data processing with low energy consumption. The obtained results indicate that even a significant quantisation, from 32-bit floating point to 2-bit integers in the main part of the algorithm, results in only a 5%-9% decrease in detection accuracy, while allowing for an almost 16-fold reduction in model size.
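The 16-fold size reduction follows directly from the bit widths: 32-bit floats replaced by 2-bit integers. A toy illustration of uniform symmetric quantisation (not the Brevitas implementation, which handles this inside the network layers) shows the arithmetic:

```python
def quantise_symmetric(weights, bits=2):
    """Uniform symmetric quantisation: map float weights onto signed
    integer levels representable in `bits` bits. For bits=2 the levels
    are {-2, -1, 0, 1}; storage shrinks by a factor of 32 / bits."""
    q_max = 2 ** (bits - 1) - 1
    peak = max(abs(w) for w in weights)
    if peak == 0:
        return [0] * len(weights), 1.0
    scale = peak / q_max
    q = [max(-q_max - 1, min(q_max, round(w / scale))) for w in weights]
    return q, scale

def dequantise(q, scale):
    """Recover approximate float weights from integer levels."""
    return [v * scale for v in q]
```

The accuracy loss reported in the paper comes from the rounding error visible in `dequantise`: with only four levels per weight, most values are perturbed.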


Author(s):  
Shenman Zhang ◽  
Jie Shan ◽  
Zhichao Zhang ◽  
Jixing Yan ◽  
Yaolin Hou

A complete building model reconstruction needs data collected from both air and ground. The former often has sparse coverage on building façades, while the latter usually cannot observe the building rooftops. To address the missing-data issues in building reconstruction from a single data source, we describe an approach for complete building reconstruction that integrates airborne LiDAR data and ground smartphone imagery. First, by taking advantage of the GPS and digital compass information embedded in the image metadata of smartphones, we are able to find the airborne LiDAR point clouds for the buildings appearing in the images. In the next step, Structure-from-Motion and dense multi-view stereo algorithms are applied to generate a building point cloud from multiple ground images. The third step extracts building outlines from the LiDAR point cloud and the ground image point cloud, respectively. An automated correspondence between these two sets of building outlines allows us to achieve a precise registration and combination of the two point clouds, which ultimately results in a complete, full-resolution building model. The developed approach overcomes the problem of sparse points on building façades in airborne LiDAR and the lack of rooftops in ground images, such that the merits of both datasets are utilized.
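Once outline corners from the two sources have been matched, registration in the horizontal plane reduces to a least-squares 2D rigid transform. A minimal closed-form sketch (the paper's correspondence search and any scale estimation are omitted; this only covers the rotation-plus-translation fit):

```python
import math

def align_outlines_2d(src, dst):
    """Least-squares rigid transform (rotation theta, translation t) mapping
    matched 2D outline corners `src` onto `dst`. Closed-form 2D solution:
    centre both sets, then theta = atan2(sum of cross terms, sum of dots)."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        num += xs * yd - ys * xd   # cross terms -> sine component
        den += xs * xd + ys * yd   # dot terms   -> cosine component
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)
```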


2020 ◽  
Vol 9 (7) ◽  
pp. 450
Author(s):  
Zhen Ye ◽  
Yusheng Xu ◽  
Rong Huang ◽  
Xiaohua Tong ◽  
Xin Li ◽  
...  

The semantic labeling of urban areas is an essential but challenging task for a wide variety of applications such as mapping, navigation, and monitoring. The rapid advance in Light Detection and Ranging (LiDAR) systems provides this task with a possible solution using 3D point clouds, which are accessible, affordable, accurate, and applicable. Among all types of platforms, the airborne platform with LiDAR can serve as an efficient and effective tool for large-scale 3D mapping in urban areas. Against this background, a large number of algorithms and methods have been developed to fully explore the potential of 3D point clouds. However, the creation of publicly accessible large-scale annotated datasets, which are critical for assessing the performance of the developed algorithms and methods, is still at an early stage. In this work, we present a large-scale aerial LiDAR point cloud dataset acquired in a highly dense and complex urban area for the evaluation of semantic labeling methods. This dataset covers an urban area with highly dense buildings of approximately 1 km² and includes more than three million points with five classes of objects labeled. Moreover, experiments are carried out with the results from several baseline methods, demonstrating the feasibility and capability of the dataset serving as a benchmark for assessing semantic labeling methods.


Atmosphere ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 738
Author(s):  
Karl Montalban ◽  
Christophe Reymann ◽  
Dinesh Atchuthan ◽  
Paul-Edouard Dupouy ◽  
Nicolas Riviere ◽  
...  

Light Detection and Ranging (lidar) sensors are key to autonomous driving, but their data are severely impacted by weather events (rain, fog, snow). To increase the safety and availability of self-driving vehicles, an analysis of the phenomena at stake and their consequences is necessary. This paper presents experiments performed in a climatic chamber with lidars of different technologies (spinning, Risley prisms, micro-motion and MEMS), which are compared in various artificial rain and fog conditions. A specific target with calibrated reflectance is used for a first quantitative analysis. We observe different results depending on the sensors, and unexpected behaviour is seen in the analysis with artificial rain, where higher rain rates do not necessarily mean greater degradation of the lidar data.


Author(s):  
H. Tran ◽  
K. Khoshelham

Automated reconstruction of 3D interior models has recently been a topic of intensive research due to its wide range of applications in Architecture, Engineering, and Construction. However, generation of 3D models from LiDAR and/or RGB-D data is challenged not only by the complexity of building geometries, but also by the presence of clutter and the inevitable defects of the input data. In this paper, we propose a stochastic approach for automatic reconstruction of 3D models of interior spaces from point clouds, which is applicable to both Manhattan and non-Manhattan world buildings. The building interior is first partitioned into a set of 3D shapes as an arrangement of permanent structures. An optimization process is then applied to search for the most probable model as the optimal configuration of the 3D shapes, using reversible jump Markov Chain Monte Carlo (rjMCMC) sampling with the Metropolis-Hastings algorithm. This optimization is based not only on the input data but also takes into account intermediate stages of the model during the modelling process. Consequently, it enhances the robustness of the proposed approach to inaccuracy and incompleteness of the point cloud. The feasibility of the proposed approach is evaluated on a synthetic dataset and an ISPRS benchmark dataset.
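The Metropolis-Hastings acceptance rule at the core of the sampler is easiest to see on a fixed-dimension toy problem. This sketch samples a 1D target density; the paper's rjMCMC additionally proposes dimension-changing jumps over configurations of 3D shapes, which are not modelled here:

```python
import math
import random

def metropolis_hastings(log_prob, init, steps=1000, step_size=0.5, seed=42):
    """Metropolis-Hastings over a 1D state with a symmetric Gaussian
    proposal: accept a candidate with probability min(1, p(cand)/p(x)).
    With a symmetric proposal the Hastings correction term cancels."""
    rng = random.Random(seed)
    x, lp = init, log_prob(init)
    samples = []
    for _ in range(steps):
        cand = x + rng.gauss(0.0, step_size)
        lp_cand = log_prob(cand)
        if lp_cand >= lp or rng.random() < math.exp(lp_cand - lp):
            x, lp = cand, lp_cand  # accept; otherwise keep current state
        samples.append(x)
    return samples
```

In the paper's setting `log_prob` scores a whole interior-model configuration against the point cloud and the intermediate modelling state, rather than a scalar.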

