STREET TREE INFORMATION EXTRACTION AND DYNAMICS ANALYSIS FROM MOBILE LIDAR POINT CLOUD

Author(s): Y. Q. Li, H. Y. Liu, Y. K. Liu, S. B. Zhao, P. P. Li, ...

Abstract. Street trees are common features and important assets in urban scenes. They are large in number and constantly changing, and are therefore difficult to monitor on a regular basis. A method for the automatic extraction and dynamic analysis of street trees based on mobile LiDAR data is proposed. First, ground and low objects are filtered from the point clouds. Then, based on a geometric tree model and semantic information, each tree point cloud is extracted, and geometrical parameters such as location, trunk diameter, trunk structure line, tree height, crown width, and crown volume of each tree are obtained. A dynamic analysis combined with the growing characteristics of trees is conducted to compare and analyse the street trees from different epochs, in order to understand whether the trees have grown or been pruned, replanted, or displaced. The proposed algorithm was tested on three epochs of mobile LiDAR data, obtained in 2010, 2016 and 2018, respectively. Experimental results showed that the proposed method was able to accurately detect trees and extract tree parameters for detailed dynamics analysis.
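As an illustration of the per-tree parameters mentioned above (location, trunk diameter, tree height, crown width), the sketch below derives them from an already segmented tree point cloud with NumPy. The 1.3 m breast-height slice, the crown threshold and the least-squares circle fit are illustrative choices, not the authors' geometric tree model.

```python
import numpy as np

def tree_parameters(tree_pts, ground_z):
    """Derive simple geometric parameters from one segmented street-tree point cloud.

    tree_pts : (N, 3) array of x, y, z coordinates belonging to a single tree.
    ground_z : local ground elevation under the tree (from the filtered ground points).
    """
    heights = tree_pts[:, 2] - ground_z
    tree_height = heights.max()

    # Crown width: horizontal extent of the upper half of the tree.
    crown = tree_pts[heights > 0.5 * tree_height]
    crown_width = max(np.ptp(crown[:, 0]), np.ptp(crown[:, 1]))

    # Trunk diameter: least-squares circle fit to a thin slice around breast height (1.3 m).
    slice_xy = tree_pts[(heights > 1.2) & (heights < 1.4)][:, :2]
    A = np.c_[2.0 * slice_xy, np.ones(len(slice_xy))]
    b = (slice_xy ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)

    return {"location": (cx, cy), "trunk_diameter": 2.0 * radius,
            "tree_height": tree_height, "crown_width": crown_width}
```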

2020, Vol 7 (1)
Author(s): Wuming Zhang, Shangshu Cai, Xinlian Liang, Jie Shao, Ronghai Hu, ...

Abstract. Background: The universal occurrence of randomly distributed dark holes (i.e., data pits appearing within the tree crown) in LiDAR-derived canopy height models (CHMs) negatively affects the accuracy of extracted forest inventory parameters. Methods: We develop an algorithm based on cloth simulation for constructing a pit-free CHM. Results: The proposed algorithm effectively fills data pits of various sizes whilst preserving canopy details. Our pit-free CHMs, derived from point clouds with different proportions of data pits, are remarkably better than those constructed using other algorithms, as evidenced by the lowest average root mean square error (0.4981 m) between the reference CHMs and the constructed pit-free CHMs. Moreover, our pit-free CHMs show the best overall performance in maximum tree height estimation (average bias = 0.9674 m). Conclusion: The proposed algorithm can be adopted when working with LiDAR data of varying quality and shows high potential for forestry applications.
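The evaluation reported above (RMSE against reference CHMs, bias of the maximum tree height) can be written down directly; the sketch below assumes both CHMs are raster arrays in metres and does not reproduce the cloth-simulation pit filling itself.

```python
import numpy as np

def chm_accuracy(reference_chm, pitfree_chm):
    """Compare a constructed pit-free CHM against a reference CHM (2D arrays in metres)."""
    valid = ~np.isnan(reference_chm) & ~np.isnan(pitfree_chm)
    diff = pitfree_chm[valid] - reference_chm[valid]
    rmse = np.sqrt(np.mean(diff ** 2))       # per-plot RMSE; the paper reports a 0.4981 m average
    max_height_bias = np.nanmax(pitfree_chm) - np.nanmax(reference_chm)  # bias of the tallest value
    return rmse, max_height_bias
```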


Forests, 2021, Vol 12 (8), pp. 1020
Author(s): Yanqi Dong, Guangpeng Fan, Zhiwu Zhou, Jincheng Liu, Yongguo Wang, ...

The quantitative structure model (QSM) describes the branch geometry and attributes of a tree. AdQSM is a new, accurate, and detailed tree QSM. In this paper, an automatic modeling method based on AdQSM is developed and a low-cost technical scheme for tree structure modeling is provided, so that AdQSM can be freely used by more people. First, we used two digital cameras to collect two-dimensional (2D) photos of trees, generated a three-dimensional (3D) point cloud of the plot, and segmented individual trees from the plot point cloud. Then the new QSM, AdQSM, was used to construct tree models from the point clouds of 44 trees. Finally, to verify the effectiveness of our method, the diameter at breast height (DBH), tree height, and trunk volume were derived from the reconstructed tree models. These parameters extracted from AdQSM were compared with reference values from a forest inventory. For DBH, the relative bias (rBias), root mean square error (RMSE), and relative RMSE (rRMSE) were 4.26%, 1.93 cm, and 6.60%; for tree height they were −10.86%, 1.67 m, and 12.34%. The coefficients of determination (R²) between the AdQSM estimates and the reference values were 0.94 for DBH and 0.86 for tree height. We used the trunk volume calculated by an allometric equation as the reference value to test the accuracy of AdQSM. For the trunk volume estimated from AdQSM, the bias was 0.07066 m³, rBias was 18.73%, RMSE was 0.12369 m³, and rRMSE was 32.78%. To better evaluate the accuracy of QSM-based trunk volume reconstruction, we compared AdQSM and TreeQSM on the same dataset. For the trunk volume estimated from TreeQSM, the bias was −0.05071 m³, rBias was −13.44%, RMSE was 0.13267 m³, and rRMSE was 35.16%. At the 95% confidence level, the concordance correlation coefficient between the AdQSM trunk volume estimates and the reference values (CCC = 0.77) was greater than that of TreeQSM (CCC = 0.60). The significance of this research is as follows: (1) an automatic modeling method based on AdQSM is developed, which expands the application scope of AdQSM; (2) low-cost photogrammetric point clouds are provided as input data for AdQSM; (3) the potential of AdQSM to reconstruct trees from terrestrial photogrammetric point clouds of forests is explored.
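The accuracy figures above follow the usual definitions of bias, rBias, RMSE, rRMSE and Lin's concordance correlation coefficient; a minimal sketch of these statistics (standard formulas, not the authors' code) is:

```python
import numpy as np

def accuracy_metrics(estimated, reference):
    """Bias, relative bias, RMSE, relative RMSE and Lin's CCC for paired estimates."""
    est = np.asarray(estimated, float)
    ref = np.asarray(reference, float)
    err = est - ref
    bias = err.mean()
    rbias = 100.0 * bias / ref.mean()
    rmse = np.sqrt((err ** 2).mean())
    rrmse = 100.0 * rmse / ref.mean()
    # Lin's concordance correlation coefficient: agreement with the 1:1 line.
    cov = np.cov(est, ref, ddof=0)[0, 1]
    ccc = 2.0 * cov / (est.var() + ref.var() + (est.mean() - ref.mean()) ** 2)
    return {"bias": bias, "rBias_%": rbias, "RMSE": rmse, "rRMSE_%": rrmse, "CCC": ccc}
```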


Author(s): Shenman Zhang, Pengjie Tao

Recent advances in open data initiatives give us free access to a vast amount of open LiDAR data in many cities. However, most of these open LiDAR data over cities are acquired by airborne scanning, where the points on façades are sparse or even completely missing due to the viewpoint and object occlusions in the urban environment. Integrating other sources of data, such as ground images, to complete the missing parts is an effective and practical solution. This paper presents an approach for improving the coverage of open LiDAR data on building façades by using point clouds generated from ground images. A coarse-to-fine strategy is proposed to fuse these two different sources of data. Firstly, the façade point cloud generated from terrestrial images is initially geolocated by matching the SfM camera positions to their GPS meta-information. Next, an improved Coherent Point Drift algorithm with normal consistency is proposed to accurately align the building façades to the open LiDAR data. The significance of the work resides in the use of 2D overlapping points on the outlines of buildings, instead of the limited 3D overlap between the two point clouds, and in achieving a reliable and precise registration under possibly incomplete coverage and ambiguous correspondences. Experiments show that the proposed approach can significantly improve the façade details of buildings in open LiDAR data, reducing registration error from up to 10 meters to less than half a meter compared with classic registration methods.
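For the coarse geolocation step, matching SfM camera centres to their GPS positions amounts to estimating a similarity transform between two small 3D point sets; one standard closed-form solution (Umeyama alignment) is sketched below. The improved Coherent Point Drift with normal consistency used for the fine alignment is the paper's contribution and is not reproduced here.

```python
import numpy as np

def similarity_from_cameras(sfm_centres, gps_positions):
    """Closed-form (Umeyama) estimate of the scale, rotation and translation mapping
    SfM camera centres onto their GPS positions: y ≈ scale * R @ x + t."""
    X = np.asarray(sfm_centres, float)    # (N, 3) camera centres in the SfM frame
    Y = np.asarray(gps_positions, float)  # (N, 3) corresponding GPS coordinates (projected)
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    U, d, Vt = np.linalg.svd(Yc.T @ Xc / len(X))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # enforce a proper rotation (det R = +1)
        S[2, 2] = -1.0
    R = U @ S @ Vt
    scale = np.trace(np.diag(d) @ S) / Xc.var(axis=0).sum()
    t = my - scale * R @ mx
    return scale, R, t
```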


Author(s): Y. Yu, J. Li, H. Guan, D. Zai, C. Wang

This paper presents an automated algorithm for extracting 3D trees directly from 3D mobile light detection and ranging (LiDAR) data. To reduce both computational and spatial complexities, ground points are first filtered out from the raw 3D point cloud via block-based elevation filtering. Off-ground points are then grouped into clusters representing individual objects through Euclidean distance clustering and voxel-based normalized cut segmentation. Finally, a model-driven method is proposed to achieve the extraction of 3D trees based on a pairwise 3D shape descriptor. The proposed algorithm is tested using a set of mobile LiDAR point clouds acquired by a RIEGL VMX-450 system. The results demonstrate the feasibility and effectiveness of the proposed algorithm.
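A rough analogue of the first two stages (block-based elevation filtering, then Euclidean clustering of the off-ground points) could look like the sketch below; the cell size, height threshold and clustering radius are illustrative, and the voxel-based normalized cut segmentation and the pairwise 3D shape descriptor are not reproduced.

```python
import numpy as np
import open3d as o3d

def filter_ground_and_cluster(points, cell=1.0, height_thr=0.3, eps=0.5, min_points=30):
    """Drop near-ground points with a block-based elevation filter, then group the
    off-ground points into candidate objects by density-based Euclidean clustering."""
    # Block-based elevation filter: keep points clearly above the lowest point of their cell.
    cells = [tuple(c) for c in np.floor(points[:, :2] / cell).astype(int)]
    cell_min = {}
    for c, z in zip(cells, points[:, 2]):
        cell_min[c] = min(cell_min.get(c, np.inf), z)
    mask = np.array([z - cell_min[c] > height_thr for c, z in zip(cells, points[:, 2])])
    off_ground = points[mask]

    # Euclidean clustering of the off-ground points (DBSCAN used here as a stand-in).
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(off_ground)
    labels = np.asarray(pcd.cluster_dbscan(eps=eps, min_points=min_points))
    return [off_ground[labels == k] for k in range(labels.max() + 1)]
```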


2020
Author(s): Joanna Stanisz, Konrad Lis, Tomasz Kryjak, Marek Gorgon

In this paper we present our research on the optimisation of a deep neural network for 3D object detection in a point cloud. Techniques such as quantisation and pruning, available in the Brevitas and PyTorch tools, were used. We performed the experiments for the PointPillars network, which offers a reasonable compromise between detection accuracy and computational complexity. The aim of this work was to propose a variant of the network which we will ultimately implement on an FPGA device. This will allow for real-time LiDAR data processing with low energy consumption. The obtained results indicate that even a significant quantisation, from 32-bit floating point to 2-bit integers in the main part of the algorithm, results in a 5%-9% decrease in detection accuracy while allowing an almost 16-fold reduction in the size of the model.
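As a hedged illustration of what such quantisation looks like in code, the block below re-declares a convolutional layer of the kind used in the PointPillars backbone with Brevitas quantised modules; the 2-bit widths match the extreme case mentioned above, but the channel sizes and exact layer structure are assumptions, not the authors' configuration.

```python
import torch.nn as nn
from brevitas.nn import QuantConv2d, QuantReLU

class QuantConvBlock(nn.Module):
    """Conv + BatchNorm + ReLU block with quantised weights and activations."""
    def __init__(self, in_ch, out_ch, weight_bits=2, act_bits=2):
        super().__init__()
        self.conv = QuantConv2d(in_ch, out_ch, kernel_size=3, padding=1,
                                bias=False, weight_bit_width=weight_bits)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = QuantReLU(bit_width=act_bits)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

# Example: one backbone stage with 64 input and 128 output channels (illustrative sizes).
block = QuantConvBlock(64, 128)
```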


Author(s): Shenman Zhang, Jie Shan, Zhichao Zhang, Jixing Yan, Yaolin Hou

A complete building model reconstruction needs data collected from both the air and the ground. The former often has sparse coverage on building façades, while the latter usually cannot observe the building rooftops. To address the missing-data issues of building reconstruction from a single data source, we describe an approach for complete building reconstruction that integrates airborne LiDAR data and ground smartphone imagery. First, by taking advantage of the GPS and digital compass information embedded in the image metadata of smartphones, we are able to find the airborne LiDAR point clouds for the buildings that appear in the images. Next, Structure-from-Motion and dense multi-view stereo algorithms are applied to generate a building point cloud from multiple ground images. The third step extracts building outlines from the LiDAR point cloud and the ground-image point cloud respectively. An automated correspondence between these two sets of building outlines allows us to achieve a precise registration and combination of the two point clouds, which ultimately results in a complete, full-resolution building model. The developed approach overcomes the problem of sparse points on building façades in airborne LiDAR and the lack of rooftops in ground images, such that the merits of both datasets are utilized.
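Once corresponding corners of the two building outlines have been matched, the registration of the image point cloud to the LiDAR point cloud reduces, in the horizontal plane, to a small rigid-transform estimation; a minimal 2D Kabsch-style sketch is given below. The automated outline correspondence itself, which is the key step of the approach, is not reproduced.

```python
import numpy as np

def align_outlines_2d(src_corners, dst_corners):
    """Least-squares 2D rotation + translation mapping matched outline corners of the
    image-derived building onto the LiDAR-derived outline: dst ≈ R @ src + t."""
    src = np.asarray(src_corners, float)   # (N, 2) corners from the ground-image point cloud
    dst = np.asarray(dst_corners, float)   # (N, 2) matched corners from the LiDAR outline
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 2x2 cross-covariance of the centred corners
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid a reflection
        Vt[1, :] *= -1.0
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```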


Author(s): M Torabi, SM Mousavi G, D Younesian

In this paper, a flexible laser beam profiler is proposed to easily measure the profile of a train wheel for railway inspection. It requires only two laser beams (together and in parallel) to obtain two three-dimensional point clouds based on the laser triangulation principle. Either the laser beam profiler or the wheel can be moved freely; the motion need not be known. The wheel profile is obtained in two steps. First, the wheel axis position and orientation are found by minimizing the distance between one of the point clouds and the other, translated, point cloud, where the translation is defined as rotating each point of the point cloud around the wheel axis until it lies on the laser plane of the other point cloud. In the second step, the wheel profile is extracted by selecting one of the point clouds, rotating it about the wheel axis, and finding the intersection of the rotated points with a perpendicular plane, defined as any arbitrary plane that passes through the wheel axis. This method is particularly useful for obtaining geometrical parameters of a wheel such as flange height, flange slope and flange thickness. To commission the proposed method, a prototype system was designed and manufactured. The performance of the system, evaluated in different circumstances, shows a measurement error of up to 2%. Compared with classical methods utilizing a caliper, or those which use expensive equipment or additional parts such as reference guides, the proposed method is easy to use and flexible. In addition, a novel calibration method is used to calibrate the system accurately and freely.
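The basic operation used in both steps, sweeping a measured point around the wheel axis, is a rotation about an arbitrary axis; a minimal sketch using Rodrigues' rotation formula (a generic formulation, not the authors' implementation) is:

```python
import numpy as np

def rotate_about_axis(points, axis_point, axis_dir, angle):
    """Rotate an (N, 3) array of points by `angle` radians about the axis that passes
    through `axis_point` with direction `axis_dir` (Rodrigues' rotation formula)."""
    k = np.asarray(axis_dir, float)
    k = k / np.linalg.norm(k)                      # unit axis direction
    p = np.asarray(points, float) - axis_point     # work relative to a point on the axis
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    rotated = (p * cos_a
               + np.cross(k, p) * sin_a
               + np.outer(p @ k, k) * (1.0 - cos_a))
    return rotated + axis_point
```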


2020, Vol 12 (18), pp. 3043
Author(s): Juan M. Jurado, Luís Pádua, Francisco R. Feito, Joaquim J. Sousa

The optimisation of vineyard management requires efficient and automated methods able to identify individual plants. In the last few years, Unmanned Aerial Vehicles (UAVs) have become one of the main sources of remote sensing information for Precision Viticulture (PV) applications. In fact, high-resolution UAV-based imagery offers a unique capability for modelling plant structure, making it possible to recognise significant geometrical features in photogrammetric point clouds. Despite the proliferation of innovative technologies in viticulture, the identification of individual grapevines still relies on image-based segmentation techniques, in which grapevine and non-grapevine features are separated and individual plants are estimated, usually by assuming a fixed distance between them. In this study, an automatic method for grapevine trunk detection using 3D point cloud data is presented. The proposed method focuses on the recognition of key geometrical parameters to verify the presence of every plant in the 3D model. The method was tested in different commercial vineyards and, to push it to its limits, a vineyard characterised by several missing plants along the vine rows, irregular distances between plants, and trunks occluded by dense vegetation in some areas was also used. The proposed method represents a clear departure from the state of the art and is able to identify individual trunks, posts and missing plants based on the interpretation and analysis of a 3D point cloud. Moreover, a validation process was carried out, allowing us to conclude that the method performs well, especially when applied to 3D point clouds generated in phases in which the leaves are not yet very dense (January to May). However, if correct flight parametrizations are set, the method remains effective throughout the entire vegetative cycle.
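The abstract does not detail the geometrical criteria, but one simple way to illustrate trunk (and post) candidate detection in a vineyard point cloud is to cluster a thin slice of points just above the ground; the thresholds below are assumptions for illustration only, not the authors' parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def trunk_candidates(points, ground_z, slice_min=0.1, slice_max=0.6, eps=0.05, min_pts=20):
    """Take a thin horizontal slice of the point cloud just above the ground, cluster it
    in the XY plane, and return the cluster centres as candidate trunk/post positions."""
    h = points[:, 2] - ground_z
    band = points[(h > slice_min) & (h < slice_max)]
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(band[:, :2])
    centres = [band[labels == k, :2].mean(axis=0) for k in set(labels) if k != -1]
    return np.array(centres)
```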


2020, Vol 3 (1), pp. 21
Author(s): Xiuyun Lin, Yulin Gong, Yuan Sun, Jiawen Jiang, Yanli Zhang, ...

This study aims to identify characteristic parameters of tree trunks in order to establish a volume model and to analyse volume dynamics based on terrestrial laser scanning (TLS). We collected three phases of data over 5 years from a planted Liriodendron chinense forest. The upper trunk diameters and tree height data were obtained using a multi-station scanning method. A novel hierarchical TLS point cloud feature, the height cumulative percentage (Hz%), was designed. The shape of the upper trunk extracted from the point cloud was equivalent to that of the analytical tree, with inflection points at 25% and 50% of the tree height, and the dynamic volume change of the model established from the hierarchical features was highly correlated with the volume change extracted from the actual point cloud. The results show that the Hz% values provided by multi-station scanning are closely related to the characteristic stumpage parameters and can be used to invert the dynamic forest structure. The volume model established from the hierarchical point cloud parameters in this study can be used to monitor dynamic changes in forest volume and provides a new reference for applying TLS point clouds to the dynamic monitoring of forest resources.
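The abstract does not spell out how Hz% is computed; one plausible reading, used purely for illustration, is the cumulative percentage of points at or below each relative height level of the tree:

```python
import numpy as np

def height_cumulative_percentage(z, ground_z, levels=np.arange(0.05, 1.01, 0.05)):
    """For each relative height level (5%, 10%, ..., 100% of the tree height) return the
    cumulative percentage of TLS points lying at or below that level (assumed Hz% form)."""
    h = np.asarray(z, float) - ground_z          # point heights above ground
    tree_height = h.max()
    return {int(round(l * 100)): 100.0 * np.mean(h <= l * tree_height) for l in levels}
```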

