MULTI-SCALE BASED EXTRACTION OF VEGETATION FROM TERRESTRIAL LiDAR DATA FOR ASSESSING LOCAL LANDSCAPE

Author(s):  
T. Wakita ◽  
J. Susaki

In this study, we propose a method to accurately extract vegetation from terrestrial three-dimensional (3D) point clouds for estimating landscape indices in urban areas. Extracting vegetation in urban areas is challenging because the light returned by vegetation does not show patterns as clear as those of man-made objects, and because urban areas contain many other object types from which vegetation must be discriminated. The proposed method takes a multi-scale voxel approach to effectively extract different types of vegetation in complex urban areas. With two different voxel sizes, a process is repeated that calculates the eigenvalues of the planar surface using a set of points, classifies voxels using the approximate curvature of the voxel of interest derived from the eigenvalues, and examines the connectivity of the valid voxels. We applied the proposed method to two data sets measured in a residential area in Kyoto, Japan. The validation results were acceptable, with F-measures of approximately 95% and 92%. It was also demonstrated that several types of vegetation were successfully extracted by the proposed method, whereas occluded vegetation was omitted. We conclude that the proposed method is suitable for extracting vegetation in urban areas from terrestrial light detection and ranging (LiDAR) data. In the future, the proposed method will be applied to mobile LiDAR data, and its performance against lower-density point clouds will be examined.
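The voxel-classification step relies on an eigenvalue-based curvature measure. A minimal sketch of that cue, assuming the common "surface variation" definition (smallest covariance eigenvalue over the eigenvalue sum); the function name is illustrative, not the authors' code:

```python
import numpy as np

def approximate_curvature(points):
    """Surface variation of a voxel's point set: the smallest eigenvalue
    of the 3x3 covariance divided by the eigenvalue sum. Close to 0 for
    planar patches (walls, ground); larger for scattered returns (foliage)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    evals = np.linalg.eigvalsh(cov)          # ascending order
    total = evals.sum()
    return evals[0] / total if total > 0 else 0.0
```

Voxels whose curvature exceeds a threshold, and that form connected groups, would be retained as vegetation candidates in a pipeline of this kind.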

Author(s):  
Timo Hackel ◽  
Jan D. Wegner ◽  
Konrad Schindler

We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point’s (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
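Multi-scale neighborhood features of this kind are typically eigenvalue-based covariance features (linearity, planarity, scattering) computed at several neighborhood sizes and concatenated. A minimal sketch under that assumption, with illustrative names and a brute-force radius search in place of a proper spatial index:

```python
import numpy as np

def eigen_features(points, query, radius):
    """Linearity, planarity, scattering of the neighborhood of `query`
    within `radius`, from the eigenvalues of the local 3x3 covariance."""
    pts = np.asarray(points, dtype=float)
    nb = pts[np.linalg.norm(pts - query, axis=1) <= radius]
    if len(nb) < 3:
        return (0.0, 0.0, 0.0)
    c = nb - nb.mean(axis=0)
    l3, l2, l1 = np.linalg.eigvalsh(c.T @ c / len(nb))  # ascending order
    if l1 <= 0:
        return (0.0, 0.0, 0.0)
    return ((l1 - l2) / l1, (l2 - l3) / l1, l3 / l1)

def multiscale_features(points, query, radii=(0.5, 1.0, 2.0)):
    """Concatenate the eigen-features over several neighborhood sizes."""
    feats = []
    for r in radii:
        feats.extend(eigen_features(points, query, r))
    return feats
```

A real implementation would replace the brute-force search with a k-d tree or octree, which is where most of the speed gains the abstract mentions come from.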


Author(s):  
M. Zaboli ◽  
H. Rastiveis ◽  
A. Shams ◽  
B. Hosseiny ◽  
W. A. Sarasua

Abstract. Automated analysis of three-dimensional (3D) point clouds has become an important capability in Photogrammetry, Remote Sensing, Computer Vision, and Robotics. The aim of this paper is to compare classification algorithms tested on an urban-area point cloud acquired by a Mobile Terrestrial Laser Scanning (MTLS) system. The algorithms were tested based on local geometric and radiometric descriptors. In this study, local descriptors such as linearity, planarity, and intensity are initially extracted for each point by observing its neighboring points. These features are then fed to a classification algorithm to automatically label each point. Here, five powerful classification algorithms, including k-Nearest Neighbors (k-NN), Gaussian Naive Bayes (GNB), Support Vector Machine (SVM), Multilayer Perceptron (MLP) Neural Network, and Random Forest (RF), are tested. Eight semantic classes are considered for each method under equal conditions. The best overall accuracy of 90% was achieved with the RF algorithm. The results proved the reliability of the applied descriptors and the RF classifier for MTLS point cloud classification.
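Of the five classifiers compared, k-NN is the simplest to sketch on such per-point descriptor vectors. A minimal illustration (the descriptor values and labels below are invented for the example, not from the paper):

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, query, k=3):
    """Label a descriptor vector by majority vote among its k nearest
    training descriptors (Euclidean distance, brute force)."""
    d = np.linalg.norm(np.asarray(train_X, float) - np.asarray(query, float), axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Toy training set: [linearity, planarity, intensity] per point.
X = [[0.1, 0.9, 0.2], [0.15, 0.85, 0.25], [0.1, 0.8, 0.3],   # planar, dark
     [0.2, 0.1, 0.9], [0.25, 0.15, 0.85], [0.2, 0.2, 0.95]]  # scattered, bright
y = ["ground", "ground", "ground", "tree", "tree", "tree"]
```

The same feature matrix would feed any of the other four classifiers; the abstract's point is that the descriptors, not the classifier family, carry most of the discriminative power.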


2021 ◽  
Vol 13 (13) ◽  
pp. 2485
Author(s):  
Yi-Chun Lin ◽  
Raja Manish ◽  
Darcy Bullock ◽  
Ayman Habib

Maintenance of roadside ditches is important to avoid localized flooding and premature failure of pavements. Scheduling effective preventative maintenance requires a reasonably detailed mapping of the ditch profile to identify areas in need of excavation to remove long-term sediment accumulation. This study utilizes high-resolution, high-quality point clouds collected by mobile LiDAR mapping systems (MLMS) for mapping roadside ditches and performing hydrological analyses. The performance of alternative MLMS units, including an unmanned aerial vehicle, an unmanned ground vehicle, a portable backpack system along with its vehicle-mounted version, a medium-grade wheel-based system, and a high-grade wheel-based system, is evaluated. Point clouds from all the MLMS units are in agreement within the ±3 cm range for solid surfaces and ±7 cm range for vegetated areas along the vertical direction. The portable backpack system that could be carried by a surveyor or mounted on a vehicle is found to be the most cost-effective method for mapping roadside ditches, followed by the medium-grade wheel-based system. Furthermore, a framework for ditch line characterization is proposed and tested using datasets acquired by the medium-grade wheel-based and vehicle-mounted portable systems over a state highway. An existing ground-filtering approach—cloth simulation—is modified to handle variations in point density of mobile LiDAR data. Hydrological analyses, including flow direction and flow accumulation, are applied to extract the drainage network from the digital terrain model (DTM). Cross-sectional/longitudinal profiles of the ditch are automatically extracted from the LiDAR data and visualized in 3D point clouds and 2D images. The slope derived from the LiDAR data turned out to be very close to the highway cross slope design standards of 2% on driving lanes, 4% on shoulders, and a 6-by-1 slope for ditch lines.
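The flow-direction and flow-accumulation analysis on the DTM can be illustrated with the classic D8 scheme, in which each cell drains to its steepest-descent 8-neighbor. A minimal sketch on a raster grid (our own illustration, not the authors' implementation):

```python
import numpy as np

def d8_flow_accumulation(dtm):
    """D8 flow accumulation: each cell drains to its steepest-descent
    8-neighbor; accumulation counts the cells draining through each cell
    (itself included). Cells with no lower neighbor are pits/outlets."""
    h, w = dtm.shape
    acc = np.ones((h, w))
    # Process cells from highest to lowest so upstream totals are final
    # before they are passed downstream.
    order = sorted(((dtm[r, c], r, c) for r in range(h) for c in range(w)),
                   reverse=True)
    for z, r, c in order:
        best_drop, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < h and 0 <= cc < w:
                    # Drop per unit distance (diagonals are sqrt(2) away).
                    drop = (z - dtm[rr, cc]) / (dr * dr + dc * dc) ** 0.5
                    if drop > best_drop:
                        best_drop, target = drop, (rr, cc)
        if target is not None:
            acc[target] += acc[r, c]
    return acc
```

Cells with high accumulation trace the drainage network; thresholding that map is a common way to extract ditch lines from a DTM.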


2021 ◽  
Vol 13 (15) ◽  
pp. 3021
Author(s):  
Bufan Zhao ◽  
Xianghong Hua ◽  
Kegen Yu ◽  
Xiaoxing He ◽  
Weixing Xue ◽  
...  

Urban object segmentation and classification are critical data processing steps in scene understanding, intelligent vehicles and 3D high-precision maps. Semantic segmentation of 3D point clouds is the foundational step in object recognition. To identify intersecting objects and improve classification accuracy, this paper proposes a segment-based classification method for 3D point clouds. The method first divides points into multi-scale supervoxels and groups them via the proposed inverse node graph (IN-Graph) construction, which requires no prior information about the nodes; instead, it partitions supervoxels by judging the connection state of the edges between them. The method reaches the minimum global energy by graph cutting, obtains structural segments as completely as possible, and retains boundaries at the same time. Then, a random forest classifier is used for supervised classification. To deal with the mislabeling of scattered fragments, a higher-order CRF with small-label-cluster optimization is proposed to refine the classification results. Experiments were carried out on a mobile laser scanning (MLS) point dataset and a terrestrial laser scanning (TLS) point dataset, and the results show that overall accuracies of 97.57% and 96.39% were obtained on the two datasets. The boundaries of objects were retained well, and the method achieved good results in the classification of cars and motorcycles. Further experimental analyses verified the advantages of the proposed method and proved its practicability and versatility.
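The effect of the small-label-cluster refinement can be approximated by a much simpler rule: relabel same-label connected components below a size threshold to the majority label of their boundary neighbors. The sketch below implements that stand-in (it is not the paper's higher-order CRF; all names are ours):

```python
from collections import Counter, deque

def refine_small_clusters(labels, adjacency, min_size=3):
    """Relabel same-label connected components smaller than `min_size`
    to the majority label of their boundary neighbors. `adjacency` maps
    each node index to a list of neighbor indices."""
    labels = list(labels)
    seen = set()
    for start in range(len(labels)):
        if start in seen:
            continue
        # Flood-fill the same-label component containing `start`.
        comp, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in comp and labels[v] == labels[start]:
                    comp.add(v)
                    queue.append(v)
        seen |= comp
        if len(comp) < min_size:
            boundary = Counter(labels[v] for u in comp
                               for v in adjacency[u] if v not in comp)
            if boundary:
                new_label = boundary.most_common(1)[0][0]
                for u in comp:
                    labels[u] = new_label
    return labels
```

A single mislabeled fragment surrounded by consistent labels is absorbed, which is the kind of scattered-fragment error the refinement step targets.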


Author(s):  
P.M.B. Torres ◽  
P. J. S. Gonçalves ◽  
J.M.M. Martins

Purpose – The purpose of this paper is to present a robotic motion compensation system, using ultrasound images, to assist orthopedic surgery. The robotic system can compensate for femur movements during bone drilling procedures. Although it may have other applications, the system was designed for use in hip resurfacing (HR) prosthesis surgery to implant the initial guide tool. The system requires no fiducial markers implanted in the patient, using only non-invasive ultrasound images. Design/methodology/approach – The femur location in the operating room is obtained by processing ultrasound (US) and computed tomography (CT) images, obtained, respectively, in the intra-operative and pre-operative scenarios. During surgery, the bone position and orientation are obtained by registration of US and CT three-dimensional (3D) point clouds, using an optical measurement system and passive markers attached to the US probe and to the drill. The system description, image processing, calibration procedures and results from simulated and real experiments are presented to illustrate the system in operation. Findings – The robotic system can compensate for femur movements during bone drilling procedures. In most experiments, the update was always validated, with errors of 2 mm/4°. Originality/value – The navigation system is based entirely on information extracted from images obtained from CT pre-operatively and US intra-operatively. Contrary to current surgical systems, it does not use any type of implant in the bone to track the femur movements.
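Once correspondences between the US and CT point clouds are established, the rigid registration reduces to a closed-form least-squares rotation and translation (the classical Kabsch/Procrustes solution). A sketch of that closed-form step only; the full pipeline also requires correspondence search, which is omitted here, and all names are illustrative:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ src @ R.T + t,
    given corresponding 3D point sets, via SVD (Kabsch/Procrustes)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

In an intra-operative setting this solve would run repeatedly, feeding the updated femur pose to the robot's motion compensation loop.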


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how a structure reacts to any disturbance, while also aiding its visualization. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the crucial ways by which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering, a technique commonly used in image processing, to the point cloud data to enhance its accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the outputs from a linear variable differential transformer sensor, which was mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
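Bilateral filtering weights each neighbor by both spatial proximity and value similarity, so noise is smoothed while sharp changes (such as a deflection step) are preserved. A minimal 1-D sketch as it might apply to a profile of height values; the parameter values and function name are illustrative, not from the paper:

```python
import numpy as np

def bilateral_filter_1d(z, sigma_s=2.0, sigma_r=1.0, radius=3):
    """Edge-preserving smoothing of a 1-D signal: each sample becomes a
    weighted mean of its neighbors, with weights decaying with both index
    distance (sigma_s) and value difference (sigma_r)."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    for i in range(len(z)):
        lo, hi = max(0, i - radius), min(len(z), i + radius + 1)
        nb = z[lo:hi]
        idx = np.arange(lo, hi)
        w = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2)
                   - ((nb - z[i]) ** 2) / (2 * sigma_r ** 2))
        out[i] = (w * nb).sum() / w.sum()
    return out
```

On point clouds the same idea applies per point over a spatial neighborhood; small sigma_r keeps genuine deflection discontinuities intact while averaging out sensor noise.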


2019 ◽  
Vol 93 (3) ◽  
pp. 411-429 ◽  
Author(s):  
Maria Immacolata Marzulli ◽  
Pasi Raumonen ◽  
Roberto Greco ◽  
Manuela Persia ◽  
Patrizia Tartarino

Abstract Methods for the three-dimensional (3D) reconstruction of forest trees have been suggested for data from active and passive sensors. Laser scanner technologies have become popular in the last few years, despite their high costs. With the improvements in photogrammetric algorithms (e.g. structure from motion, SfM), photographs have become a new low-cost source of 3D point clouds. In this study, we use images captured by a smartphone camera to calculate dense point clouds of a forest plot using SfM. Eighteen point clouds were produced by changing the densification parameters (Image scale, Point density, Minimum number of matches) in order to investigate their influence on the quality of the point clouds produced. In order to estimate diameter at breast height (d.b.h.) and stem volumes, we developed an automatic method that extracts the stems from the point cloud and then models them with cylinders. The results show that Image scale is the most influential parameter in terms of identifying and extracting trees from the point clouds. The best performance with cylinder modelling from point clouds compared to field data had an RMSE of 1.9 cm and 0.094 m3, for d.b.h. and volume, respectively. Thus, for forest management and planning purposes, it is possible to use our photogrammetric and modelling methods to measure d.b.h., stem volume and possibly other forest inventory metrics, rapidly and without felling trees. The proposed methodology significantly reduces working time in the field, using ‘non-professional’ instruments and automating estimates of dendrometric parameters.
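Estimating d.b.h. from a horizontal slice of stem points amounts to fitting a circle to the cross-section. A minimal algebraic (Kåsa) least-squares sketch of that step, which also works when only part of the stem circumference is visible; this is an illustration, not the authors' cylinder-fitting code:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to 2D cross-section
    points. Expands (x-cx)^2 + (y-cy)^2 = r^2 into the linear system
    2*cx*x + 2*cy*y + c = x^2 + y^2 with c = r^2 - cx^2 - cy^2."""
    xy = np.asarray(xy, dtype=float)
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)
```

Repeating the fit over slices along the stem axis yields the stacked cylinders used for volume estimation; d.b.h. is simply twice the fitted radius at 1.3 m height.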

