APPLICABILITY ANALYSIS OF CLOTH SIMULATION FILTERING ALGORITHM FOR MOBILE LIDAR POINT CLOUD

Author(s):  
S. Cai ◽  
W. Zhang ◽  
J. Qi ◽  
P. Wan ◽  
J. Shao ◽  
...  

Classifying the original point cloud into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique for three-dimensional data collection, mobile laser scanning (MLS) has gradually been applied in various fields, such as the reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have different characteristics, such as point density, point distribution and scene complexity. Some filtering algorithms developed for airborne LiDAR data have been applied directly to mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the applicability of the CSF algorithm to mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of the algorithm, which yields total errors of 0.44%, 0.77% and 1.20%, respectively. Additionally, a large-area dataset is tested to further validate the effectiveness of the algorithm, and the results show that it can quickly and accurately separate the point cloud into ground and non-ground points. In summary, the algorithm is efficient and reliable for mobile LiDAR point clouds.
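The core idea of cloth simulation filtering can be illustrated with a toy sketch: invert the cloud, drop a coarse "cloth" grid onto it under gravity with internal rigidness constraints, and label the points that end up close to the settled cloth as ground. The grid resolution, step sizes and the simple relaxation loop below are illustrative choices, not the authors' implementation.

```python
import numpy as np

def csf_ground_filter(points, cell=1.0, iters=50, step=0.1, threshold=0.5):
    """Toy sketch of cloth simulation filtering (CSF)."""
    inv = points.copy()
    inv[:, 2] = -inv[:, 2]                          # invert the terrain

    # Rasterise the inverted cloud: the highest inverted z per cell acts
    # as the collision surface for the falling cloth.
    mins = inv[:, :2].min(axis=0)
    ij = np.floor((inv[:, :2] - mins) / cell).astype(int)
    nx, ny = ij.max(axis=0) + 1
    surf = np.full((nx, ny), -np.inf)
    for (i, j), z in zip(ij, inv[:, 2]):
        surf[i, j] = max(surf[i, j], z)
    surf[np.isinf(surf)] = inv[:, 2].min()          # empty cells: fall through

    # Drop the cloth: alternate a gravity step, an internal "rigidness"
    # relaxation toward the 4-neighbour mean, and collision clamping.
    cloth = np.full((nx, ny), inv[:, 2].max() + 1.0)
    for _ in range(iters):
        cloth -= step                               # gravity
        pad = np.pad(cloth, 1, mode='edge')
        nbr = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
               pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        cloth = 0.5 * cloth + 0.5 * nbr             # internal constraints
        cloth = np.maximum(cloth, surf)             # collision

    # Points near the settled cloth are labelled ground.
    return np.abs(inv[:, 2] - cloth[ij[:, 0], ij[:, 1]]) < threshold
```

On a flat synthetic patch with a few elevated "roof" points, the cloth settles on the (inverted) ground and the roof points end up far from it, so they are rejected as non-ground.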

2021 ◽  
Vol 11 (6) ◽  
pp. 2713
Author(s):  
Hyungjoon Seo

The bearing capacity of a CFA (continuous flight auger) pile cannot reach the design capacity if proper construction is not performed, owing to soil collapse at the bottom of the pile. In this paper, three pile samples were prepared to simulate the bottom of a CFA pile: a grouting sample, a mixture of grouting and gravel, and a mixture of grouting and sand. The failure surface of each sample, obtained by uniaxial compression tests, was represented as a three-dimensional point cloud acquired by three-dimensional laser scanning, so that high-resolution point clouds could be obtained to model the failure surfaces of the three samples. The three-dimensional point cloud of each failure surface was analyzed by a plane-to-points histogram (P2PH) method and a kernel-based roughness detection method proposed in this paper. These methods can analyze the global as well as the local roughness of the three pile samples in three dimensions. The roughness features of the grouting sample, the mixed sample of grouting and sand, and the mixed sample of grouting and gravel can be distinguished by the sections of the histogram in which the points of each sample are predominantly distributed.
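One plausible reading of a plane-to-points histogram is: fit a reference plane to the failure surface, then histogram the signed point-to-plane distances, so that rougher surfaces spread their points over more distant bins. The sketch below follows that reading with a PCA plane fit; it is an illustration of the concept, not the authors' exact formulation.

```python
import numpy as np

def plane_to_points_histogram(points, bins=20):
    """Fit a plane by PCA and histogram signed point-to-plane distances."""
    centered = points - points.mean(axis=0)
    # The plane normal is the covariance eigenvector with the smallest
    # eigenvalue (np.linalg.eigh returns eigenvalues in ascending order).
    _, eigvecs = np.linalg.eigh(np.cov(centered.T))
    normal = eigvecs[:, 0]
    dist = centered @ normal              # signed point-to-plane distances
    hist, edges = np.histogram(dist, bins=bins)
    return dist, hist, edges
```

A perfectly smooth surface concentrates all distances near zero, while a rough one occupies bins far from the plane, which is how the three samples can be told apart.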


Author(s):  
Y. Hori ◽  
T. Ogawa

The implementation of laser scanning in the field of archaeology provides us with an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what are referred to as "point clouds". In addition, the visualization of the point cloud data, which can be used in the final report by archaeologists and architects, is usually produced as a JPG or TIFF file. Beyond visualization, the re-examination of older data and new surveys of Roman construction using remote-sensing technology for precise and detailed measurements yield new information that may lead to revised drawings of ancient buildings, drawings previously adduced as evidence without any consideration of their degree of accuracy, and can ultimately open new lines of research on ancient buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy and flexibility of data manipulation. We therefore "skipped" much of the post-processing and focused on the images created from the metadata, simply aligned using a tool that extends an automatic feature-matching algorithm and a popular renderer that can provide graphic results.


2020 ◽  
Vol 12 (6) ◽  
pp. 942 ◽  
Author(s):  
Maria Rosaria De Blasiis ◽  
Alessandro Di Benedetto ◽  
Margherita Fiani

The surface conditions of road pavements, including the occurrence and severity of the distresses present on the surface, are an important indicator of pavement performance. Periodic monitoring and condition assessment are essential for the safety of the vehicles moving on the road and the wellbeing of people. The traditional characterization of the different types of distress often involves complex activities that are sometimes inefficient and risky, as they interfere with road traffic. Mobile laser systems (MLS) are now widely used to acquire detailed information about the road surface in the form of a three-dimensional point cloud. Despite their increasing use, there are still no standards for the acquisition and processing of the collected data. The aim of our work was to develop a procedure for processing the data acquired by MLS in order to identify the localized degradations that most affect safety. We have studied the data flow and implemented several processing algorithms to identify and quantify a few types of distress, namely potholes and swells/shoves, starting from very dense point clouds. We have implemented the data processing in four steps: (i) editing of the point cloud to extract only the points belonging to the road surface, (ii) determination of the road roughness as the deviation in height of every single point of the cloud with respect to the modeled road surface, (iii) segmentation of the distress, and (iv) computation of the main geometric parameters of the distress in order to classify it by severity level. The results obtained by the proposed methodology are promising. The procedures implemented have made it possible to correctly segment and identify the types of distress to be analyzed, in accordance with the on-site inspections.
The tests carried out have shown that the choice of the values of some parameters to give as input to the software is not trivial: for some of them the choice is based on considerations related to the nature of the data, while for others it derives from the distress to be segmented. Given the different possible configurations of the various distresses, it is better to choose these parameters according to the boundary conditions rather than to impose default values. The test involved a 100-m-long urban road segment, the surface of which was measured with an MLS installed on a vehicle that traveled the road at 10 km/h.
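Steps (ii)-(iv) of such a pipeline can be sketched under strong simplifying assumptions: the modeled road surface is replaced by a best-fit plane, segmentation is a single threshold on the height deviation, and severity is summarised by maximum depth only. The threshold value is a placeholder, not a value from the paper.

```python
import numpy as np

def segment_potholes(points, deviation_thr=0.02):
    """Plane-fit roughness, threshold segmentation, depth as severity."""
    # (ii) roughness: signed height deviation from the fitted surface
    centered = points - points.mean(axis=0)
    normal = np.linalg.eigh(np.cov(centered.T))[1][:, 0]
    if normal[2] < 0:
        normal = -normal                    # orient the normal upwards
    dev = centered @ normal
    # (iii) segmentation: points well below the surface are pothole points
    pothole = dev < -deviation_thr
    # (iv) a geometric severity parameter: maximum depth of the distress
    depth = float(-dev[pothole].min()) if pothole.any() else 0.0
    return pothole, depth
```

On a synthetic flat road patch with a small central dip, only the dip points are segmented and the reported depth matches the imposed one.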


Author(s):  
A. Nurunnabi ◽  
F. N. Teferle ◽  
J. Li ◽  
R. C. Lindenbergh ◽  
A. Hunegnaw

Abstract. Ground surface extraction is one of the classic tasks in airborne laser scanning (ALS) point cloud processing, used for three-dimensional (3D) city modelling, infrastructure health monitoring, and disaster management. Many methods have been developed over the last three decades. Recently, deep learning (DL) has become the dominant technique for 3D point cloud classification. DL methods used for classification can be categorized into end-to-end and non-end-to-end approaches. One of the main challenges of supervised DL approaches is obtaining a sufficient amount of training data; the main advantage of a supervised non-end-to-end approach is that it requires less training data. This paper introduces a novel local-feature-based non-end-to-end DL algorithm that generates a binary classifier for ground point filtering. It studies feature relevance and investigates three models built from different combinations of features. The method is free from the limitations imposed by the irregular structure and varying density of point cloud data, which are the biggest challenges for using convolutional neural networks, and it does not require transforming the data into regular 3D voxel grids or any rasterization. The performance of the new method has been demonstrated on two ALS datasets covering urban environments. The method successfully labels ground and non-ground points in the presence of steep slopes and height discontinuities in the terrain. Experiments in this paper show that the algorithm achieves around 97% in both F1-score and model accuracy for ground point labelling.


Author(s):  
Gülhan Benli

Since the 2000s, terrestrial laser scanning, as one of the methods used to document historical edifices in protected areas, has taken on greater importance because it mitigates the difficulties associated with working on large areas and saves time, while also making it possible to better understand all the particularities of the area. Through this technology, comprehensive point data (point clouds) describing the surface of an object can be generated in a highly accurate three-dimensional manner. Furthermore, with the proper software this three-dimensional point cloud data can be transformed into three-dimensional renderings/mappings/models and quantitative orthophotographs. In this chapter, the study presents the results of the terrestrial laser scanning and surveying used to obtain three-dimensional point clouds through three-dimensional survey measurements and scans of street silhouettes in Fatih, on the Historic Peninsula in Istanbul, which were then transposed into survey images and drawings. The study also cites examples of facade mapping using terrestrial laser scanning data in the Istanbul Historic Peninsula Project.


2012 ◽  
Vol 226-228 ◽  
pp. 1892-1898
Author(s):  
Jian Qing Shi ◽  
Ting Chen Jiang ◽  
Ming Lian Jiao

Airborne LiDAR is a new kind of remote-sensing surveying technology that has developed rapidly in recent years. Raw laser scanning point cloud data include terrain points, building points, vegetation points, outlier points, etc. In order to generate digital elevation models (DEM) and three-dimensional city models, these point cloud data must be filtered. Several representative filtering algorithms for LiDAR point cloud data, including mathematical morphology-based, slope-based, TIN-based, moving surface-based and scanning line-based filtering algorithms, are introduced, discussed and compared in this paper. On this basis, the research progress on filtering algorithms for airborne LiDAR point cloud data at home and abroad is summarized. Finally, the paper gives an outlook that provides a reference for subsequent related studies.
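The mathematical-morphology family of filters mentioned above can be sketched in a few lines: rasterise minimum elevations, apply a grey-scale opening (erosion followed by dilation) with a fixed window, and keep points close to the opened surface as ground. Real variants grow the window progressively to handle large buildings; the single-scale window here is a simplification for illustration.

```python
import numpy as np

def morphological_ground_filter(points, cell=1.0, window=1, thr=0.5):
    """Single-scale morphological opening filter on a minimum-z raster."""
    mins = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - mins) / cell).astype(int)
    nx, ny = ij.max(axis=0) + 1
    zmin = np.full((nx, ny), np.inf)
    for (i, j), z in zip(ij, points[:, 2]):
        zmin[i, j] = min(zmin[i, j], z)
    zmin[np.isinf(zmin)] = points[:, 2].max()   # fill empty cells

    def moving(img, func):                      # windowed min/max filter
        out = np.empty_like(img)
        for i in range(nx):
            for j in range(ny):
                out[i, j] = func(img[max(i - window, 0):i + window + 1,
                                     max(j - window, 0):j + window + 1])
        return out

    opened = moving(moving(zmin, np.min), np.max)   # erosion, then dilation
    return points[:, 2] - opened[ij[:, 0], ij[:, 1]] <= thr
```

The erosion removes small high objects (their footprint is smaller than the window), the dilation restores the terrain level, and points far above the opened surface are rejected as non-ground.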


2021 ◽  
Vol 13 (22) ◽  
pp. 4497
Author(s):  
Jianjun Zou ◽  
Zhenxin Zhang ◽  
Dong Chen ◽  
Qinghua Li ◽  
Lan Sun ◽  
...  

Point cloud registration is the foundation and a key step for many vital applications, such as digital cities, autonomous driving, passive positioning, and navigation. The diversity of spatial objects and the structural complexity of object surfaces are the main challenges of the registration problem. In this paper, we propose a graph attention capsule model (named GACM) for the efficient registration of terrestrial laser scanning (TLS) point clouds in urban scenes, which fuses graph attention convolution and a three-dimensional (3D) capsule network to extract local point cloud features and obtain 3D feature descriptors. These descriptors can take into account the differences in spatial structure and point density among objects and make the spatial features of ground objects more prominent. During training, we used both matched points and non-matched points to train the model. In the registration test, the points in the neighborhood of each keypoint were fed to the trained network to obtain feature descriptors, and the rotation and translation matrices were calculated after constructing a K-dimensional (KD) tree and applying the random sample consensus (RANSAC) algorithm. Experiments show that the proposed method achieves more efficient registration results and higher robustness than other state-of-the-art registration methods in the pairwise registration of point clouds.
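The final alignment step after descriptor matching can be sketched in closed form: given corresponding keypoints (assumed here to have already been filtered for outliers, e.g. by RANSAC), the rotation and translation are recovered with the SVD-based Kabsch method. This is a standard choice for that step, not necessarily the paper's exact implementation.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Recover R, t from matched points so that dst ~= src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

Applying a known rotation and translation to a random cloud and feeding the pairs back in recovers the transform exactly, which is why robust correspondence filtering, rather than the transform estimate itself, is the hard part of registration.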


Author(s):  
B. Alsadik ◽  
M. Gerke ◽  
G. Vosselman

The ongoing development of advanced techniques in photogrammetry, computer vision (CV), robotics and laser scanning to efficiently acquire three-dimensional geometric data offers new possibilities for many applications. The output of these techniques is often a sparse or dense point cloud describing the 3D shape of an object in digital form. Viewing these point clouds in a computerized digital environment poses the difficulty of displaying the points of the object visible from a given viewpoint rather than the hidden points. This visibility problem is a major computer graphics topic and has previously been solved using different mathematical techniques. However, to our knowledge, there is no study presenting the different visibility analysis methods for point clouds from a photogrammetric viewpoint. The visibility approaches, which are surface based or voxel based, and the hidden point removal (HPR) operator are presented. Three different problems in close range photogrammetry are considered: camera network design, guidance with synthetic images, and gap detection in a point cloud. The latter also introduces a new concept of gap classification. Each problem utilizes a different visibility technique to show the valuable effect of visibility analysis on the final solution.
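Of the techniques mentioned, the HPR operator (due to Katz et al.) is compact enough to sketch: spherically flip the cloud about the viewpoint, then keep the points that land on the convex hull of the flipped set plus the viewpoint. The radius factor below is an illustrative choice, and `scipy` is assumed for the hull computation.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hidden_point_removal(points, viewpoint, radius_factor=100.0):
    """HPR: indices of points visible from `viewpoint`."""
    p = points - viewpoint
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    R = norms.max() * radius_factor
    flipped = p + 2.0 * (R - norms) * p / norms     # spherical flipping
    # Visible points are hull vertices of the flipped set + viewpoint.
    hull = ConvexHull(np.vstack([flipped, np.zeros((1, 3))]))
    idx = hull.vertices
    return np.unique(idx[idx < len(points)])        # drop the viewpoint
```

For a sampled sphere viewed from outside, the point facing the viewpoint is kept while its antipode, occluded by the sphere itself, is discarded.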


2021 ◽  
Vol 7 (1) ◽  
pp. 1-24
Author(s):  
Piotr Tompalski ◽  
Nicholas C. Coops ◽  
Joanne C. White ◽  
Tristan R.H. Goodbody ◽  
Chris R. Hennigar ◽  
...  

Abstract Purpose of Review The increasing availability of three-dimensional point clouds, including both airborne laser scanning and digital aerial photogrammetry, allows for the derivation of forest inventory information with a high level of attribute accuracy and spatial detail. When available at two points in time, point cloud datasets offer a rich source of information for detailed analysis of change in forest structure. Recent Findings Existing research across a broad range of forest types has demonstrated that those analyses can be performed using different approaches, levels of detail, or source data. By reviewing the relevant findings, we highlight the potential that bi- and multi-temporal point clouds have for enhanced analysis of forest growth. We divide the existing approaches into two broad categories: approaches that focus on estimating change based on predictions of two or more forest inventory attributes over time, and approaches for forecasting forest inventory attributes. We describe how point clouds acquired at two or more points in time can be used for both categories of analysis by comparing input airborne datasets, before discussing the methods that were used and the resulting accuracies. Summary To conclude, we outline outstanding research gaps that require further investigation, including the need for an improved understanding of which three-dimensional datasets can be applied using certain methods. We also discuss the likely implications of these datasets for the expected outcomes, improvements in tree-to-tree matching and analysis, integration with growth simulators, and ultimately, the development of growth models driven entirely by point cloud data.
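The simplest instance of the first category, estimating change from an attribute derived at two points in time, is differencing two canopy height models (CHMs) rasterised from the bi-temporal clouds. The cell size, the max-z CHM definition and the assumption that heights are already normalised to height above terrain are all illustrative simplifications.

```python
import numpy as np

def canopy_height_model(points, cell, shape, origin):
    """Rasterise a (height-normalised) point cloud: max height per cell."""
    chm = np.zeros(shape)
    ij = np.floor((points[:, :2] - origin) / cell).astype(int)
    for (i, j), z in zip(ij, points[:, 2]):
        chm[i, j] = max(chm[i, j], z)
    return chm

def growth_map(points_t1, points_t2, cell=1.0, shape=(10, 10),
               origin=(0.0, 0.0)):
    """Per-cell canopy height change between two acquisitions."""
    origin = np.asarray(origin)
    return (canopy_height_model(points_t2, cell, shape, origin) -
            canopy_height_model(points_t1, cell, shape, origin))
```

In practice the review's harder questions, e.g. tree-to-tree matching and attribute forecasting, sit on top of this basic differencing step.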


Author(s):  
M. Zaboli ◽  
H. Rastiveis ◽  
A. Shams ◽  
B. Hosseiny ◽  
W. A. Sarasua

Abstract. Automated analysis of three-dimensional (3D) point clouds has become a boon in photogrammetry, remote sensing, computer vision, and robotics. The aim of this paper is to compare classification algorithms on an urban-area point cloud acquired by a mobile terrestrial laser scanning (MTLS) system. The algorithms were tested on local geometrical and radiometric descriptors. In this study, local descriptors such as linearity, planarity and intensity are first extracted for each point from its neighboring points. These features are then fed to a classification algorithm to automatically label each point. Five powerful classification algorithms, k-nearest neighbors (k-NN), Gaussian naive Bayes (GNB), support vector machine (SVM), multilayer perceptron (MLP) neural network, and random forest (RF), are tested. Eight semantic classes are considered for each method under equal conditions. The best overall accuracy, 90%, was achieved with the RF algorithm. The results prove the reliability of the applied descriptors and the RF classifier for MTLS point cloud classification.
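Of the five classifiers compared, k-NN is simple enough to sketch end to end on per-point descriptor vectors (e.g. linearity, planarity, intensity). Brute-force distances stand in for an indexed search, and the feature values are assumed to be already scaled; this is a minimal illustration, not the paper's experimental setup.

```python
import numpy as np

def knn_classify(train_feats, train_labels, query_feats, k=3):
    """Label each query descriptor by majority vote of its k neighbours."""
    out = np.empty(len(query_feats), dtype=train_labels.dtype)
    for n, q in enumerate(query_feats):
        nearest = np.argsort(np.linalg.norm(train_feats - q, axis=1))[:k]
        vals, counts = np.unique(train_labels[nearest], return_counts=True)
        out[n] = vals[np.argmax(counts)]         # majority vote
    return out
```

With two well-separated descriptor clusters, queries near each cluster receive that cluster's semantic label.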

