A Novel Recursive Non-Parametric DBSCAN Algorithm for 3D Data Analysis with an Application in Rockfall Detection

2021 ◽  
Vol 16 (4) ◽  
pp. 579-587
Author(s):  
Pitisit Dillon ◽  
Pakinee Aimmanee ◽  
Akihiko Wakai ◽  
Go Sato ◽  
Hoang Viet Hung ◽  
...  

The density-based spatial clustering of applications with noise (DBSCAN) algorithm is a well-known algorithm for spatial clustering of point clouds. It can be applied to many applications, such as crack detection, rockfall detection, and glacier movement detection. Traditional DBSCAN requires two predefined parameters, and suitable values of these parameters depend upon the distribution of the input point cloud, so estimating them is challenging. This paper proposes a new version of DBSCAN that can automatically customize these parameters. The proposed method consists of two processes: initial parameter estimation based on grid analysis, and DBSCAN based on a divide-and-conquer approach (DC-DBSCAN), which repeatedly performs DBSCAN on each cluster separately and recursively. To verify the proposed method, we applied it to a 3D point cloud dataset used to analyze rockfall events at the Puigcercós cliff, Spain; the total number of data points used in this study was 15,567. The experimental results show that the proposed method outperforms traditional DBSCAN in terms of purity and NMI scores: the purity scores of the proposed method and traditional DBSCAN were 96.22% and 91.09%, respectively, and the NMI scores were 0.78 and 0.49, respectively. The proposed method can also detect events that traditional DBSCAN cannot.
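
As a rough illustration of the divide-and-conquer idea, the sketch below re-runs scikit-learn's DBSCAN on each detected cluster with a locally re-estimated eps. The eps heuristic (median distance to the k-th nearest neighbour), the min_samples value and the stopping rule are assumptions for illustration, not the authors' grid-based estimation.

import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def estimate_eps(points, k=4):
    # Assumed heuristic: median distance to the k-th nearest neighbour.
    d, _ = cKDTree(points).query(points, k=k + 1)
    return float(np.median(d[:, -1]))

def dc_dbscan(points, min_samples=10, depth=0, max_depth=3):
    # Run DBSCAN with a locally estimated eps, then recurse into each cluster.
    labels = DBSCAN(eps=estimate_eps(points), min_samples=min_samples).fit_predict(points)
    clusters = []
    for lab in set(labels) - {-1}:
        sub = points[labels == lab]
        if depth < max_depth and len(sub) > 2 * min_samples:
            clusters.extend(dc_dbscan(sub, min_samples, depth + 1, max_depth))
        else:
            clusters.append(sub)
    return clusters

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = np.vstack([rng.normal(c, 0.2, (500, 3)) for c in ((0, 0, 0), (3, 3, 3))])
    print(len(dc_dbscan(cloud)), "clusters found")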

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3729 ◽  
Author(s):  
Shuai Wang ◽  
Hua-Yan Sun ◽  
Hui-Chao Guo ◽  
Lin Du ◽  
Tian-Jian Liu

Global registration is an important step in the three-dimensional reconstruction of multi-view laser point clouds for moving objects, but the severe noise, density variation, and overlap ratio between multi-view laser point clouds present significant challenges to global registration. In this paper, a multi-view laser point cloud global registration method based on low-rank sparse decomposition is proposed. First, the spatial distribution features of the point clouds are extracted by spatial rasterization to realize loop-closure detection, and the corresponding weight matrix is established according to the similarities of the spatial distribution features. The accuracy of each adjacent registration transformation is evaluated, which enhances the robustness of the low-rank sparse matrix decomposition. Then, an objective function satisfying the global optimization condition is constructed, which avoids the compression of the solution space caused by the column-orthogonality assumption on the matrix. The objective function is solved by the augmented Lagrange method, and the iterative termination condition is designed according to the prior conditions of single-object global registration. Simulation analysis shows that the proposed method is robust over a wide range of parameters and that the accuracy of loop-closure detection is over 90%. When the pairwise registration error is below 0.1 rad, the proposed method performs better than the three compared methods, and the global registration accuracy is better than 0.05 rad. Finally, global registration results on real point cloud experiments further prove the validity and stability of the proposed method.
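
The low-rank-plus-sparse core of such a formulation can be sketched with a standard robust PCA solver based on an inexact augmented Lagrange method. This shows only the decomposition machinery under assumed default parameters; the paper's weight matrix, loop-closure features and the recovery of global registration transforms are not reproduced here.

import numpy as np

def shrink(X, tau):
    # Soft-thresholding (proximal operator of the l1 norm).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular value thresholding (proximal operator of the nuclear norm).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca_ialm(D, lam=None, mu=None, rho=1.5, tol=1e-7, max_iter=500):
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 1.25 / np.linalg.norm(D, 2)
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(max_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)      # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)   # sparse update
        residual = D - L - S
        Y += mu * residual                     # dual ascent
        mu *= rho
        if np.linalg.norm(residual) <= tol * np.linalg.norm(D):
            break
    return L, S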


2020 ◽  
pp. 002029402091986
Author(s):  
Xiaocui Yuan ◽  
Huawei Chen ◽  
Baoling Liu

Clustering analysis is one of the most important techniques in point cloud processing, such as registration, segmentation, and outlier detection. However, most existing clustering algorithms exhibit low computational efficiency and a high demand for computational resources, especially for large datasets. Moreover, clusters and outliers are sometimes difficult to separate, especially in point clouds contaminated by outliers: most cluster-based algorithms can identify cluster outliers well but not sparse outliers. We develop a novel clustering method called spatial neighborhood connected region labeling. The method defines a spatial connectivity criterion, finds connections between points within the k-nearest neighborhood according to this criterion, and assigns connected points to the same cluster. Our method can accurately and quickly classify datasets using only one parameter, k. Compared with K-means, hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN), our method provides better accuracy with less computational time for data clustering. For outlier detection in point clouds, our method can identify not only cluster outliers but also sparse outliers. More accurate detection results are achieved compared with state-of-the-art outlier detection methods such as local outlier factor and DBSCAN.
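
A minimal sketch of the labeling idea, assuming the spatial connectivity criterion is simply adjacency in the k-nearest-neighbour graph: connect each point to its k nearest neighbours and label the connected components. The paper's actual connectivity criterion is more specific than this.

import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def knn_region_labeling(points, k=8):
    n = len(points)
    _, idx = cKDTree(points).query(points, k=k + 1)   # idx[:, 0] is the point itself
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    adj = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    # directed=False treats a k-NN edge in either direction as a connection.
    n_labels, labels = connected_components(adj, directed=False)
    return n_labels, labels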


Author(s):  
E. Lachat ◽  
T. Landes ◽  
P. Grussenmeyer

The combination of data coming from multiple sensors is increasingly applied in remote sensing (multi-sensor imagery) but also in cultural heritage and robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented, and their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the elements acquired with the Kinect sensor enable a higher level of detail in the final model, an adapted acquisition protocol may also provide other benefits, such as time savings. The paper analyzes whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.
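
As one possible way to merge a Kinect-scanned element into the TLS point cloud, the sketch below runs a bare point-to-point ICP with a closed-form rigid transform per iteration. A rough initial alignment is assumed, and this stands in for whatever registration procedure was actually used in the paper.

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    # Least-squares rotation/translation between matched point sets (Kabsch).
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=30):
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        _, nn = tree.query(src)          # closest TLS point for each Kinect point
        R, t = best_rigid_transform(src, target[nn])
        src = src @ R.T + t
    return src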


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2309
Author(s):  
Yifei Tian ◽  
Wei Song ◽  
Long Chen ◽  
Yunsick Sung ◽  
Jeonghoon Kwak ◽  
...  

Fast and accurate obstacle detection is essential for mobile vehicles to perceive their environment accurately. Because the point clouds sensed by light detection and ranging (LiDAR) sensors are sparse and unstructured, traditional obstacle clustering on raw point clouds is inaccurate and time-consuming. Thus, to achieve fast obstacle clustering in unknown terrain, this paper proposes an elevation-reference connected component labeling (ER-CCL) algorithm using graphics processing unit (GPU) programming. LiDAR points are first projected onto a rasterized x–z plane so that the sparse points are mapped into a series of regularly arranged small cells. Based on the height distribution of the LiDAR points, the ground cells are filtered out and a flag map is generated. Next, the ER-CCL algorithm is applied to the label map generated from the flag map to mark individual clusters with unique labels. Finally, the obstacle labeling results are inverse-transformed from the x–z plane to the 3D points to provide the clustering results. For real-time 3D point cloud clustering, ER-CCL is accelerated by running it in parallel with the aid of GPU programming technology.
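
A CPU-only sketch of the rasterise / ground-filter / label pipeline, using SciPy's connected-component labeling in place of the GPU implementation. The fixed height threshold and cell size are simplifying assumptions rather than the elevation-reference rule of ER-CCL.

import numpy as np
from scipy import ndimage

def cluster_obstacles(points, cell=0.2, ground_tol=0.3):
    # points: (N, 3) array ordered x, y (height), z, matching the x-z raster above.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = ((x - x.min()) / cell).astype(int)
    iz = ((z - z.min()) / cell).astype(int)
    flag = np.zeros((ix.max() + 1, iz.max() + 1), dtype=bool)
    obstacle = y > y.min() + ground_tol          # crude ground removal
    flag[ix[obstacle], iz[obstacle]] = True
    # Label 8-connected obstacle cells, then map cell labels back to points.
    labels, n = ndimage.label(flag, structure=np.ones((3, 3)))
    point_labels = np.where(obstacle, labels[ix, iz], 0)
    return point_labels, n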


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4398 ◽  
Author(s):  
Soohee Han

The present study introduces an efficient algorithm to construct a file-based octree for a large 3D point cloud. However, a file-based octree is very slow compared with a memory-based approach, and it becomes even worse for 3D point clouds scanned along elongated objects such as tunnels and corridors. These defects are addressed by implementing a semi-isometric octree group. The approach implements several semi-isometric octrees in a group that tightly covers the 3D point cloud, while each octree, along with its leaf nodes, still maintains an isometric shape. The proposed approach was tested using three 3D point clouds, captured in a long tunnel and a short tunnel by a terrestrial laser scanner and in an urban area by an airborne laser scanner. The experimental results show that the performance of the semi-isometric approach is no worse than a memory-based approach and considerably better than a file-based one. It is thus shown that the proposed semi-isometric approach achieves a good balance between query performance and memory efficiency. In conclusion, given enough main memory and a moderately sized 3D point cloud, a memory-based approach is preferable. When the 3D point cloud is larger than the main memory, a file-based approach is the inevitable choice, and the semi-isometric approach is then the better option.
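
A minimal in-memory sketch of the grouping idea: cover an elongated point cloud with a row of cube-shaped (isometric) octree roots along its long axis instead of a single stretched root. File-based paging and leaf-node details are omitted, and the cube size is assumed to be at least the cloud's extent along the two shorter axes.

import numpy as np

class Octree:
    def __init__(self, origin, size, depth=0, max_depth=6, capacity=64):
        self.origin, self.size = np.asarray(origin, float), float(size)
        self.depth, self.max_depth, self.capacity = depth, max_depth, capacity
        self.points, self.children = [], None

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.capacity and self.depth < self.max_depth:
                self._split()
            return
        self._child_for(p).insert(p)

    def _split(self):
        # Each child is again a cube of half the edge length (isometric shape).
        half = self.size / 2.0
        self.children = [Octree(self.origin + half * np.array([i, j, k]),
                                half, self.depth + 1, self.max_depth, self.capacity)
                         for i in (0, 1) for j in (0, 1) for k in (0, 1)]
        pts, self.points = self.points, []
        for p in pts:
            self._child_for(p).insert(p)

    def _child_for(self, p):
        idx = ((np.asarray(p) - self.origin) >= self.size / 2.0).astype(int)
        return self.children[idx[0] * 4 + idx[1] * 2 + idx[2]]

def build_group(points, size):
    # One cubic root per 'size'-long slab along the long axis (assumed x here).
    lo = points.min(0)
    roots = {}
    for p in points:
        key = int((p[0] - lo[0]) // size)
        root = roots.setdefault(key, Octree(lo + [key * size, 0, 0], size))
        root.insert(p)
    return list(roots.values())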


Author(s):  
Lee J. Wells ◽  
Mohammed S. Shafae ◽  
Jaime A. Camelio

Ever-advancing sensor and measurement technologies continually provide new opportunities for knowledge discovery and quality control (QC) strategies for complex manufacturing systems. One such state-of-the-art measurement technology currently being implemented in industry is the 3D laser scanner, which can rapidly provide millions of data points to represent an entire manufactured part's surface. This gives 3D laser scanners a significant advantage over competing technologies that typically provide tens or hundreds of data points. Consequently, data collected from 3D laser scanners have great potential to be used for inspecting parts for surface and feature abnormalities. The current use of 3D point clouds for part inspection falls into two main categories: 1) extracting feature parameters, which does not complement the nature of 3D point clouds because it wastes valuable data, and 2) an ad hoc manual process in which a visual representation of the point cloud (usually as deviations from nominal) is analyzed, which tends to produce slow, inefficient, and inconsistent inspection results. Therefore, our paper proposes an approach to automate the latter approach to 3D point cloud inspection. The proposed approach uses a newly developed adaptive generalized likelihood ratio (AGLR) technique to identify the most likely size, shape, and magnitude of a potential fault within the point cloud, which transforms the ad hoc visual inspection approach into a statistically viable automated inspection solution. To aid practitioners in designing and implementing an AGLR-based inspection process, our paper also reports the performance of the AGLR with respect to the probability of detecting faults of specific sizes and magnitudes, in addition to the probability of false alarms.
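
The statistical core can be sketched as a generalized likelihood ratio scan over a grid of deviation-from-nominal values, testing for a mean shift under known Gaussian noise. The adaptive window selection and control limits of the AGLR, and the handling of free-form surfaces, are not reproduced; the window sizes and sigma below are assumed.

import numpy as np

def glr_scan(dev, sigma, window_sizes=(3, 5, 9)):
    # Return the largest GLR statistic and the window (row, col, size) where it occurs.
    best = (0.0, None)
    rows, cols = dev.shape
    for w in window_sizes:
        for r in range(rows - w + 1):
            for c in range(cols - w + 1):
                s = dev[r:r + w, c:c + w].sum()
                # GLR for a mean shift in an n-cell window with known noise sigma.
                stat = s * s / (w * w * sigma ** 2)
                if stat > best[0]:
                    best = (stat, (r, c, w))
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dev = rng.normal(0.0, 0.05, (60, 60))
    dev[20:25, 30:35] += 0.2                      # a small simulated surface fault
    print(glr_scan(dev, sigma=0.05))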


2021 ◽  
Vol 30 ◽  
pp. 126-130
Author(s):  
Jan Voříšek ◽  
Bořek Patzák ◽  
Edita Dvořáková ◽  
Daniel Rypl

Laser scanning is widely used in architecture and construction to document existing buildings by providing accurate data for creating a 3D model. The output is a set of data points in space, a so-called point cloud. While point clouds can be directly rendered and inspected, they do not hold any semantics. Typically, engineers manually derive floor plans, structural models, or the whole BIM model, which is a very time-consuming task for large building projects. In this contribution, we present the design and concept of the PointCloud2BIM library [1]. It provides a set of algorithms for automated or user-assisted detection of fundamental entities from scanned point cloud data sets, such as floors, rooms, walls, and openings, and for identification of the mutual relationships between them. The entity detection relies on a reasonable degree of human input (e.g., the expected wall thickness). The results reside in a platform-agnostic JSON database, allowing future integration into any existing BIM software.
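
As an illustration of the kind of entity detection involved (a generic heuristic, not the PointCloud2BIM implementation), the sketch below proposes candidate floor/ceiling levels as strong peaks in the height histogram of an indoor point cloud; the bin size and peak threshold are assumed parameters.

import numpy as np

def detect_floor_levels(points, bin_size=0.05, min_fraction=0.02):
    z = points[:, 2]
    hist, edges = np.histogram(z, bins=np.arange(z.min(), z.max() + bin_size, bin_size))
    threshold = min_fraction * len(z)
    # Horizontal structures (floor/ceiling slabs) show up as strong, locally maximal peaks.
    peaks = [0.5 * (edges[i] + edges[i + 1])
             for i in range(len(hist))
             if hist[i] > threshold and hist[i] == hist[max(0, i - 2):i + 3].max()]
    return peaks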


Author(s):  
A. Kharroubi ◽  
R. Hajji ◽  
R. Billen ◽  
F. Poux

Abstract. With the increasing number of 3D applications using immersive technologies such as virtual, augmented and mixed reality, it is very interesting to create better ways to integrate unstructured 3D data such as point clouds as a source of data. Indeed, this can lead to an efficient workflow from 3D capture to 3D immersive environment creation, without the need to derive 3D models or run lengthy optimization pipelines. In this paper, the main focus is on the direct classification and integration of massive 3D point clouds in a virtual reality (VR) environment. The emphasis is put on leveraging open-source frameworks for easy replication of the findings. First, we develop a semi-automatic segmentation approach to provide semantic descriptors (mainly classes) to groups of points. We then build an octree data structure leveraged through out-of-core algorithms to load, in real time and continuously, only the points that are in the VR user's field of view. Next, we provide an open-source solution using Unity with a user interface for VR point cloud interaction and visualisation. Finally, we provide a full semantic VR data integration enhanced through developed shaders for future spatio-semantic queries. We tested our approach on several datasets, including a point cloud composed of 2.3 billion points representing the heritage site of the castle of Jehay (Belgium). The results underline the efficiency and performance of the solution for visualizing classified massive point clouds in virtual environments at more than 100 frames per second.
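
A minimal sketch of the "load only what the user sees" idea, assuming the octree node centres are known: cull nodes against a simple view cone and stream the nearest ones first. The actual out-of-core traversal, Unity integration and shaders are beyond this sketch.

import numpy as np

def visible_nodes(node_centers, cam_pos, cam_dir, fov_deg=60.0, budget=32):
    cam_dir = cam_dir / np.linalg.norm(cam_dir)
    to_nodes = node_centers - cam_pos
    dist = np.linalg.norm(to_nodes, axis=1)
    cosang = (to_nodes @ cam_dir) / np.maximum(dist, 1e-9)
    inside = cosang > np.cos(np.radians(fov_deg / 2.0))   # inside the view cone
    order = np.argsort(dist)                              # near-to-far streaming order
    return [i for i in order if inside[i]][:budget]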


Author(s):  
L. Jurjević ◽  
M. Gašparović

Developments in cameras, computers, and algorithms for the 3D reconstruction of objects from images have increased the popularity of photogrammetry. Algorithms for 3D model reconstruction are so advanced that almost anyone can make a 3D model of a photographed object. The main goal of this paper is to examine the possibility of obtaining 3D data for close-range photogrammetry applications based on open-source technologies. All steps of obtaining a 3D point cloud are covered in this paper. Special attention is given to camera calibration, for which a two-step calibration process is used. Both the presented algorithm and the accuracy of the point cloud are tested by calculating the spatial difference between the reference and produced point clouds. During algorithm testing, the robustness and speed of obtaining 3D data were noted, and the usage of this and similar algorithms certainly has a lot of potential for real-time applications. For that reason, this research can find application in architecture, spatial planning, protection of cultural heritage, forensics, mechanical engineering, traffic management, medicine and other fields.
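
A minimal sketch of the accuracy check described above, assuming the produced and reference point clouds are already aligned and share the same scale: nearest-neighbour cloud-to-cloud distances summarised by a few statistics.

import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(produced, reference):
    # Distance from each produced point to its nearest reference point.
    d, _ = cKDTree(reference).query(produced)
    return {"mean": float(d.mean()),
            "rmse": float(np.sqrt((d ** 2).mean())),
            "p95": float(np.percentile(d, 95))}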


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Linh Truong-Hong ◽  
Roderik Lindenbergh ◽  
Thu Anh Nguyen

Purpose
Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimates strongly depend on the quality of each step of the workflow, which has not been fully addressed. This study aims to give insight into the errors of these steps, and the results would serve as guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. The main contributions of the paper are: investigating how point cloud registration error affects the resulting deformation estimation, identifying an appropriate segmentation method for extracting the data points of a deformed surface, investigating a methodology to determine an un-deformed or reference surface for estimating deformation, and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation.

Design/methodology/approach
In practice, the quality of the data point clouds and of the surface extraction strongly affects deformation estimation based on laser scanning point clouds, which can lead to an incorrect decision on the state of the structure when uncertainty is present. To gain more comprehensive insight into those impacts, this study addresses four issues: data errors due to registration of data from multiple scanning stations (Issue 1), methods used to extract point clouds of structure surfaces (Issue 2), selection of the reference surface Sref used to measure deformation (Issue 3), and the presence of outliers and/or mixed pixels (Issue 4). The investigation is demonstrated by estimating the deformation of a bridge abutment, a building and an oil storage tank.

Findings
The study shows that both random sample consensus (RANSAC) and region-growing-based methods [cell-based/voxel-based region growing (CRG/VRG)] can extract the data points of surfaces, but RANSAC is only applicable to a primary primitive surface (e.g. a plane in this study) subjected to a small deformation (case studies 2 and 3) and cannot eliminate mixed pixels. On the other hand, CRG and VRG are suitable methods for deformed, free-form surfaces. In addition, in practice, a reference surface of a structure is mostly not available. The use of a plane fitted to the point cloud of the current surface would cause unrealistic and inaccurate deformation estimates, because outlier data points and data points of damaged areas affect the accuracy of the fitted plane. This study therefore recommends the use of a reference surface determined from a design concept/specification. A smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation.

Research limitations/implications
Owing to logistical difficulties, an independent measurement could not be established to assess the accuracy of the TLS-based deformation estimates in the case studies of this research. However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with accuracy in the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy.

Practical implications
This study gives insight into the errors of these steps, and the results would serve as guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds.

Social implications
The results of this study provide guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. A low-cost method can be applied for deformation analysis of structures.

Originality/value
Although many studies have used laser scanning to measure structural deformation over the last two decades, the methods applied mainly measured the change between two states (or epochs) of the structure surface and focused on quantifying deformation based on TLS point clouds. Those studies proved that a laser scanner could be an alternative instrument for acquiring spatial information for deformation monitoring. However, there are still challenges in establishing an appropriate procedure to collect high-quality point clouds and in developing methods to interpret the point clouds to obtain reliable and accurate deformation estimates when uncertainty, including data quality and reference information, is present. Therefore, this study demonstrates the impact on deformation estimation of data quality in terms of point cloud registration error, the methods selected for extracting point clouds of surfaces, the identification of reference information, and the presence of outliers, noisy data and/or mixed pixels.
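
A minimal sketch related to Issues 2 and 3 above: a generic RANSAC plane fit followed by signed point-to-plane offsets. As the study cautions, using the fitted plane itself as the reference can bias the deformation estimate when outliers or damaged areas are present; the distance tolerance and iteration count here are assumptions.

import numpy as np

def ransac_plane(points, dist_tol=0.01, iterations=500, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-12:
            continue                                 # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < dist_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    n, d = best_model
    deviations = points @ n + d                      # signed offsets from the plane
    return best_model, best_inliers, deviations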

