Single-Stage Adaptive Multi-Scale Point Cloud Noise Filtering Algorithm Based on Feature Information

2022 ◽  
Vol 14 (2) ◽  
pp. 367
Author(s):  
Zhen Zheng ◽  
Bingting Zha ◽  
Yu Zhou ◽  
Jinbo Huang ◽  
Youshi Xuchen ◽  
...  

This paper proposes a single-stage adaptive multi-scale noise filtering algorithm for point clouds based on feature information, addressing the difficulty that current laser point cloud noise filtering algorithms have in quickly completing single-stage adaptive filtering of multi-scale noise. The feature information of each point is obtained using an efficient k-dimensional (k-d) tree data structure and an amended normal vector estimation method, and an adaptive threshold is used to divide the point cloud into large-scale noise, a feature-rich region, and a flat region, reducing the computational time. The large-scale noise is removed directly, while the feature-rich and flat regions are filtered via an improved bilateral filtering algorithm and a weighted average filtering algorithm based on grey relational analysis, respectively. Simulation results show that the proposed algorithm performs better than the state-of-the-art comparison algorithms. It was thus verified that the proposed algorithm can quickly and adaptively (i) filter out large-scale noise, (ii) smooth small-scale noise, and (iii) effectively maintain the geometric features of the point cloud. The developed algorithm provides a useful reference for filtering and pre-processing methods in point-cloud-based 3D measurement, remote sensing, and target recognition.
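As a rough illustration of the region-partition step described above, the sketch below builds a k-d tree, computes a PCA-based surface-variation feature per neighbourhood, and splits the cloud into large-scale noise, feature-rich, and flat subsets. The neighbourhood size and both thresholds are illustrative stand-ins for the paper's adaptive quantities, not the authors' values.

```python
# Minimal sketch of the region-partition step, assuming an (N, 3) NumPy array
# of XYZ points; k, noise_factor and feature_thresh are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def partition_point_cloud(points, k=16, noise_factor=3.0, feature_thresh=0.05):
    tree = cKDTree(points)                      # efficient k-d tree structure
    dists, idx = tree.query(points, k=k + 1)    # k nearest neighbours (+ self)
    mean_dist = dists[:, 1:].mean(axis=1)       # isolation measure per point

    # Surface variation from PCA of each neighbourhood (smallest eigenvalue
    # ratio); high values indicate sharp features, low values flat regions.
    variation = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)
        eigvals = np.sort(np.linalg.eigvalsh(cov))
        variation[i] = eigvals[0] / max(eigvals.sum(), 1e-12)

    large_noise = mean_dist > noise_factor * np.median(mean_dist)
    feature_rich = (~large_noise) & (variation > feature_thresh)
    flat = ~(large_noise | feature_rich)
    return large_noise, feature_rich, flat

points = np.random.rand(1000, 3)
noise_mask, feature_mask, flat_mask = partition_point_cloud(points)
```

The large-scale noise mask would be discarded directly, while the other two masks select the points passed to the bilateral and weighted-average filters, respectively.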

2021 ◽  
Vol 13 (16) ◽  
pp. 3058
Author(s):  
Rui Gao ◽  
Jisun Park ◽  
Xiaohang Hu ◽  
Seungjun Yang ◽  
Kyungeun Cho

Signals, such as point clouds captured by light detection and ranging sensors, are often affected by highly reflective objects, including specular opaque and transparent materials such as glass, mirrors, and polished metal, which produce reflection artifacts and thereby degrade the performance of associated computer vision techniques. Traditional noise filtering methods for point clouds detect noise by considering the distribution of the neighboring points. However, noise generated by reflected areas is quite dense and cannot be removed by considering the point distribution alone. Therefore, this paper proposes a noise removal method that detects dense noise points caused by reflected objects using multi-position sensing data comparison. The proposed method is divided into three steps. First, the point cloud data are converted to range images of depth and reflective intensity. Second, the reflected area is detected using a sliding window on the two converted range images. Finally, noise is filtered by comparing the detected reflected areas against data from neighboring sensor positions. Experimental results demonstrate that, unlike conventional methods, the proposed method can better filter dense and large-scale noise caused by reflective objects. In future work, we will attempt to add the RGB image to improve the accuracy of noise detection.
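The first step, projecting the point cloud into depth and reflective-intensity range images, could look roughly like the sketch below. The image resolution and vertical field-of-view limits are assumed values, not the sensor parameters used in the paper.

```python
# Minimal sketch of a spherical projection to depth/intensity range images,
# assuming an (N, 3) point array and an (N,) intensity array.
import numpy as np

def to_range_images(points, intensity, h=64, w=1024,
                    fov_up=np.radians(15.0), fov_down=np.radians(-25.0)):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                   # azimuth angle
    pitch = np.arcsin(z / np.maximum(depth, 1e-9))           # elevation angle

    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).clip(0, h - 1).astype(int)

    depth_img = np.zeros((h, w))
    inten_img = np.zeros((h, w))
    depth_img[v, u] = depth
    inten_img[v, u] = intensity
    return depth_img, inten_img

pts = np.random.randn(5000, 3) * [10, 10, 2]
depth_img, inten_img = to_range_images(pts, np.random.rand(5000))
```

The sliding-window detection and the multi-position comparison would then operate on these two images.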


2021 ◽  
Vol 11 (5) ◽  
pp. 2268
Author(s):  
Erika Straková ◽  
Dalibor Lukáš ◽  
Zdenko Bobovský ◽  
Tomáš Kot ◽  
Milan Mihola ◽  
...  

While repairing industrial machines or vehicles, recognition of components is a critical and time-consuming task for a human. In this paper, we propose to automate this task. We start with a Principal Component Analysis (PCA), which fits the scanned point cloud with an ellipsoid by computing the eigenvalues and eigenvectors of a 3-by-3 covariance matrix. If there is a dominant eigenvalue, the point cloud is decomposed into two clusters, to which the PCA is applied recursively. If the matching is not unique, we continue to distinguish among several candidates. We decompose the point cloud into planar and cylindrical primitives and assign mutual features, such as distance or angle, to them. Finally, we refine the matching by comparing the matrices of mutual features of the primitives. This is a more computationally demanding but very robust method. We demonstrate the efficiency and robustness of the proposed methodology on a collection of 29 real scans and a database of 389 STL (Standard Triangle Language) models. As many as 27 scans are uniquely matched to their counterparts from the database, while in the remaining two cases, there is only one additional candidate besides the correct model. The overall computational time is about 10 min in MATLAB.
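The recursive PCA decomposition could be sketched as follows (in Python rather than the authors' MATLAB); the dominance ratio and minimum cluster size are illustrative parameters, not the values used in the paper.

```python
# Sketch of recursive PCA splitting, assuming an (N, 3) NumPy array of points.
import numpy as np

def pca_clusters(points, dominance=2.0, min_points=50):
    """Recursively split a point cloud along its dominant principal axis."""
    if len(points) < min_points:
        return [points]
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)                      # 3-by-3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    if eigvals[2] < dominance * eigvals[1]:
        return [points]                           # no dominant direction: keep ellipsoid fit
    # Split into two clusters by the sign of the projection on the dominant axis.
    proj = centered @ eigvecs[:, 2]
    left, right = points[proj < 0], points[proj >= 0]
    if len(left) == 0 or len(right) == 0:
        return [points]
    return (pca_clusters(left, dominance, min_points)
            + pca_clusters(right, dominance, min_points))

clusters = pca_clusters(np.random.rand(5000, 3) * [10, 1, 1])
```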


2021 ◽  
Vol 13 (13) ◽  
pp. 2494
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shapes using a smaller number of control points than the conventional non-uniform rational B-spline (NURBS) or B-spline representations in computer-aided design, computer graphics and reverse engineering. They are flexible in representing complex surface shapes and economical in terms of parameters, as they enable local refinement. This property is a great advantage when dense, scattered and noisy point clouds, such as those from a terrestrial laser scanner (TLS), are approximated using least squares fitting. Unfortunately, when assessing the goodness of fit of the surface approximation with a real dataset, only a noisy point cloud is available to approximate: (i) a low root mean squared error (RMSE) may indicate overfitting, i.e., a fitting of the noise, and should therefore be avoided, and (ii) a high RMSE indicates a lack of detail. To address the challenge of judging the approximation, the reference surface should be entirely known: this can be solved by printing a mathematically defined T-spline reference surface in three dimensions (3D) and modeling the artefacts induced by the 3D printing. Once scanned under different configurations, it is possible to assess the goodness of fit of the approximation for a noisy and potentially gappy point cloud and compare it with the traditional but less flexible NURBS. The advantages of T-spline local refinement open the door for further applications within a geodetic context, such as rigorous statistical testing of deformation. Two different scans from a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved without affecting the goodness of fit of the surface approximation by using the same mesh for the two epochs.
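SciPy has no T-spline or NURBS fitting, so the sketch below uses an ordinary smoothing B-spline surface as a stand-in to illustrate the RMSE-based over-/under-fitting trade-off described above; the synthetic surface, noise level, and smoothing factor are assumptions made only for this example.

```python
# Least-squares smoothing B-spline surface fit and RMSE assessment (stand-in
# for the T-spline/NURBS fitting discussed in the abstract).
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, (2, 2000))
z_true = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)       # known reference surface
z_noisy = z_true + rng.normal(0, 0.02, x.shape)               # simulated TLS noise

# The smoothing factor s controls the trade-off: too small fits the noise
# (low RMSE to noisy data), too large loses surface detail.
spline = SmoothBivariateSpline(x, y, z_noisy, s=len(x) * 0.02 ** 2)
rmse_to_noisy = np.sqrt(np.mean((spline.ev(x, y) - z_noisy) ** 2))
rmse_to_truth = np.sqrt(np.mean((spline.ev(x, y) - z_true) ** 2))
print(f"RMSE vs noisy data: {rmse_to_noisy:.4f}, vs reference: {rmse_to_truth:.4f}")
```

With a known (printed and scanned) reference surface, the second RMSE becomes computable for real data, which is exactly what the proposed benchmark enables.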


2021 ◽  
Vol 10 (3) ◽  
pp. 157
Author(s):  
Paul-Mark DiFrancesco ◽  
David A. Bonneau ◽  
D. Jean Hutchinson

Key to the quantification of rockfall hazard is an understanding of its magnitude-frequency behaviour. Remote sensing has allowed for the accurate observation of rockfall activity, with methods being developed for digitally assembling the monitored occurrences into a rockfall database. A prevalent challenge is the quantification of rockfall volume whilst fully considering the 3D information stored in each of the extracted rockfall point clouds. Surface reconstruction is utilized to construct a 3D digital surface representation, allowing for an estimation of the volume of space that a point cloud occupies. Given various point cloud imperfections, it is difficult for such methods to generate digital surface representations of rockfall with detailed geometry and correct topology. In this study, we tested four different computational-geometry-based surface reconstruction methods on a database comprising 3668 rockfalls. The database was derived from a 5-year LiDAR monitoring campaign of an active rock slope in interior British Columbia, Canada. Each method resulted in a different magnitude-frequency distribution of rockfall. The implications of 3D volume estimation were demonstrated utilizing surface mesh visualization, cumulative magnitude-frequency plots, power-law fitting, and projected annual frequencies of rockfall occurrence. The 3D volume estimation methods caused a notable shift in the magnitude-frequency relations, while the power-law scaling parameters remained relatively similar. We determined that the optimal 3D volume calculation approach is a hybrid methodology combining the Power Crust reconstruction and the Alpha Solid reconstruction. The Alpha Solid approach is to be used on small-scale point clouds characterized by high curvatures relative to their sampling density, which challenge the Power Crust sampling assumptions.
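A cumulative magnitude-frequency power-law fit of the kind mentioned above could be computed as in the sketch below; the synthetic volumes, the monitoring period, and the cutoff volume are illustrative only, not values from the study.

```python
# Power-law fit to a cumulative magnitude-frequency curve of rockfall volumes.
import numpy as np

rng = np.random.default_rng(1)
volumes = 0.01 * (1 - rng.uniform(size=3668)) ** (-1 / 1.5)   # Pareto-like toy sample (m^3)
years = 5.0                                                    # assumed monitoring period

v_sorted = np.sort(volumes)[::-1]
annual_freq = np.arange(1, len(v_sorted) + 1) / years          # N(>= V) per year

# Fit log10(freq) = log10(a) + b * log10(V) above an (assumed) cutoff volume.
cutoff = v_sorted >= 0.05
b, log_a = np.polyfit(np.log10(v_sorted[cutoff]), np.log10(annual_freq[cutoff]), 1)
print(f"power-law exponent b = {b:.2f}, intercept a = {10 ** log_a:.2f}")
```

Comparing such fits across the four reconstruction methods shows how the volume estimator shifts the curve while the scaling exponent stays relatively stable.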


Author(s):  
W. Ostrowski ◽  
M. Pilarska ◽  
J. Charyton ◽  
K. Bakuła

Creating 3D building models on a large scale is becoming increasingly popular and finds many applications. Nowadays, the broad term "3D building models" can be applied to several types of products: the well-known CityGML solid models (available at a few Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for the inspection of datasets containing 3D models is presented. The proposed approach checks every building in a dataset against ALS point clouds, testing both accuracy and level of detail. The analysis of statistical parameters of normal heights between the reference point cloud and the tested planes, combined with segmentation of the point cloud, provides a tool that can indicate which buildings and which roof planes do not fulfil the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.
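A per-plane accuracy check along these lines could be sketched as below, assuming each roof plane is given by a point and a unit normal and that the ALS points belonging to it have already been segmented out; the acceptance threshold is an illustrative assumption.

```python
# Signed point-to-plane residual statistics for one roof plane vs. ALS points.
import numpy as np

def plane_residual_stats(als_points, plane_point, plane_normal, tol=0.10):
    """Return residual statistics and a pass/fail flag for one roof plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (als_points - plane_point) @ n            # signed distances in metres
    rmse = np.sqrt(np.mean(d ** 2))
    return {"mean": d.mean(), "std": d.std(), "rmse": rmse, "ok": rmse <= tol}

pts = np.random.normal(0, 0.03, (500, 3)) + np.array([10.0, 5.0, 20.0])
stats = plane_residual_stats(pts, np.array([10.0, 5.0, 20.0]), np.array([0.0, 0.0, 1.0]))
print(stats)
```

Aggregating such statistics over all segmented planes flags the buildings and roof planes that fail the accuracy or detail requirements.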


Author(s):  
Zhengyu Chen ◽  
Dong Guan ◽  
Xiaojie Zhang ◽  
Ying Zhang ◽  
Suoqi Zhao ◽  
...  

The molecular conversion of complex mixtures involves a large number of species and reactions. The corresponding kinetic model consists of a series of ordinary differential equations (ODEs) with severe stiffness, leading to an exponentially growing computational time. To reduce the computational time, we proposed a mass-temperature decoupled discretization strategy for a large-scale molecular-level kinetic model. The method separates the mass balance and heat balance calculations in the rigorous adiabatic reactor model and divides the reactor into several isothermal segments. After discretization, the differential equations for the heat balance can be replaced by algebraic equations between nodes. We used a molecular-level diesel hydrotreating kinetic model as a case study to validate the proposed method. We investigated the effects of the temperature estimation method and the number of nodes on the accuracy of the model. Good agreement between the discretized model and the rigorous model was observed, while the computational time was significantly shortened.
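The decoupling idea can be illustrated with a toy first-order reaction: the mass balance is integrated at a frozen temperature within each segment, and the temperature is then updated algebraically between nodes. The kinetic and thermodynamic constants below are assumed for illustration; the real model involves thousands of molecular species, not a single lump.

```python
# Toy mass-temperature decoupled discretization for an adiabatic reactor.
import numpy as np
from scipy.integrate import solve_ivp

k0, Ea, R = 1.0e6, 50_000.0, 8.314   # Arrhenius parameters (assumed)
dH_over_cp = 40.0                    # adiabatic temperature rise per unit conversion, K (assumed)

def run_segmented(c0=1.0, T0=600.0, total_time=10.0, n_segments=20):
    c, T = c0, T0
    dt = total_time / n_segments
    for _ in range(n_segments):
        k = k0 * np.exp(-Ea / (R * T))                 # rate constant frozen in this isothermal segment
        sol = solve_ivp(lambda t, y: [-k * y[0]], (0, dt), [c])
        c_new = sol.y[0, -1]
        T += dH_over_cp * (c - c_new)                  # algebraic heat balance between nodes
        c = c_new
    return c, T

print(run_segmented())
```

Increasing the number of segments (nodes) brings the result closer to the rigorous coupled solution at a fraction of the stiffness-related cost.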


2019 ◽  
Vol 12 (1) ◽  
pp. 112 ◽  
Author(s):  
Dong Lin ◽  
Lutz Bannehr ◽  
Christoph Ulrich ◽  
Hans-Gerd Maas

Thermal imagery is widely used in various fields of remote sensing. In this study, a novel processing scheme is developed to process data acquired by the oblique airborne photogrammetric system AOS-Tx8, consisting of four thermal cameras and four RGB cameras, with the goal of large-scale thermal attribute mapping. In order to merge 3D RGB data and 3D thermal data, registration is conducted in four steps: First, thermal and RGB point clouds are generated independently by applying structure from motion (SfM) photogrammetry to both the thermal and RGB imagery. Next, a coarse point cloud registration is performed with the support of georeferencing data (global positioning system, GPS). Subsequently, a fine point cloud registration is conducted by octree-based iterative closest point (ICP). Finally, three different texture mapping strategies are compared. Experimental results showed that the global image pose refinement outperforms the other two strategies in terms of registration accuracy between the thermal imagery and the RGB point cloud. Potential building thermal leakages in large areas can be quickly detected in the generated texture mapping results. Furthermore, a combination of the proposed workflow and the oblique airborne system allows for a detailed thermal analysis of building roofs and facades.
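The coarse-to-fine registration step could be approximated with Open3D's standard point-to-point ICP as sketched below (the paper uses an octree-based ICP variant, which this sketch does not reproduce). The file paths, the identity initial transform, and the correspondence distance are placeholders.

```python
# Fine registration of a thermal SfM point cloud to an RGB SfM point cloud,
# starting from a coarse (GPS-based) alignment expressed as a 4x4 transform.
import numpy as np
import open3d as o3d

thermal = o3d.io.read_point_cloud("thermal_sfm.ply")   # placeholder path
rgb = o3d.io.read_point_cloud("rgb_sfm.ply")           # placeholder path
coarse_init = np.eye(4)                                 # from georeferencing (GPS)

result = o3d.pipelines.registration.registration_icp(
    thermal, rgb,
    max_correspondence_distance=0.5,                    # metres, assumed
    init=coarse_init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print(result.fitness, result.inlier_rmse)
thermal.transform(result.transformation)                # apply the refined pose
```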


2020 ◽  
Vol 12 (1) ◽  
pp. 178 ◽  
Author(s):  
Jinming Zhang ◽  
Xiangyun Hu ◽  
Hengming Dai ◽  
ShenRun Qu

It is difficult to extract a digital elevation model (DEM) from an airborne laser scanning (ALS) point cloud in a forest area because of the irregular and uneven distribution of ground and vegetation points. Machine learning, especially deep learning, has shown powerful feature extraction capabilities for point cloud classification. However, most existing deep learning frameworks, such as PointNet, the dynamic graph convolutional neural network (DGCNN), and SparseConvNet, cannot account for the particularities of ALS point clouds. For large-scene laser point clouds, current data preprocessing methods are mostly based on random sampling, which is not suitable for DEM extraction tasks. In this study, we propose a novel data sampling algorithm, named T-Sampling, for the data preparation of patch-based training and classification. T-Sampling uses the lowest points within a given area as basic points and supplements them with additional points, which guarantees the integrity of the terrain in the sampling area. In the learning part, we propose a new terrain-based convolution model, named Tin-EdgeConv, that fully considers the spatial relationship between ground and non-ground points when constructing a directed graph. We design a new network based on Tin-EdgeConv to extract local features and use the PointNet architecture to extract global context information. Finally, we combine this information effectively with a designed attention fusion module. These aspects are important in achieving high classification accuracy. We evaluate the proposed method using large-scale data from forest areas. Results show that our method is more accurate than existing algorithms.
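A rough sketch of the lowest-point idea behind T-Sampling is given below, using a simple 2D grid: the lowest point of each cell is always kept, and the sample is topped up with random additional points. The grid size and the number of supplementary points are illustrative, not the authors' settings.

```python
# Terrain-preserving sampling: lowest point per grid cell plus random extras.
import numpy as np

def lowest_point_sampling(points, cell=2.0, n_extra=4096, seed=0):
    """Return indices of the lowest point per cell plus random supplementary points."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    keys = ij[:, 0] * 1_000_003 + ij[:, 1]              # one integer key per grid cell
    order = np.lexsort((points[:, 2], keys))             # sort by cell, then by height
    _, first = np.unique(keys[order], return_index=True)
    basic = order[first]                                  # lowest point in each cell

    rng = np.random.default_rng(seed)
    rest = np.setdiff1d(np.arange(len(points)), basic)
    extra = rng.choice(rest, size=min(n_extra, len(rest)), replace=False)
    return np.concatenate([basic, extra])

pts = np.random.rand(100_000, 3) * [100, 100, 30]
sample_idx = lowest_point_sampling(pts)
```

Keeping the lowest point per cell preserves the terrain surface in every patch, which random sampling cannot guarantee.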


2020 ◽  
Vol 12 (11) ◽  
pp. 1875 ◽  
Author(s):  
Jingwei Zhu ◽  
Joachim Gehrung ◽  
Rong Huang ◽  
Björn Borgmann ◽  
Zhenghao Sun ◽  
...  

In the past decade, a vast number of strategies, methods, and algorithms have been developed to explore the semantic interpretation of 3D point clouds for extracting desirable information. To assess the performance of the developed algorithms or methods, public standard benchmark datasets must be introduced and used, serving as a common yardstick for evaluation and comparison. In this work, we introduce and present large-scale Mobile LiDAR point clouds acquired at the city campus of the Technical University of Munich, which have been manually annotated and can be used for the evaluation of related algorithms and methods for semantic point cloud interpretation. We created three datasets from a measurement campaign conducted in April 2016: a benchmark dataset for semantic labeling, test data for instance segmentation, and annotated single 360° laser scans. These datasets cover approximately 1 km of urban roadways and include more than 40 million annotated points labeled with eight object classes. Moreover, experiments were carried out, and the results from several baseline methods were compared and analyzed, revealing the quality of this dataset and its effectiveness when used for performance evaluation.
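The abstract does not name the evaluation metric, but per-class intersection over union (IoU) is a common choice for semantic-labeling benchmarks of this kind; the sketch below is illustrative only, with random labels standing in for real predictions and ground truth over the eight classes.

```python
# Per-class IoU for an 8-class semantic labeling benchmark (illustrative).
import numpy as np

def per_class_iou(pred, gt, num_classes=8):
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        ious.append(inter / union if union > 0 else np.nan)
    return np.array(ious)

pred = np.random.randint(0, 8, 1_000_000)
gt = np.random.randint(0, 8, 1_000_000)
ious = per_class_iou(pred, gt)
print(ious, np.nanmean(ious))    # per-class IoU and mean IoU
```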


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6815
Author(s):  
Cheng Yi ◽  
Dening Lu ◽  
Qian Xie ◽  
Jinxuan Xu ◽  
Jun Wang

Global inspection of large-scale tunnels is a fundamental yet challenging task for ensuring the structural stability of tunnels and driving safety. Advanced LiDAR scanners, which sample tunnels into 3D point clouds, are making their debut in Tunnel Deformation Inspection (TDI). However, the acquired raw point clouds inevitably contain noticeable occlusions, missing areas, and noise/outliers. Considering the tunnel as a geometrical sweeping feature, we propose an effective tunnel deformation inspection algorithm that extracts the global spatial axis from the poor-quality raw point cloud. Essentially, we convert tunnel axis extraction into an iterative fitting optimization problem. Specifically, given the scanned raw point cloud of a tunnel, the initial design axis is sampled to generate a series of normal planes within the corresponding Frenet frame, and those planes are then intersected with the tunnel point cloud to yield a sequence of cross sections. Each cross section is fitted with a circle, and the fitted circle centers are approximated with a B-Spline curve, which is taken as the updated axis. The procedure of "circle fitting and B-Spline approximation" repeats iteratively until convergence, that is, until the distance of each fitted circle center to the current axis is smaller than a given threshold. By this means, the spatial axis of the tunnel can be accurately obtained. Subsequently, according to the practical mechanism of tunnel deformation, we design a segmentation approach that partitions cross sections into meaningful pieces, based on which various inspection parameters regarding tunnel deformation can be automatically computed. A variety of practical experiments have demonstrated the feasibility and effectiveness of our inspection method.
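One "circle fitting and B-Spline approximation" iteration could be sketched as below, using an algebraic (Kasa) least-squares circle fit per cross section and SciPy's splprep for the axis curve; the toy cross sections, tunnel radius, and smoothing factor are illustrative, and the extraction of sections via Frenet-frame normal planes is omitted.

```python
# Circle fitting of tunnel cross sections and B-spline approximation of the centres.
import numpy as np
from scipy.interpolate import splprep, splev

def fit_circle_2d(yz):
    """Algebraic (Kasa) least-squares circle fit; returns centre and radius."""
    A = np.c_[2 * yz[:, 0], 2 * yz[:, 1], np.ones(len(yz))]
    b = (yz ** 2).sum(axis=1)
    cy, cz, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([cy, cz]), np.sqrt(c + cy ** 2 + cz ** 2)

# Toy cross sections: noisy rings in planes normal to a straight initial axis,
# one ring per station x along the tunnel.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
centres = []
for x in np.linspace(0, 50, 30):
    ring = np.column_stack([5.3 * np.cos(t), 5.3 * np.sin(t)]) + rng.normal(0, 0.02, (200, 2))
    centres.append(np.r_[x, fit_circle_2d(ring)[0]])
centres = np.array(centres)

# Approximate the fitted circle centres with a smoothing B-spline: the updated axis.
tck, _ = splprep(centres.T, s=1e-2)
axis_samples = np.column_stack(splev(np.linspace(0, 1, 300), tck))
```

In the full algorithm this update repeats, regenerating the cross sections from the new axis, until every centre lies within the convergence threshold of the current axis.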

