Adaptive Denoising Algorithm for Scanning Beam Points Based on Angle Thresholds

2015 ◽  
Vol 741 ◽  
pp. 204-208
Author(s):  
Ting Jian Dong ◽  
Hua Peng Ding ◽  
Tao Wang ◽  
Hao Wang ◽  
Jin Chen

A local adaptive neighborhood model is proposed in this paper to address the misjudgment problems of existing scanning beam point cloud denoising algorithms. The model treats points of larger curvature as potential noise and adaptively selects the angle thresholds for noise points and the median values of the filtering windows, thereby addressing both false and missed detections in point clouds with varying curvature. The adaptive scheme in the angle-threshold denoising algorithm separates noise points from data points; it therefore ensures smoothness in low-frequency regions while preserving high-frequency characteristics. The new method improves the accuracy of median filtering, prevents the diffusion of noise, removes noise effectively while preserving sharp features, and avoids blurring the data edges.
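
As a concrete illustration of the angle-threshold idea (the paper's exact threshold-selection and window rules are not given in the abstract), here is a minimal Python sketch assuming 2D scan-line points ordered along the beam; the robust median/MAD rule and the window size are illustrative choices, not the authors' method:

```python
import numpy as np

def adaptive_angle_denoise(points, window=7, k=3.0):
    """Flag scan-line points whose bend angle deviates strongly from the
    local statistics, then replace them with the median of their window.
    `points` is an (N, 2) array of consecutive scanning-beam samples."""
    p_prev, p_cur, p_next = points[:-2], points[1:-1], points[2:]
    v1 = p_prev - p_cur
    v2 = p_next - p_cur
    cosang = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-12)
    angles = np.arccos(np.clip(cosang, -1.0, 1.0))

    # Adaptive threshold: sharp bends (small angles) far below the robust
    # median are treated as potential noise.
    med = np.median(angles)
    mad = np.median(np.abs(angles - med)) + 1e-12
    noisy = np.where(angles < med - k * mad)[0] + 1  # +1: angles start at index 1

    denoised = points.copy()
    half = window // 2
    for i in noisy:
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        denoised[i] = np.median(points[lo:hi], axis=0)  # window median
    return denoised
```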

Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1573 ◽  
Author(s):  
Haojie Liu ◽  
Kang Liao ◽  
Chunyu Lin ◽  
Yao Zhao ◽  
Meiqin Liu

LiDAR sensors can provide dependable 3D spatial information at a low frequency (around 10 Hz) and have been widely applied in autonomous driving and unmanned aerial vehicles (UAVs). However, in a multi-sensor system the higher frame rate of the camera (around 20 Hz) has to be reduced to match the LiDAR. In this paper, we propose a novel Pseudo-LiDAR interpolation network (PLIN) to increase the frequency of LiDAR sensor data. PLIN can generate temporally and spatially high-quality point cloud sequences to match the high frequency of cameras. To achieve this goal, we design a coarse interpolation stage guided by consecutive sparse depth maps and motion relationships, and a refined interpolation stage guided by the realistic scene. Using this coarse-to-fine cascade structure, our method can progressively perceive multi-modal information and generate accurate intermediate point clouds. To the best of our knowledge, this is the first deep framework for Pseudo-LiDAR point cloud interpolation, which has appealing applications in navigation systems equipped with both LiDAR and cameras. Experimental results demonstrate that PLIN achieves promising performance on the KITTI dataset, significantly outperforming the traditional interpolation method and a state-of-the-art video interpolation technique.
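
The network details are beyond the abstract; the PyTorch sketch below only illustrates the coarse-to-fine cascade described above. The module structure, channel widths, and input conventions (two sparse depth maps, a two-channel motion cue such as optical flow, and the intermediate RGB frame) are assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class PLINSketch(nn.Module):
    """Minimal coarse-to-fine skeleton in the spirit of PLIN (not the
    paper's network): a coarse stage driven by two sparse depth maps
    and a motion cue, refined with the RGB frame of the scene."""
    def __init__(self, ch=32):
        super().__init__()
        # Coarse stage: 1+1 sparse depth channels + 2-channel motion cue.
        self.coarse = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))
        # Refinement stage: coarse depth + 3-channel RGB guidance.
        self.refine = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, depth_t0, depth_t1, motion, rgb_mid):
        coarse = self.coarse(torch.cat([depth_t0, depth_t1, motion], 1))
        refined = self.refine(torch.cat([coarse, rgb_mid], 1))
        return coarse, refined
```

The refined intermediate depth map would then be back-projected through the camera intrinsics to obtain the Pseudo-LiDAR point cloud.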


2021 ◽  
Vol 16 (4) ◽  
pp. 579-587
Author(s):  
Pitisit Dillon ◽  
Pakinee Aimmanee ◽  
Akihiko Wakai ◽  
Go Sato ◽  
Hoang Viet Hung ◽  
...  

The density-based spatial clustering of applications with noise (DBSCAN) algorithm is a well-known algorithm for spatially clustering data point clouds. It can be applied to many applications, such as crack detection, rockfall detection, and glacier movement detection. Traditional DBSCAN requires two predefined parameters (the neighborhood radius and the minimum number of points), and suitable values depend on the distribution of the input point cloud, so estimating them is challenging. This paper proposes a new version of DBSCAN that can automatically customize the parameters. The proposed method consists of two processes: initial parameter estimation based on grid analysis, and DBSCAN based on a divide-and-conquer (DC-DBSCAN) approach, which repeatedly performs DBSCAN on each cluster separately and recursively. To verify the proposed method, we applied it to a 3D point cloud dataset used to analyze rockfall events at the Puigcercós cliff, Spain. The total number of data points used in this study was 15,567. The experimental results show that the proposed method is better than traditional DBSCAN in terms of purity and NMI scores. The purity scores of the proposed method and traditional DBSCAN were 96.22% and 91.09%, respectively, and the NMI scores were 0.78 and 0.49, respectively. The proposed method can also detect events that traditional DBSCAN cannot.
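
A minimal sketch of the divide-and-conquer idea using scikit-learn's DBSCAN: a radius is first guessed from a grid analysis, then DBSCAN is re-run recursively inside each cluster with a locally re-estimated radius. The grid-based eps estimate and the recursion limits are simplified assumptions, not the paper's exact rules:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def estimate_eps(points, grid_size=1.0):
    """Grid-based parameter guess (stand-in for the paper's grid
    analysis): eps from the mean point spacing in occupied cells."""
    _, counts = np.unique(np.floor(points / grid_size).astype(int),
                          axis=0, return_counts=True)
    mean_density = counts.mean()                    # points per occupied cell
    return (grid_size**3 / mean_density) ** (1 / 3)  # rough 3D spacing

def dc_dbscan(points, min_pts=10, depth=0, max_depth=3):
    """Divide-and-conquer DBSCAN: cluster, then re-run DBSCAN on each
    cluster with parameters re-estimated from that cluster alone."""
    labels = DBSCAN(eps=estimate_eps(points),
                    min_samples=min_pts).fit_predict(points)
    result = np.full(len(points), -1)
    next_id = 0
    for lab in set(labels) - {-1}:
        idx = np.where(labels == lab)[0]
        if depth < max_depth and len(idx) > 2 * min_pts:
            sub = dc_dbscan(points[idx], min_pts, depth + 1, max_depth)
            for s in set(sub) - {-1}:
                result[idx[sub == s]] = next_id
                next_id += 1
        else:
            result[idx] = next_id
            next_id += 1
    return result
```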


Author(s):  
Lindsay MacDonald ◽  
Isabella Toschi ◽  
Erica Nocerino ◽  
Mona Hess ◽  
Fabio Remondino ◽  
...  

The accuracy of 3D surface reconstruction by two methods, photometric stereo and improved structure-from-motion (SfM), was compared using image sets of a Metric Test Object taken in an illumination dome, with point cloud data from a 3D colour laser scanner as the reference. Metrics included pointwise height differences over the digital elevation model (DEM) and 3D Euclidean differences between corresponding points. The enhancement of spatial detail was investigated by blending high-frequency detail from the photometric normals, after a Poisson surface reconstruction, with low-frequency detail from a DEM derived from SfM.
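
The frequency-blending step lends itself to a short sketch. Assuming two registered DEMs on the same grid, one from SfM and one integrated from photometric-stereo normals (e.g. via Poisson reconstruction), a Gaussian split can combine the low band of the former with the high band of the latter; the crossover scale `sigma` is an illustrative parameter, not the paper's setting:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_dems(dem_sfm, dem_ps, sigma=8.0):
    """Keep the low frequencies of the SfM-derived DEM and the high
    frequencies of the photometric-stereo DEM; sigma (in pixels) sets
    the crossover scale between the two bands."""
    low = gaussian_filter(dem_sfm, sigma)            # trustworthy base shape
    high = dem_ps - gaussian_filter(dem_ps, sigma)   # fine surface detail
    return low + high
```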


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Diqun Yan ◽  
Yongkang Gong ◽  
Tianyun Liu

Resampling is an operation that converts a digital speech signal from one sampling rate to another. It can be used to interface two systems with different sampling rates. Unfortunately, resampling may also be intentionally applied as a post-operation to remove the manipulation artifacts left by pitch shifting, splicing, and similar edits. Several forensic detectors have been proposed to detect resampling; little consideration, however, has been given to the security of these detectors themselves. To expose weaknesses of these resampling detectors and hide the resampling artifacts, a dual-path resampling antiforensic framework is proposed in this paper. In the proposed framework, 1D median filtering is applied to the low-frequency component to destroy the linear correlation between adjacent speech samples introduced by resampling, while Gaussian white noise perturbation (GWNP) is applied to the high-frequency component to destroy the periodic resampling traces. The experimental results show that the proposed method successfully deceives the existing resampling forensic algorithms while keeping good perceptual quality of the resampled speech.
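
A minimal sketch of the dual-path idea with SciPy: the resampled speech is split into low- and high-frequency bands, the low band is median-filtered, and white Gaussian noise perturbs the high band before recombination. The cutoff, kernel size, and noise level are illustrative guesses, not the paper's settings:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, medfilt

def dual_path_antiforensic(x, fs, cutoff=2000.0, kernel=3, noise_std=1e-3):
    """Dual-path antiforensic sketch: split the resampled speech x at
    `cutoff` Hz, median-filter the low band, add GWNP to the high band,
    and recombine."""
    sos_lo = butter(4, cutoff, btype="low", fs=fs, output="sos")
    sos_hi = butter(4, cutoff, btype="high", fs=fs, output="sos")
    low = sosfiltfilt(sos_lo, x)
    high = sosfiltfilt(sos_hi, x)

    low = medfilt(low, kernel_size=kernel)   # break linear sample correlation
    high = high + np.random.normal(0.0, noise_std, size=high.shape)  # GWNP

    return low + high
```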


Author(s):  
Lee J. Wells ◽  
Mohammed S. Shafae ◽  
Jaime A. Camelio

Ever-advancing sensor and measurement technologies continually provide new opportunities for knowledge discovery and quality control (QC) strategies for complex manufacturing systems. One such state-of-the-art measurement technology currently being implemented in industry is the 3D laser scanner, which can rapidly provide millions of data points to represent an entire manufactured part's surface. This gives 3D laser scanners a significant advantage over competing technologies that typically provide tens or hundreds of data points. Consequently, data collected from 3D laser scanners have great potential to be used for inspecting parts for surface and feature abnormalities. The current use of 3D point clouds for part inspection falls into two main categories: 1) extracting feature parameters, which does not complement the nature of 3D point clouds as it discards valuable data, and 2) an ad hoc manual process in which a visual representation of the point cloud (usually as deviations from nominal) is analyzed, which tends to produce slow, inefficient, and inconsistent inspection results. Our paper therefore proposes an approach that automates the latter category of 3D point cloud inspection. The proposed approach uses a newly developed adaptive generalized likelihood ratio (AGLR) technique to identify the most likely size, shape, and magnitude of a potential fault within the point cloud, which transforms the ad hoc visual inspection approach into a statistically viable automated inspection solution. To aid practitioners in designing and implementing an AGLR-based inspection process, our paper also reports the performance of the AGLR with respect to the probability of detecting faults of specific sizes and magnitudes, in addition to the probability of false alarms.
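
A much-simplified sketch of a generalized likelihood ratio scan over a deviation map (nominal-to-actual distances on a grid), assuming Gaussian in-control noise with known sigma. The paper's AGLR adapts fault size, shape, and magnitude; this sketch only scans square windows of a few candidate sizes:

```python
import numpy as np

def glr_scan(dev_map, window_sizes=(5, 9, 15), sigma=0.05):
    """Scan a 2D deviation map for the most likely mean-shift fault.
    For a window of n cells the GLR statistic for a mean shift is
    n * mean^2 / sigma^2; the scan returns the best window found."""
    best = (-np.inf, None, None)  # (statistic, center, window size)
    H, W = dev_map.shape
    for w in window_sizes:
        h = w // 2
        for i in range(h, H - h):
            for j in range(h, W - h):
                patch = dev_map[i - h:i + h + 1, j - h:j + h + 1]
                stat = patch.size * patch.mean() ** 2 / sigma ** 2
                best = max(best, (stat, (i, j), w), key=lambda t: t[0])
    return best  # compare the statistic against a false-alarm threshold
```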


2021 ◽  
Vol 30 ◽  
pp. 126-130
Author(s):  
Jan Voříšek ◽  
Bořek Patzák ◽  
Edita Dvořáková ◽  
Daniel Rypl

Laser scanning is widely used in architecture and construction to document existing buildings by providing accurate data for creating a 3D model. The output is a set of data points in space, a so-called point cloud. While point clouds can be directly rendered and inspected, they do not hold any semantics. Typically, engineers manually derive floor plans, structural models, or the whole BIM model, which is a very time-consuming task for large building projects. In this contribution, we present the design and concept of the PointCloud2BIM library [1]. It provides a set of algorithms for automated or user-assisted detection of fundamental entities in scanned point cloud data sets, such as floors, rooms, walls, and openings, and for identification of the mutual relationships between them. The entity detection relies on a reasonable degree of human input (e.g., the expected wall thickness). The results reside in a platform-agnostic JSON database, allowing future integration with any existing BIM software.
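
The library's detection algorithms are not spelled out here; as one hedged illustration of how a fundamental entity might be found, the following sketch guesses floor-slab elevations from peaks in the z-coordinate histogram of a building point cloud (a common heuristic, not necessarily PointCloud2BIM's method):

```python
import numpy as np

def detect_floor_levels(points, bin_size=0.05, min_frac=0.02):
    """Guess floor/ceiling elevations from an (N, 3) building point
    cloud: horizontal slabs show up as strong peaks in the histogram
    of z-coordinates."""
    z = points[:, 2]
    hist, edges = np.histogram(
        z, bins=np.arange(z.min(), z.max() + bin_size, bin_size))
    threshold = min_frac * len(z)  # a peak must hold this many points
    peaks = [0.5 * (edges[i] + edges[i + 1])
             for i in range(len(hist))
             if hist[i] > threshold
             and hist[i] >= hist[max(i - 1, 0)]
             and hist[i] >= hist[min(i + 1, len(hist) - 1)]]
    return peaks  # candidate slab elevations in metres
```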


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Linh Truong-Hong ◽  
Roderik Lindenbergh ◽  
Thu Anh Nguyen

Purpose: Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimation strongly depend on the quality of each step of the workflow, which has not been fully addressed. This study aims to give insight into the errors of these steps, and the results would serve as guidelines for practitioners to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. The main contributions of the paper are: investigating how point cloud registration error affects the resulting deformation estimation, identifying an appropriate segmentation method for extracting the data points of a deformed surface, investigating a methodology to determine an un-deformed or reference surface for estimating deformation, and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation.
Design/methodology/approach: In practice, the quality of the point clouds and of the surface extraction strongly impacts the resulting deformation estimation, which can lead to an incorrect decision on the state of the structure when uncertainty is present. To gain more comprehensive insight into those impacts, this study addresses four issues: data errors due to registration from multiple scanning stations (Issue 1), methods used to extract point clouds of structure surfaces (Issue 2), selection of the reference surface Sref against which deformation is measured (Issue 3), and the presence of outliers and/or mixed pixels (Issue 4). The investigation is demonstrated by estimating the deformation of a bridge abutment, a building, and an oil storage tank.
Findings: The study shows that both random sample consensus (RANSAC) and region-growing-based methods [cell-based/voxel-based region growing (CRG/VRG)] can extract the data points of surfaces, but RANSAC is only applicable to a primitive surface (e.g. a plane in this study) subjected to a small deformation (case studies 2 and 3) and cannot eliminate mixed pixels. CRG and VRG, on the other hand, are suitable for deformed, free-form surfaces. In addition, in practice a reference surface of a structure is mostly not available, and using a plane fitted to the point cloud of the current surface would yield unrealistic and inaccurate deformation, because outliers and data points of damaged areas affect the accuracy of the fitted plane. This study therefore recommends a reference surface determined from the design concept/specification. A smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation.
Research limitations/implications: Owing to logistical difficulties, an independent measurement could not be established to assess the deformation accuracy based on the TLS point clouds in the case studies of this research. However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with accuracy in the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy.
Practical implications: This study gives insight into the errors of these steps, and the results would serve as guidelines for practitioners to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds.
Social implications: The results of this study would provide guidelines for practitioners to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. A low-cost method can be applied for deformation analysis of structures.
Originality/value: Although many studies have used laser scanning to measure structural deformation over the last two decades, the methods applied mainly measured change between two states (or epochs) of the structure surface and focused on quantifying deformation from TLS point clouds. Those studies proved that a laser scanner can be an alternative instrument for acquiring spatial information for deformation monitoring. However, there are still challenges in establishing an appropriate procedure to collect high-quality point clouds and in developing methods to interpret the point clouds to obtain reliable and accurate deformation when uncertainty, including data quality and reference information, is present. This study therefore demonstrates the impact on deformation estimation of data quality in terms of point cloud registration error, of the methods selected for extracting point clouds of surfaces, of the identification of reference information, and of the presence of outliers, noisy data and/or mixed pixels.
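
Two steps of such a workflow are easy to illustrate: fitting a reference plane and reducing point-to-plane distances with a spatial smoothing interval. The sketch below uses a least-squares plane and per-cell medians; the grid size and conventions are illustrative, not the paper's exact procedure:

```python
import numpy as np

def fit_plane_lsq(points):
    """Least-squares plane through an (N, 3) cloud: (centroid, normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]  # normal = direction of least variance

def deformation_to_reference(points, ref_point, ref_normal, grid=0.1):
    """Signed point-to-plane distances to a reference surface, smoothed
    with a spatial interval: points are binned on a `grid` in the plane
    and each cell is reduced to its median distance, which damps
    outliers and mixed pixels."""
    d = (points - ref_point) @ ref_normal          # signed distances
    # Build 2D in-plane coordinates for binning.
    u = np.cross(ref_normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-8:                   # normal nearly vertical
        u = np.array([1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(ref_normal, u)
    uv = np.stack([(points - ref_point) @ u, (points - ref_point) @ v], 1)
    cells, inverse = np.unique(np.floor(uv / grid).astype(int),
                               axis=0, return_inverse=True)
    smoothed = np.array([np.median(d[inverse == k])
                         for k in range(len(cells))])
    return cells * grid, smoothed  # cell coordinates, median deformation
```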


Author(s):  
Mojahed Alkhateeb ◽  
Jeremy L. Rickli ◽  
Nicholas J. Christoforou

Abstract A point cloud is a digital representation of a part consisting of a set of data points in space. Typically, point clouds are produced by 3D scanners that pass over a part and record a large number of points representing its external surface. Additive remanufacturing offers a sustainable solution to end-of-use (EoU) core disposal and recovery, and it requires quantification of the part damage or wear that needs reprocessing. This paper proposes an error propagation approach that models the interaction of each step of the additive remanufacturing process. The proposed model is formulated, and the errors arising from the scanner parameters and from point cloud smoothing are presented. Smoothing is an important step for reducing the noise generated by scanning; choosing the right smoothing factor matters, since over-smoothing results in dimensional inaccuracies and errors, especially in cores with small degrees of damage. The error introduced by scanning and smoothing must be known so that it can be compensated in the following steps and appropriate material deposition paths can be generated. Inaccuracies in the reconstructed 3D model can impact the accuracy of the remainder of the additive remanufacturing process, especially because the process involves multiple steps. Sources of error from smoothing, meshing, slicing, and material deposition are proposed in the error propagation model for additive remanufacturing, and results of efforts to quantify the scanning and smoothing steps within this model are presented.
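
To make the over-smoothing trade-off concrete, here is a plain Laplacian smoothing sketch (one common choice; the paper's smoothing method is not specified here). Increasing the factor or iteration count suppresses scan noise but also shrinks geometry, which is exactly the dimensional error discussed above:

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, factor=0.5, iterations=10):
    """Laplacian smoothing of a scanned neighborhood graph: each vertex
    moves toward the centroid of its neighbors by `factor` per pass.
    `vertices` is (N, 3); `neighbors[i]` lists the indices adjacent to i.
    Larger factor/iterations smooth more but shrink the part."""
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        centroids = np.array([v[nbrs].mean(axis=0) for nbrs in neighbors])
        v += factor * (centroids - v)
    return v
```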


2015 ◽  
Vol 3 (2) ◽  
pp. 102-111 ◽  
Author(s):  
Kai Wah Lee ◽  
Pengbo Bo

Abstract In this paper, we study the problem of computing smooth feature curves from CAD-type point cloud models. The proposed method reconstructs feature curves from the intersections of pairs of developable strips that approximate the regions on either side of a feature. The generation of developable surfaces is based on a linear approximation of the given point cloud through a variational shape approximation approach. A line segment sequencing algorithm is proposed for collecting feature line segments into distinct feature sequences, as well as into sequential groups of data points. A developable surface approximation procedure is employed to refine the incident approximation planes of the data points into developable strips. Experimental results are included to demonstrate the performance of the proposed method.
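
Reducing the idea to its simplest form: if each developable strip is replaced by a single fitted plane, the feature curve degenerates to the intersection line of two planes. The sketch below computes that line; it is a heavily simplified stand-in for the paper's developable-strip intersection:

```python
import numpy as np

def plane_intersection_line(p1, n1, p2, n2):
    """Intersection line of two planes given as (point, unit normal)
    pairs, returned as (point_on_line, unit direction)."""
    n1, n2, p1, p2 = map(np.asarray, (n1, n2, p1, p2))
    d = np.cross(n1, n2)                 # line direction
    if np.linalg.norm(d) < 1e-10:
        raise ValueError("planes are parallel; no unique intersection")
    # Solve both plane equations plus d.x = 0 to pin one point on the line.
    A = np.array([n1, n2, d])
    b = np.array([np.dot(n1, p1), np.dot(n2, p2), 0.0])
    point = np.linalg.solve(A, b)
    return point, d / np.linalg.norm(d)
```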

