Reflective Noise Filtering of Large-Scale Point Cloud Using Multi-Position LiDAR Sensing Data

2021 ◽  
Vol 13 (16) ◽  
pp. 3058
Author(s):  
Rui Gao ◽  
Jisun Park ◽  
Xiaohang Hu ◽  
Seungjun Yang ◽  
Kyungeun Cho

Signals such as point clouds captured by light detection and ranging (LiDAR) sensors are often affected by highly reflective objects, including specular opaque and transparent materials such as glass, mirrors, and polished metal, which produce reflection artifacts and thereby degrade the performance of associated computer vision techniques. Traditional noise filtering methods for point clouds detect noise by considering the distribution of neighboring points. However, the noise generated by reflective areas is quite dense and cannot be removed on the basis of point distribution alone. Therefore, this paper proposes a noise removal method that detects dense noise points caused by reflective objects through multi-position sensing data comparison. The proposed method consists of three steps. First, the point cloud data are converted into range images of depth and reflective intensity. Second, the reflective area is detected using a sliding window on the two converted range images. Finally, noise is filtered by comparing the detected reflective areas against data from neighboring sensor positions. Experimental results demonstrate that, unlike conventional methods, the proposed method can better filter dense and large-scale noise caused by reflective objects. In future work, we will attempt to add the RGB image to improve the accuracy of noise detection.
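The first step above, converting a LiDAR point cloud into depth and reflective-intensity range images, can be sketched as a spherical projection. The sensor geometry below (row/column resolution and vertical field of view) is an assumption for illustration, not a figure from the paper:

```python
import numpy as np

def to_range_images(points, intensities, h=64, w=1024,
                    fov_up=np.deg2rad(15.0), fov_down=np.deg2rad(-25.0)):
    """Project an (N, 3) point cloud into depth and reflective-intensity
    range images via spherical coordinates (assumed sensor geometry)."""
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])          # azimuth, [-pi, pi]
    pitch = np.arcsin(np.clip(points[:, 2] / np.maximum(depth, 1e-9), -1, 1))
    keep = (pitch <= fov_up) & (pitch >= fov_down)        # inside vertical FOV
    depth, yaw, pitch = depth[keep], yaw[keep], pitch[keep]
    inten = np.asarray(intensities)[keep]
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = np.clip((fov_up - pitch) / (fov_up - fov_down) * h, 0, h - 1).astype(int)
    depth_img = np.zeros((h, w))
    inten_img = np.zeros((h, w))
    order = np.argsort(-depth)        # write far points first: nearest return wins
    depth_img[v[order], u[order]] = depth[order]
    inten_img[v[order], u[order]] = inten[order]
    return depth_img, inten_img
```

The sliding-window reflection detection would then operate on these two aligned images.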

2022 ◽  
Vol 14 (2) ◽  
pp. 367
Author(s):  
Zhen Zheng ◽  
Bingting Zha ◽  
Yu Zhou ◽  
Jinbo Huang ◽  
Youshi Xuchen ◽  
...  

This paper proposes a single-stage adaptive multi-scale noise filtering algorithm for point clouds based on feature information, which addresses the difficulty current laser point cloud noise filtering algorithms have in quickly completing single-stage adaptive filtering of multi-scale noise. The feature information of each point is obtained using an efficient k-dimensional (k-d) tree data structure and an amended normal vector estimation method, and adaptive thresholds divide the point cloud into large-scale noise, a feature-rich region, and a flat region to reduce computational time. The large-scale noise is removed directly; the feature-rich and flat regions are filtered via an improved bilateral filtering algorithm and a weighted average filtering algorithm based on grey relational analysis, respectively. Simulation results show that the proposed algorithm outperforms the state-of-the-art comparison algorithms. It was thus verified that the proposed algorithm can quickly and adaptively (i) filter out large-scale noise, (ii) smooth small-scale noise, and (iii) effectively preserve the geometric features of the point cloud. The developed algorithm provides a research direction for filtering pre-processing methods applicable to 3D measurement, remote sensing, and point-cloud-based target recognition.
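The k-d tree feature extraction and three-way region partition described above can be sketched with local PCA over each point's neighbourhood. The surface-variation feature and the quantile-based thresholds below are simplified stand-ins for the paper's amended normal vector estimation and adaptive thresholds:

```python
import numpy as np
from scipy.spatial import cKDTree

def partition_by_feature(points, k=16, noise_q=0.95, feature_q=0.75):
    """Split a point cloud into flat (0), feature-rich (1), and
    large-scale-noise (2) regions using k-d tree neighbourhoods and
    local PCA (a simplified sketch; thresholds are quantile-based)."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)      # self + k neighbours
    nbrs = points[idx[:, 1:]]                     # (N, k, 3)
    centered = nbrs - nbrs.mean(axis=1, keepdims=True)
    cov = np.einsum('nki,nkj->nij', centered, centered) / k
    eigvals = np.linalg.eigvalsh(cov)             # ascending, per point
    # Surface variation: small on flat patches, large near edges/corners.
    variation = eigvals[:, 0] / np.maximum(eigvals.sum(axis=1), 1e-12)
    mean_dist = dists[:, 1:].mean(axis=1)         # isolation measure
    labels = np.zeros(len(points), dtype=int)     # 0 = flat
    labels[variation > np.quantile(variation, feature_q)] = 1  # feature-rich
    labels[mean_dist > np.quantile(mean_dist, noise_q)] = 2    # large-scale noise
    return labels
```

The noise label would then trigger direct removal, while the other two regions are routed to their respective smoothing filters.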


Author(s):  
W. Ostrowski ◽  
M. Pilarska ◽  
J. Charyton ◽  
K. Bakuła

Creating 3D building models at large scale is becoming increasingly popular and finds many applications. Nowadays, the broad term “3D building models” covers several types of products: the well-known CityGML solid models (available at several Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining such 3D building models. Apart from the completeness of the models, accuracy is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for inspecting datasets containing 3D models is presented. The proposed approach checks every building in a dataset against ALS point clouds, testing both accuracy and level of detail. Analysing statistical parameters of normal heights between the reference point cloud and the tested planes, together with point cloud segmentation, provides a tool that indicates which buildings, and which roof planes, do not fulfil the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.
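The per-plane comparison against ALS points can be illustrated with signed normal-height statistics for one roof face. The SVD-based plane fit and the tolerance value below are illustrative assumptions, not figures from the paper:

```python
import numpy as np

def plane_residual_stats(model_plane_pts, als_pts, tolerance=0.10):
    """Fit a plane to a model roof face and report normal-height
    statistics of the ALS points assigned to it (illustrative sketch;
    the 0.10 m tolerance is an assumed accuracy requirement)."""
    centroid = model_plane_pts.mean(axis=0)
    # Plane normal = right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(model_plane_pts - centroid)
    normal = vt[-1]
    residuals = (als_pts - centroid) @ normal     # signed normal heights
    stats = {
        'mean': residuals.mean(),
        'std': residuals.std(),
        'rmse': np.sqrt((residuals ** 2).mean()),
    }
    stats['passes'] = stats['rmse'] <= tolerance
    return stats
```

Running this per segmented roof plane flags the faces whose ALS residuals exceed the accuracy requirement.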


2020 ◽  
Vol 6 (9) ◽  
pp. 94
Author(s):  
Magda Alexandra Trujillo-Jiménez ◽  
Pablo Navarro ◽  
Bruno Pazos ◽  
Leonardo Morales ◽  
Virginia Ramallo ◽  
...  

Current point cloud extraction methods based on photogrammetry generate large amounts of spurious detections that hamper useful 3D mesh reconstructions or, even worse, the possibility of adequate measurements. Moreover, noise removal methods for point clouds are complex, slow, and incapable of coping with semantic noise. In this work, we present body2vec, a model-based body segmentation tool that uses a specifically trained neural network architecture. Body2vec is capable of performing human body point cloud reconstruction from videos taken on hand-held devices (smartphones or tablets), achieving high-quality anthropometric measurements. The main contribution of the proposed workflow is a background removal step, which avoids the generation of spurious points that is usual in photogrammetric reconstruction. A group of 60 persons was filmed with a smartphone, and the corresponding point clouds were obtained automatically with standard photogrammetric methods. As a 3D silver standard, we used the clean meshes obtained at the same time with LiDAR sensors, post-processed and noise-filtered by expert anthropological biologists. Finally, as a gold standard, we used anthropometric measurements of the waist and hip of the same people, taken by expert anthropometrists. Applying our method to the raw videos significantly enhanced the quality of the point clouds as compared with the LiDAR-based mesh, and of the anthropometric measurements as compared with the actual hip and waist perimeters measured by the anthropometrists. In both contexts, the resulting quality of body2vec is equivalent to the LiDAR reconstruction.


2019 ◽  
Vol 12 (1) ◽  
pp. 112 ◽  
Author(s):  
Dong Lin ◽  
Lutz Bannehr ◽  
Christoph Ulrich ◽  
Hans-Gerd Maas

Thermal imagery is widely used in various fields of remote sensing. In this study, a novel processing scheme is developed to process the data acquired by the oblique airborne photogrammetric system AOS-Tx8, consisting of four thermal cameras and four RGB cameras, with the goal of large-scale thermal attribute mapping. To merge 3D RGB data and 3D thermal data, registration is conducted in four steps: first, thermal and RGB point clouds are generated independently by applying structure from motion (SfM) photogrammetry to both the thermal and RGB imagery. Next, a coarse point cloud registration is performed with the support of georeferencing data (global positioning system, GPS). Subsequently, a fine point cloud registration is conducted by octree-based iterative closest point (ICP). Finally, three different texture mapping strategies are compared. Experimental results show that global image pose refinement outperforms the other two strategies in registration accuracy between the thermal imagery and the RGB point cloud. Potential building thermal leakages in large areas can be quickly detected in the generated texture mapping results. Furthermore, the combination of the proposed workflow and the oblique airborne system allows for a detailed thermal analysis of building roofs and facades.
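The fine registration step can be sketched as a minimal point-to-point ICP loop. The paper uses an octree-based variant; this plain nearest-neighbour version with an SVD-based pose update is a simplified stand-in, assuming the GPS-based coarse alignment has already been applied:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Minimal point-to-point ICP sketch: iteratively match nearest
    neighbours and solve the rigid transform in closed form via SVD.
    Returns the accumulated rotation R and translation t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                  # nearest correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        # Guard against reflections in the SVD solution.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

In a real pipeline the coarse GPS alignment matters: plain ICP only converges from a reasonably close initial pose.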


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1573 ◽  
Author(s):  
Haojie Liu ◽  
Kang Liao ◽  
Chunyu Lin ◽  
Yao Zhao ◽  
Meiqin Liu

LiDAR sensors can provide dependable 3D spatial information at a low frequency (around 10 Hz) and have been widely applied in the fields of autonomous driving and unmanned aerial vehicles (UAVs). However, the frame rate of the camera, which is higher (around 20 Hz), has to be decreased to match the LiDAR in a multi-sensor system. In this paper, we propose a novel Pseudo-LiDAR interpolation network (PLIN) to increase the frequency of LiDAR sensor data. PLIN can generate temporally and spatially high-quality point cloud sequences to match the high frequency of cameras. To achieve this goal, we design a coarse interpolation stage guided by consecutive sparse depth maps and the motion relationship, and a refined interpolation stage guided by the realistic scene. Using this coarse-to-fine cascade structure, our method can progressively perceive multi-modal information and generate accurate intermediate point clouds. To the best of our knowledge, this is the first deep framework for Pseudo-LiDAR point cloud interpolation, which shows appealing applications in navigation systems equipped with both LiDAR and cameras. Experimental results demonstrate that PLIN achieves promising performance on the KITTI dataset, significantly outperforming the traditional interpolation method and the state-of-the-art video interpolation technique.


2020 ◽  
Vol 12 (1) ◽  
pp. 178 ◽  
Author(s):  
Jinming Zhang ◽  
Xiangyun Hu ◽  
Hengming Dai ◽  
ShenRun Qu

It is difficult to extract a digital elevation model (DEM) from an airborne laser scanning (ALS) point cloud in a forest area because of the irregular and uneven distribution of ground and vegetation points. Machine learning, especially deep learning methods, has shown powerful feature extraction capability in point cloud classification. However, most existing deep learning frameworks, such as PointNet, the dynamic graph convolutional neural network (DGCNN), and SparseConvNet, do not consider the particularities of ALS point clouds. For large-scene laser point clouds, current data preprocessing methods are mostly based on random sampling, which is not suitable for DEM extraction tasks. In this study, we propose a novel data sampling algorithm, named T-Sampling, for the data preparation of patch-based training and classification. T-Sampling uses the set of the lowest points in a certain area as basic points, with other points added to supplement it, which guarantees the integrity of the terrain in the sampling area. In the learning part, we propose a new terrain-based convolution model named Tin-EdgeConv that fully considers the spatial relationship between ground and non-ground points when constructing a directed graph. We design a new network based on Tin-EdgeConv to extract local features, use the PointNet architecture to extract global context information, and combine this information effectively with a designed attention fusion module. These aspects are important in achieving high classification accuracy. We evaluate the proposed method on large-scale data from forest areas. Results show that our method is more accurate than existing algorithms.
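The idea behind T-Sampling, keeping the lowest point of each area as a basic point so the terrain survives sampling, can be sketched as follows. The grid cell size, sample count, and random top-up policy are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def lowest_point_sampling(points, cell=2.0, n_samples=1024, seed=0):
    """Terrain-preserving sampling sketch: keep the lowest point of every
    XY grid cell as a basic point, then top up with random other points
    until n_samples points are selected."""
    keys = np.floor(points[:, :2] / cell).astype(np.int64)
    # Sort by cell, then by height: the first point of each cell group
    # in the sorted order is that cell's lowest point.
    order = np.lexsort((points[:, 2], keys[:, 1], keys[:, 0]))
    sorted_keys = keys[order]
    first = np.ones(len(points), dtype=bool)
    first[1:] = np.any(sorted_keys[1:] != sorted_keys[:-1], axis=1)
    basic = order[first]                           # lowest point per cell
    rng = np.random.default_rng(seed)
    rest = np.setdiff1d(np.arange(len(points)), basic)
    extra = rng.choice(rest, size=max(0, n_samples - len(basic)), replace=False)
    return points[np.concatenate([basic, extra])[:n_samples]]
```

Unlike random sampling, every cell's lowest return is guaranteed to survive, which is the property DEM extraction needs.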


2020 ◽  
Vol 12 (11) ◽  
pp. 1875 ◽  
Author(s):  
Jingwei Zhu ◽  
Joachim Gehrung ◽  
Rong Huang ◽  
Björn Borgmann ◽  
Zhenghao Sun ◽  
...  

In the past decade, a vast number of strategies, methods, and algorithms have been developed to explore the semantic interpretation of 3D point clouds for extracting desirable information. To assess the performance of the developed algorithms and methods, public standard benchmark datasets should invariably be introduced and used, serving as an indicator and ruler for evaluation and comparison. In this work, we introduce and present large-scale Mobile LiDAR point clouds acquired at the city campus of the Technical University of Munich, which have been manually annotated and can be used for the evaluation of algorithms and methods for semantic point cloud interpretation. We created three datasets from a measurement campaign conducted in April 2016: a benchmark dataset for semantic labeling, test data for instance segmentation, and test data for annotated single 360° laser scans. These datasets cover approximately 1 km of urban roadways and include more than 40 million annotated points labeled with eight object classes. Moreover, experiments were carried out, and the results of several baseline methods were compared and analyzed, revealing the quality of this dataset and its effectiveness for performance evaluation.


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6815
Author(s):  
Cheng Yi ◽  
Dening Lu ◽  
Qian Xie ◽  
Jinxuan Xu ◽  
Jun Wang

Global inspection of large-scale tunnels is a fundamental yet challenging task for ensuring the structural stability of tunnels and driving safety. Advanced LiDAR scanners, which sample tunnels into 3D point clouds, are making their debut in Tunnel Deformation Inspection (TDI). However, the acquired raw point clouds inevitably contain noticeable occlusions, missing areas, and noise/outliers. Considering the tunnel as a geometrical sweeping feature, we propose an effective tunnel deformation inspection algorithm that extracts the global spatial axis from the poor-quality raw point cloud. Essentially, we convert tunnel axis extraction into an iterative fitting optimization problem. Specifically, given the scanned raw point cloud of a tunnel, the initial design axis is sampled to generate a series of normal planes within the corresponding Frenet frame; those planes are then intersected with the tunnel point cloud to yield a sequence of cross sections. The cross sections are fitted with circles, and the fitted circle centers are approximated with a B-Spline curve, which is taken as the updated axis. The procedure of “circle fitting and B-Spline approximation” repeats iteratively until convergence, that is, until the distance of each fitted circle center to the current axis is smaller than a given threshold. In this way, the spatial axis of the tunnel can be accurately obtained. Subsequently, according to the practical mechanism of tunnel deformation, we design a segmentation approach that partitions the cross sections into meaningful pieces, based on which various inspection parameters regarding tunnel deformation can be automatically computed. A variety of practical experiments have demonstrated the feasibility and effectiveness of our inspection method.
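The circle-fitting building block of the iterative loop can be sketched with an algebraic (Kåsa) least-squares fit of one cross section expressed in its 2D plane coordinates. This is a standard fitting technique, not necessarily the exact formulation used by the authors:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit of 2D points.
    From (x-cx)^2 + (y-cy)^2 = r^2 we get the linear system
    2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2."""
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return np.array([cx, cy]), r
```

Fitting each cross section this way yields the sequence of circle centers that the B-Spline approximation then smooths into the updated axis.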


Author(s):  
L. Gézero ◽  
C. Antunes

In the last few years, LiDAR sensors installed in terrestrial vehicles have proven to be an efficient way to collect very dense 3D georeferenced information. The possibility of creating very dense point clouds representing the surface surrounding the sensor at a given moment, in a very fast, detailed, and easy way, shows the potential of this technology for large-scale cartography and digital terrain model production. However, there are still some limitations associated with the use of this technology. When several acquisitions of the same area are made with the same device, differences between the clouds can be observed. Those differences can range from a few centimetres to several tens of centimetres, mainly in urban and high-vegetation areas, where occlusion of the GNSS signal degrades the georeferenced trajectory. In this article, a different point cloud registration method is proposed. In addition to its efficiency and speed of execution, the main advantage of the method is that the adjustment is made continuously along the trajectory, based on GPS time. The process is fully automatic and uses only information recorded in standard LAS files, without the need for any auxiliary information, in particular regarding the trajectory.


Author(s):  
T. Shinohara ◽  
H. Xiu ◽  
M. Matsuoka

Abstract. This study introduces a novel image-to-3D-point-cloud translation method based on a conditional generative adversarial network that creates large-scale 3D point clouds. It can generate supervised point clouds, as observed via airborne LiDAR, from aerial images. The network is composed of an encoder that produces latent features of the input images, a generator that translates the latent features into fake point clouds, and a discriminator that classifies point clouds as real or fake. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of outdoor scenes, we use a FoldingNet with features from the ResNet. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn and generate such point clouds using data from the 2018 IEEE GRSS Data Fusion Contest.

