Ground filtering algorithm for mobile LIDAR using order and neighborhood point information

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Siyuan Huang ◽  
Limin Liu ◽  
Jian Dong ◽  
Xiongjun Fu ◽  
Leilei Jia

Purpose Most existing ground filtering algorithms are based on the Cartesian coordinate system, which is not compatible with the working principle of mobile light detection and ranging and makes it difficult to obtain good filtering accuracy. The purpose of this paper is to improve the accuracy of ground filtering by making full use of the ordering information between points in spherical coordinates. Design/methodology/approach First, the cloth simulation (CS) algorithm is modified into a sorting algorithm for scattered point clouds to obtain the adjacency relationships of the points and to generate a matrix containing this adjacency information. Then, according to the adjacency information, a projection distance comparison and a local slope analysis are performed simultaneously. These results are integrated to further process the point cloud details, and the algorithm is finally used to filter a point cloud in a scene from the KITTI data set. Findings The results show that the accuracy of KITTI point cloud sorting is 96.3% and the kappa coefficient of the ground filtering result is 0.7978. Compared with other algorithms applied to the same scene, the proposed algorithm has higher processing accuracy. Research limitations/implications The steps of the algorithm can be computed in parallel, which saves time owing to the small amount of computation. In addition, the generality of the algorithm is improved, and it can be used for different data sets from urban streets. However, owing to the lack of point clouds from field environments with labeled ground points, the filtering performance of the algorithm in field environments needs further study. Originality/value In this study, the point cloud neighborhood information was obtained by a modified CS algorithm. The ground filtering algorithm distinguishes ground points from off-ground points according to the flatness, continuity and minimality of ground points in point cloud data. In addition, changing the thresholds has little effect on the results.
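The local slope test described above can be sketched in a few lines. The following is a minimal illustration, assuming the points are already ordered along the scan (the role of the modified CS sorting step) and using a hypothetical 8° slope threshold; it is not the authors' implementation, which additionally combines a projection distance comparison.

```python
import math

def slope_between(p, q):
    """Slope angle (radians) between two neighboring LIDAR returns."""
    d_xy = math.hypot(q[0] - p[0], q[1] - p[1])
    return math.atan2(abs(q[2] - p[2]), d_xy)

def filter_ground(ordered_points, slope_thresh_deg=8.0):
    """Label each point ground (True) or off-ground (False) by the slope
    to its predecessor in scan order. Threshold value is an assumption."""
    thresh = math.radians(slope_thresh_deg)
    labels = [True]  # assume the first return is ground
    for p, q in zip(ordered_points, ordered_points[1:]):
        labels.append(slope_between(p, q) <= thresh)
    return labels
```

A nearly flat run of returns stays labeled ground, while a steep jump (e.g. onto a wall or car) is rejected.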

2019 ◽  
Vol 9 (16) ◽  
pp. 3273 ◽  
Author(s):  
Wen-Chung Chang ◽  
Van-Toan Pham

This paper develops a registration architecture for estimating the relative pose, including the rotation and translation, of an object with respect to a model in 3-D space, based on 3-D point clouds captured by a 3-D camera. In particular, this paper addresses the time-consuming nature of 3-D point cloud registration, which is essential for closed-loop industrial automated assembly systems that demand accurate pose estimation in fixed time. Firstly, two different descriptors are developed to extract coarse and detailed features of these point cloud data sets for the purpose of creating training data sets covering diversified orientations. Secondly, in order to guarantee fast pose estimation in fixed time, a registration architecture employing two consecutive convolutional neural network (CNN) models is proposed. After training, the proposed CNN architecture can estimate the rotation between the model point cloud and a data point cloud, followed by translation estimation based on computing average values. By covering a smaller range of orientation uncertainty than the full range covered by the first CNN model, the second CNN model can precisely estimate the orientation of the 3-D point cloud. Finally, the performance of the proposed algorithm has been validated by experiments in comparison with baseline methods. Based on these results, the proposed algorithm significantly reduces the estimation time while maintaining high precision.
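The final translation step ("based on computing average values") admits a simple closed form: once the CNN stage has predicted the rotation R, the translation is the difference between the data centroid and the rotated model centroid. A minimal sketch under that reading of the abstract, not the authors' code:

```python
def centroid(pts):
    """Mean of a list of 3-D points."""
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def rotate(R, p):
    """Apply a 3x3 rotation matrix (nested lists) to a point."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))

def estimate_translation(model_pts, data_pts, R):
    """Recover t from data = R * model + t by differencing centroids."""
    cm = rotate(R, centroid(model_pts))
    cd = centroid(data_pts)
    return tuple(cd[i] - cm[i] for i in range(3))
```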


2020 ◽  
Vol 12 (18) ◽  
pp. 2884
Author(s):  
Qingwang Liu ◽  
Liyong Fu ◽  
Qiao Chen ◽  
Guangxing Wang ◽  
Peng Luo ◽  
...  

Forest canopy height is one of the most important spatial characteristics for forest resource inventories and forest ecosystem modeling. Light detection and ranging (LiDAR) can be used to accurately detect canopy surface and terrain information from the backscattering signals of laser pulses, while photogrammetry tends to accurately depict the canopy surface envelope. The spatial differences between the canopy surfaces estimated by LiDAR and photogrammetry have not been investigated in depth. Thus, this study aims to assess LiDAR and photogrammetry point clouds and analyze the spatial differences in canopy heights. The study site is located in the Jigongshan National Nature Reserve of Henan Province, Central China. Six data sets, including one LiDAR data set and five photogrammetry data sets captured from an unmanned aerial vehicle (UAV), were used to estimate the forest canopy heights. Three spatial distribution descriptors, namely, the effective cell ratio (ECR), point cloud homogeneity (PCH) and point cloud redundancy (PCR), were developed to assess the LiDAR and photogrammetry point clouds in the grid. The ordinary neighbor (ON) and constrained neighbor (CN) interpolation algorithms were used to fill void cells in digital surface models (DSMs) and canopy height models (CHMs). The CN algorithm could be used to distinguish small and large holes in the CHMs. The optimal spatial resolution was analyzed according to the ECR changes of DSMs or CHMs resulting from the CN algorithm. Large negative and positive variations were observed between the LiDAR and photogrammetry canopy heights. The stratified mean difference in canopy heights increased gradually from negative to positive when the canopy heights were greater than 3 m, which means that photogrammetry tends to overestimate low canopy heights and underestimate high canopy heights. The CN interpolation algorithm achieved smaller relative root mean square errors than the ON interpolation algorithm. 
This article provides an operational method for the spatial assessment of point clouds and suggests that the variations between LiDAR and photogrammetry CHMs should be considered when modeling forest parameters.
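The ON/CN void-filling idea can be illustrated with a single interpolation pass over a gridded CHM. Here the "constraint" is modeled, purely as an assumption, as a minimum count of valid 8-neighbors, so small holes get filled while larger holes survive the pass; the paper's actual CN algorithm may differ in detail.

```python
def fill_voids(grid, nodata=None, min_neighbors=1):
    """One neighbor-interpolation pass over a 2-D CHM/DSM grid.

    min_neighbors=1 behaves like an ordinary-neighbor (ON) fill; a
    stricter constraint (e.g. 8) is a rough stand-in for the
    constrained-neighbor (CN) idea, leaving large holes unfilled.
    """
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] is not nodata:
                continue
            vals = [grid[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc)
                    and 0 <= r + dr < rows and 0 <= c + dc < cols
                    and grid[r + dr][c + dc] is not nodata]
            if len(vals) >= min_neighbors:
                out[r][c] = sum(vals) / len(vals)  # mean of valid neighbors
    return out
```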


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Liang Gong ◽  
Xiaofeng Du ◽  
Kai Zhu ◽  
Ke Lin ◽  
Qiaojun Lou ◽  
...  

The automated measurement of crop phenotypic parameters is of great significance to the quantitative study of crop growth. The segmentation and classification of crop point clouds help to automate the measurement of crop phenotypic parameters. At present, crop spike-shaped point cloud segmentation suffers from problems such as few samples, uneven distribution of point clouds, occlusion of stem and spike, disorderly arrangement of points, and a lack of targeted network models. Traditional clustering methods can segment plant organ point clouds with relatively independent spatial locations, but the accuracy is not acceptable. This paper first builds a desktop-level point cloud scanning apparatus based on a structured-light projection module to facilitate point cloud acquisition. Then, rice ear point clouds were collected and a rice ear point cloud data set was constructed. In addition, data augmentation is used to improve sample utilization efficiency and training accuracy. Finally, a 3D point cloud convolutional neural network model called Panicle-3D was designed to achieve better segmentation accuracy. Specifically, the design of Panicle-3D targets the multiscale characteristics of plant organs, combining the PointConv structure with long and short skip connections, which accelerates the convergence of the network and reduces the loss of features during point cloud downsampling. In comparison experiments, the segmentation accuracy of Panicle-3D reaches 93.4%, which is higher than that of PointNet. Panicle-3D is also suitable for other similar crop point cloud segmentation tasks.
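The augmentation step is not specified in the abstract. A common choice for upright plants, shown here as an assumption rather than the paper's procedure, is a random rotation about the vertical axis plus small Gaussian jitter, which enlarges the sample set while preserving the plant's vertical structure:

```python
import math
import random

def augment(points, max_jitter=0.005, seed=None):
    """Hypothetical point cloud augmentation: random rotation about the
    z (vertical) axis plus per-coordinate Gaussian jitter (meters)."""
    rng = random.Random(seed)
    a = rng.uniform(0.0, 2.0 * math.pi)
    ca, sa = math.cos(a), math.sin(a)
    out = []
    for x, y, z in points:
        xr, yr = ca * x - sa * y, sa * x + ca * y  # rotate in the xy plane
        out.append((xr + rng.gauss(0, max_jitter),
                    yr + rng.gauss(0, max_jitter),
                    z + rng.gauss(0, max_jitter)))
    return out
```

With zero jitter, the heights and horizontal radii of all points are preserved exactly, which is the invariance that makes this augmentation safe for spike-shaped organs.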


2021 ◽  
Vol 30 ◽  
pp. 126-130
Author(s):  
Jan Voříšek ◽  
Bořek Patzák ◽  
Edita Dvořáková ◽  
Daniel Rypl

Laser scanning is widely used in architecture and construction to document existing buildings by providing accurate data for creating a 3D model. The output is a set of data points in space, a so-called point cloud. While point clouds can be directly rendered and inspected, they do not carry any semantics. Typically, engineers manually derive floor plans, structural models, or the whole BIM model, which is a very time-consuming task for large building projects. In this contribution, we present the design and concept of the PointCloud2BIM library [1]. It provides a set of algorithms for automated or user-assisted detection of fundamental entities from scanned point cloud data sets, such as floors, rooms, walls, and openings, and for identification of the mutual relationships between them. The entity detection relies on a reasonable degree of human input (e.g., the expected wall thickness). The results reside in a platform-agnostic JSON database, allowing future integration into any existing BIM software.
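One typical building block of such entity detection, sketched here as a hypothetical illustration and not as PointCloud2BIM code, is finding horizontal surfaces (floor and ceiling candidates) as dense peaks in a histogram of the z coordinates:

```python
from collections import Counter

def detect_floor_levels(points, bin_size=0.1, frac=0.5):
    """Histogram z coordinates into bins of bin_size meters and return
    the heights of bins holding at least frac of the peak count; dense
    bins correspond to horizontal surfaces. Parameters are assumptions."""
    bins = Counter(round(z / bin_size) for (_, _, z) in points)
    peak = max(bins.values())
    return sorted(b * bin_size for b, c in bins.items() if c >= frac * peak)
```

A wall point scattered between two floors contributes one count to many bins, while a floor slab concentrates thousands of points in one bin, so the peaks stand out.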


2011 ◽  
Author(s):  
David Doria

This document presents a GUI application to manually select corresponding points in two data sets. The data sets can each be either an image or a point cloud. If both data sets are images, the functionality is equivalent to Matlab’s ‘cpselect’ function. There are many uses of selecting correspondences. If both data sets are images, the correspondences can be used to compute the fundamental matrix, or to perform registration. If both data sets are point clouds, the correspondences can be used to compute a landmark transformation. If one data set is an image and the other is a point cloud, the camera matrix relating the two can be computed.
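The landmark transformation mentioned for point-cloud/point-cloud correspondences has a closed form. The 2-D case can be written without an SVD (the 3-D case needs one), so the sketch below, which is not the application's own code, recovers the rotation angle and translation from paired points:

```python
import math

def landmark_transform_2d(src, dst):
    """Closed-form rigid transform mapping src points onto dst points.

    Returns (theta, (tx, ty)) such that dst ~= R(theta) * src + t.
    Standard centroid-and-angle solution for 2-D correspondences.
    """
    n = len(src)
    cs = (sum(p[0] for p in src) / n, sum(p[1] for p in src) / n)
    cd = (sum(p[0] for p in dst) / n, sum(p[1] for p in dst) / n)
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - cs[0], sy - cs[1]  # centered source point
        bx, by = dx - cd[0], dy - cd[1]  # centered destination point
        num += ax * by - ay * bx         # sum of cross products
        den += ax * bx + ay * by         # sum of dot products
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cd[0] - (c * cs[0] - s * cs[1])
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return theta, (tx, ty)
```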


2021 ◽  
Author(s):  
Simone Müller ◽  
Dieter Kranzlmüller

Based on the depth perception of individual stereo cameras, spatial structures can be derived as point clouds. The quality of such three-dimensional data is technically restricted by sensor limitations, recording latency, and insufficient object reconstruction caused by surface representation. Additionally, external physical effects such as lighting conditions, material properties, and reflections can lead to deviations between real and virtual object perception. Such physical influences appear in rendered point clouds as geometrical imaging errors on surfaces and edges. We propose the simultaneous use of multiple, dynamically arranged cameras. The increased information density leads to more detail in surround detection and object representation. During a pre-processing phase the collected data are merged and prepared. Subsequently, a logical analysis part examines the captured images and allocates them to three-dimensional space. For this purpose, it is necessary to create a new metadata set consisting of image and localisation data. The post-processing reworks and matches the locally assigned images. As a result, the dynamically moving images become comparable, so that a more accurate point cloud can be generated. For evaluation and better comparability we use synthetically generated data sets. Our approach builds the foundation for the dynamic, real-time generation of digital twins with the aid of real sensor data.
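In its simplest reading, the merge step of the pre-processing phase transforms each camera's local cloud into a common world frame using a known per-camera pose and concatenates the results. A minimal sketch under that assumption (the paper's actual merging and matching is more involved):

```python
def merge_clouds(clouds):
    """Merge per-camera point clouds into one world-frame cloud.

    Each entry of clouds is (R, t, points): a 3x3 rotation matrix as
    nested lists, a translation 3-tuple, and the camera's local points.
    Camera poses are assumed known (e.g. from the localisation metadata).
    """
    merged = []
    for R, t, pts in clouds:
        for p in pts:
            merged.append(tuple(
                sum(R[i][j] * p[j] for j in range(3)) + t[i]  # R*p + t
                for i in range(3)))
    return merged
```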


2021 ◽  
Vol 87 (7) ◽  
pp. 479-484
Author(s):  
Yu Hou ◽  
Ruifeng Zhai ◽  
Xueyan Li ◽  
Junfeng Song ◽  
Xuehan Ma ◽  
...  

Three-dimensional reconstruction from a single image has excellent prospects. The use of neural networks for three-dimensional reconstruction has achieved remarkable results. Most current point-cloud-based three-dimensional reconstruction networks are trained on non-real data sets and do not generalize well. Based on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) data set of large-scale scenes, this article proposes a method for processing real data sets. The data set produced in this work can better train our network model and realize point cloud reconstruction from a single picture of the real world. Finally, the constructed point cloud data correspond well to the corresponding three-dimensional shapes, and to a certain extent, the disadvantage of the uneven distribution of point cloud data obtained by light detection and ranging scanning is overcome by the proposed method.


2019 ◽  
Vol 9 (10) ◽  
pp. 2130 ◽  
Author(s):  
Kun Zhang ◽  
Shiquan Qiao ◽  
Xiaohong Wang ◽  
Yongtao Yang ◽  
Yongqiang Zhang

With the development of 3D scanning technology, a huge volume of point cloud data has been collected at a lower cost. The huge data set is the main burden during the data processing of point clouds, so point cloud simplification is critical. The main aim of point cloud simplification is to reduce data volume while preserving the data features. Therefore, this paper provides a new method for point cloud simplification, named FPPS (feature-preserved point cloud simplification). In FPPS, a point cloud simplification entropy is defined, which quantifies the features hidden in point clouds. According to the simplification entropy, the key points carrying the majority of the geometric features are selected. Then, based on the natural quadric shape, we introduce a point cloud matching model (PCMM), by which the simplification rules are set. Additionally, the similarity between the PCMM and the neighborhoods of the key points is measured by the shape operator. This provides the criteria for the adaptive simplification parameters in FPPS. Finally, experiments verify the feasibility of FPPS and compare it with four other point cloud simplification algorithms. The results show that FPPS is superior to the other simplification algorithms. In addition, FPPS can partially recognize noise.
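The simplification entropy itself is defined in the paper; as a hypothetical stand-in with the same intent, the sketch below scores each point by its distance to the centroid of its k nearest neighbors (flat regions score near zero, edges and corners score high) and keeps the highest-scoring fraction:

```python
import math

def feature_scores(points, k=4):
    """Score each point by its offset from the centroid of its k nearest
    neighbors. O(n^2) brute force; a stand-in for simplification entropy."""
    scores = []
    for i, p in enumerate(points):
        nbrs = sorted((math.dist(p, q), q)
                      for j, q in enumerate(points) if j != i)[:k]
        cen = tuple(sum(q[d] for _, q in nbrs) / k for d in range(3))
        scores.append(math.dist(p, cen))
    return scores

def simplify(points, keep_ratio=0.5, k=4):
    """Keep the keep_ratio fraction of points with the highest scores."""
    scores = feature_scores(points, k)
    n_keep = max(1, int(len(points) * keep_ratio))
    order = sorted(range(len(points)), key=lambda i: scores[i], reverse=True)
    return [points[i] for i in sorted(order[:n_keep])]
```

On a flat 3x3 grid with one spike, the spike scores highest and survives even aggressive simplification, while interior grid points score zero.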


2021 ◽  
Author(s):  
Dejan Vasić ◽  
Marina Davidović ◽  
Ivan Radosavljević ◽  
Đorđe Obradović

Abstract. Panoramic images captured using laser scanning technologies, which principally produce point clouds, are readily applicable to point cloud colorization, detailed visual inspection, road defect detection, spatial entity extraction, creation of diverse maps, etc. This paper underlines the importance of images in modern surveying technologies and different GIS projects, while addressing their anonymization in accordance with the GDPR. Namely, it is a legislative requirement that the faces of persons and the license plates of vehicles in the collected data be blurred. The objective of this paper is to present a novel architecture for a solution that blurs particular objects. The methodology was tested on four data sets containing 5000, 10 000, 15 000 and 20 000 panoramic images, respectively. The accuracy, i.e. the percentage of successfully detected and blurred objects of interest, was higher than 97 % for each data set.
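Taken in isolation, with the detector out of scope, the blurring stage amounts to replacing each pixel inside a detected box with a local mean. A minimal grayscale sketch, not the authors' pipeline (which in practice would use a stronger blur on full-color panoramas):

```python
def blur_box(img, x0, y0, x1, y1, radius=1):
    """Anonymise the region [x0,x1) x [y0,y1) of a grayscale image
    (list of rows) by replacing each pixel in the box with the mean of
    its (2*radius+1)^2 neighborhood, clipped at the image border."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(y0, y1):
        for x in range(x0, x1):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out
```

Pixels outside the detected box are left untouched, so only the face or plate region loses detail.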


Author(s):  
S. N. Mohd Isa ◽  
S. A. Abdul Shukor ◽  
N. A. Rahim ◽  
I. Maarof ◽  
Z. R. Yahya ◽  
...  

Abstract. In this paper, pairwise coarse registration is presented using real-world point cloud data obtained by a terrestrial laser scanner, without information from reference markers in the scene. The data are challenging because multi-scanning produces large data sets of millions of points, as the limited range of each scan covers the scene only from a side view. Furthermore, the data have a low percentage of overlap between two scans, and the point clouds were acquired from structures with geometric symmetry, which leads to minimal transformation during the registration process. To process the data, 3D Harris keypoints are used and coarse registration is done by the Iterative Closest Point (ICP) algorithm. Different sampling methods were applied in order to evaluate processing time for further analysis at different voxel grid sizes. Then, the Root Mean Squared Error (RMSE) is used to determine the accuracy of the approach and to study its relation to the relative orientation of scans in pairwise registration. The results show that the grid-average downsampling method gives shorter processing times with reasonable RMSE in finding the exact scan pair. It can also be seen that the grid step size has an inverse relationship with the number of downsampled points. This setting was used to test a smaller-overlap data set of another heritage building. The relative orientation is evaluated from the transformation parameters for both data sets; Data set I, which has higher overlap, gives better accuracy, possibly due to the smaller distance between the two point clouds compared with Data set II.
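Grid-average downsampling, as compared in the abstract, replaces all points falling in the same voxel with their centroid; the inverse relationship between grid step size and point count follows directly, since larger voxels merge more points. A minimal sketch:

```python
import math
from collections import defaultdict

def grid_average_downsample(points, cell):
    """Voxel-grid downsampling: bucket points by voxel index and return
    one centroid per occupied voxel. cell is the grid step size."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(math.floor(c / cell) for c in p)  # voxel index
        cells[key].append(p)
    return [tuple(sum(q[i] for q in pts) / len(pts) for i in range(3))
            for pts in cells.values()]
```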

