Anisotropic neighborhood searching for point cloud with sharp feature

2020 ◽  
pp. 002029402096424
Author(s):  
Xiaocui Yuan ◽  
Baoling Liu ◽  
Yongli Ma

The k-nearest neighborhood (kNN) of a feature point on a complex surface model is usually isotropic, which may blur sharp features during data processing steps such as noise removal and surface reconstruction. To address this issue, a new method is proposed to search anisotropic neighborhoods for point clouds with sharp features. After constructing a KD-tree and computing the kNN for the point cloud data, principal component analysis (PCA) is employed to detect feature points and estimate point normal vectors. An improved bilateral normal filter is then used to refine the normal vectors of feature points for greater accuracy. The isotropic kNN of each feature point is segmented by mapping it onto the Gaussian sphere, where hierarchical clustering separates the mapped data into different clusters. The optimal anisotropic neighborhood of a feature point corresponds to the cluster containing the largest number of points. To validate the effectiveness of the method, the anisotropic neighborhoods are applied to point data processing tasks such as normal estimation and point cloud denoising. Experimental results demonstrate that the proposed algorithm is more time-consuming than other kNN searching methods, but yields more accurate results for point cloud processing. The anisotropic neighborhoods found by this method can be used for normal estimation, denoising, surface fitting, reconstruction, and other operations on point clouds with sharp features, providing more accurate results than isotropic neighborhoods.
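The PCA normal-estimation step the abstract describes can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: a brute-force neighbor search stands in for the KD-tree, and the function names and parameters are illustrative.

```python
import numpy as np

def knn_indices(points, i, k):
    # brute-force k nearest neighbours of point i (itself included);
    # a KD-tree would replace this in a real pipeline
    d = np.linalg.norm(points - points[i], axis=1)
    return np.argsort(d)[:k]

def pca_normal(points, i, k=8):
    # normal at point i: eigenvector of the neighbourhood covariance
    # matrix belonging to the smallest eigenvalue
    nbrs = points[knn_indices(points, i, k)]
    cov = np.cov(nbrs.T)
    _, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return vecs[:, 0]
```

For points sampled from a plane, the estimated normal is (up to sign) the plane normal; near a sharp edge the isotropic neighborhood mixes both faces, which is exactly the problem the anisotropic search addresses.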

Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 399
Author(s):  
Miao Gong ◽  
Zhijiang Zhang ◽  
Dan Zeng

High-precision, high-density three-dimensional point cloud models usually contain redundant data, which implies extra time and hardware costs in the subsequent data processing stage. To analyze and extract data more effectively, the point cloud must be simplified before data processing. Given that point cloud simplification must be feature-sensitive so that more valid information is preserved, this paper proposes a new simplification algorithm for scattered point clouds with feature preservation, which reduces the amount of data while retaining its features. First, the Delaunay neighborhood of the point cloud is constructed, and the edge points of the point cloud are extracted from its edge distribution characteristics. Second, the moving least-squares method is used to obtain the normal vectors of the point cloud and the valley and ridge points of the model. Then, potential feature points are further identified and retained on the basis of the discrete gradient idea. Finally, non-feature points are extracted. Experimental results show that the method can be applied to models with different curvatures and effectively avoids the hole phenomenon during simplification. To further improve the robustness and anti-noise ability of the method, the neighborhood of the point cloud can be extended to multiple levels, and a balance between simplification speed and accuracy needs to be found.
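The keep-features-thin-the-rest idea can be illustrated with a toy sketch. Note the substitution: the paper uses Delaunay neighborhoods and moving least squares, whereas this sketch flags feature points with the common "surface variation" curvature proxy (smallest covariance eigenvalue over the eigenvalue sum) and uniformly thins the flat regions; all names and thresholds are illustrative.

```python
import numpy as np

def surface_variation(points, k=8):
    # per-point surface variation l_min / (l0 + l1 + l2), a common
    # curvature proxy used to flag feature points
    sigma = np.zeros(len(points))
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        w = np.linalg.eigvalsh(np.cov(nbrs.T))   # ascending eigenvalues
        sigma[i] = w[0] / max(w.sum(), 1e-12)
    return sigma

def simplify(points, k=8, feat_thresh=0.01, keep_every=4):
    # keep every feature point; uniformly thin the flat regions,
    # which is what prevents holes from forming in featureless areas
    sigma = surface_variation(points, k)
    feat = sigma > feat_thresh
    flat = np.where(~feat)[0][::keep_every]
    keep = np.union1d(np.where(feat)[0], flat)
    return points[keep]
```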


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3703
Author(s):  
Dongyang Cheng ◽  
Dangjun Zhao ◽  
Junchao Zhang ◽  
Caisheng Wei ◽  
Di Tian

Due to the complexity of surrounding environments, lidar point cloud data (PCD) are often degraded by plane noise. To eliminate this noise, this paper proposes a filtering scheme based on a grid principal component analysis (PCA) technique and a ground splicing method. The 3D PCD is first projected onto a desired 2D plane, within which the ground and wall data are separated from the PCD via a prescribed index based on the statistics of the points in all 2D mesh grids. Then, a KD-tree is constructed for the ground data, and rough segmentation is conducted in an unsupervised manner to obtain the true ground data, using the normal vector as the distinctive feature. To improve noise removal performance, we propose an elaborate K-nearest-neighbor (KNN)-based segmentation method via an optimization strategy. Finally, the denoised wall and ground data are spliced for further 3D reconstruction. The experimental results show that the proposed method removes noise efficiently and is superior to several traditional methods in both denoising performance and run speed.
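The grid-statistics separation step can be sketched as follows. The abstract does not specify the prescribed index, so this sketch uses the per-cell height range as an assumed stand-in; the cell size and threshold are illustrative.

```python
import numpy as np

def split_ground_wall(pcd, cell=1.0, z_spread=0.5):
    # bin points into a 2D XY grid; cells whose height range exceeds
    # the threshold are labelled "wall", the rest "ground"
    ij = np.floor(pcd[:, :2] / cell).astype(int)
    cells = {}
    for n, key in enumerate(map(tuple, ij)):
        cells.setdefault(key, []).append(n)
    wall = np.zeros(len(pcd), bool)
    for idx in cells.values():
        z = pcd[idx, 2]
        if z.max() - z.min() > z_spread:
            wall[idx] = True
    return pcd[~wall], pcd[wall]
```

A vertical structure produces a large height spread inside its cell, while ground cells stay nearly flat, which is why a simple per-cell statistic separates the two well.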


2011 ◽  
Vol 464 ◽  
pp. 229-232
Author(s):  
Jin Hu Sun ◽  
Lai Shui Zhou ◽  
Bo Xiang ◽  
Lu Ling An

In the area of reverse engineering, the normals of a point cloud are the basis of data processing operations such as smoothing, simplification, and fusion. Consistent adjustment of normal orientation (normal adjustment for short) is an essential step of normal estimation. In this paper, a new algorithm for normal adjustment is proposed, focusing on how to obtain a correct adjustment and how to make it fast. To adjust normals correctly, the angle between the original normals of two points is used as the basis of the spread order. A suitable exploration mode, in which only the border between adjusted and unadjusted points is explored, is chosen to make the adjustment process efficient. For further efficiency, threshold-based spreading is used to adjust several points at a time. Minimum-value-based spreading is also used; it is not as efficient as the former mode, but it ensures that the spread will not be interrupted. Experimental results indicate that the normal adjustment algorithm is efficient.
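The angle-ordered spreading over the adjusted/unadjusted border can be sketched with a greedy loop. This is a simplified illustration, not the paper's algorithm: it considers every frontier pair by normal angle alone (omitting the spatial k-neighborhood restriction and the threshold-based batch spreading), and the function name is illustrative.

```python
import numpy as np

def orient_normals(normals):
    # greedy front propagation: repeatedly take the adjusted/unadjusted
    # pair whose normals are most nearly parallel, and flip the
    # unadjusted one if it points away from its reference
    normals = normals.copy()
    done = np.zeros(len(normals), bool)
    done[0] = True                      # seed orientation
    while not done.all():
        adj, un = np.where(done)[0], np.where(~done)[0]
        cos = np.abs(normals[adj] @ normals[un].T)
        ia, iu = np.unravel_index(np.argmax(cos), cos.shape)
        i, j = adj[ia], un[iu]
        if normals[i] @ normals[j] < 0:
            normals[j] = -normals[j]    # flip to agree with neighbour
        done[j] = True
    return normals
```

Because the most nearly parallel pair is flipped first, ambiguous (near-perpendicular) pairs are deferred until better-supported orientations surround them, which is the intuition behind using the angle as the spread order.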


2018 ◽  
Vol 10 (9) ◽  
pp. 168781401879503
Author(s):  
Haihua Cui ◽  
Wenhe Liao ◽  
Xiaosheng Cheng ◽  
Ning Dai ◽  
Changye Guo

Flexible and robust point cloud matching is important for three-dimensional surface measurement. This article proposes a new matching method based on three-dimensional image feature points. First, an intrinsic shape signature algorithm is used to detect key shape feature points, using a weighted three-dimensional occupancy histogram of the data points within the angular space, which is a view-independent representation of the three-dimensional shape. Then, the point feature histogram is used to represent the underlying surface properties at a point; its computation is based on combinations of certain geometric relations between the point's k nearest neighbors. The two-view point clouds are robustly matched using the proposed double neighborhood constraint, which minimizes the sum of the Euclidean distances between the local neighbors of the point and the feature point. The proposed optimization method is immune to noise, reduces the search range for matching points, and improves the correct feature point matching rate for weak surface textures. The matching accuracy and stability of the proposed method are verified experimentally. The method can be applied to flat surfaces with weak features and other scenarios, giving it a larger application range than traditional methods.
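The descriptor-matching stage such pipelines build on can be sketched generically. This is not the paper's double neighborhood constraint, only the plain mutual-nearest-neighbor matching of descriptor vectors (such as point feature histograms) that it refines; the function name is illustrative.

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    # mutual nearest-neighbour matching in descriptor space:
    # (i, j) is a match only if j is i's nearest descriptor in B
    # AND i is j's nearest descriptor in A
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a2b = d.argmin(axis=1)
    b2a = d.argmin(axis=0)
    return [(i, j) for i, j in enumerate(a2b) if b2a[j] == i]
```

The mutuality check discards one-sided matches, which is a cheap first defense against the wrong correspondences that the paper's neighborhood constraint then suppresses further.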


2012 ◽  
Vol 479-481 ◽  
pp. 2152-2156
Author(s):  
Can Zhao ◽  
Yun Ling Shi ◽  
Jun Ting Cheng

For the massive point clouds generated by scanning large workpieces with free-form surfaces, noise removal is a critical step: the quality of the denoised point cloud directly influences the subsequent estimation of point normal vectors and curvatures. This paper therefore presents a simple and efficient algorithm for denoising near-surface points. First, a bounding box is used to impose a spatial topology on the scattered point cloud data. Then, all points are traversed; for each point, its K neighborhood is found and a quadric surface is fitted to the K neighbors. Finally, the Z-value method is used to calculate the distance from the point to the fitted quadric surface. A threshold is set, and if the distance exceeds the threshold, the point is regarded as noise and deleted. Experiments show that, compared with traditional algorithms, this algorithm not only improves efficiency but also preserves the original model data well, providing high-quality raw data for subsequent processing. It is widely useful in three-dimensional scanning, projective measurement, reverse design, and other fields.
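The fit-and-threshold loop above can be sketched directly. This is a minimal reading of the abstract under stated assumptions: a height-field quadric z = f(x, y) is fitted by least squares, a brute-force neighbor search replaces the bounding-box spatial structure, and the parameter values are illustrative.

```python
import numpy as np

def quadric_residual(nbrs, p):
    # least-squares fit of z = a + bx + cy + dx^2 + exy + fy^2 to the
    # neighbours, then the |z_fit - z| residual at p (the "Z method")
    x, y, z = nbrs[:, 0], nbrs[:, 1], nbrs[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    px, py = p[0], p[1]
    z_fit = coef @ np.array([1.0, px, py, px * px, px * py, py * py])
    return abs(z_fit - p[2])

def denoise(points, k=12, thresh=0.1):
    # drop every point lying farther than `thresh` from the quadric
    # surface fitted through its k nearest neighbours
    keep = []
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[1:k + 1]]   # exclude the point itself
        if quadric_residual(nbrs, p) <= thresh:
            keep.append(i)
    return points[keep]
```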


2011 ◽  
Vol 63-64 ◽  
pp. 470-473
Author(s):  
Hong Fei Zhang ◽  
Xiao Jun Cheng ◽  
Yan Ping Liu

We introduce an improved feature-preserving compression algorithm for point clouds. A divided-box method is employed to improve the efficiency of neighborhood searching; with it, the normal vector and curvature of each point are calculated, and feature points are reserved according to a reduction rule. Finally, based on octree theory, the smallest grid is refined until it reaches the minimum required size; the most representative point of each smallest grid cell is reserved and the other points are removed, completing the data reduction. Experimental results show that the compression algorithm preserves the features of the point cloud with high efficiency.
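The keep-one-representative-per-cell step can be sketched as a voxel-style reduction. This simplified sketch uses a fixed grid instead of the paper's adaptively refined octree, and keeps the point closest to each cell's centroid as its representative; the cell size is illustrative.

```python
import numpy as np

def grid_reduce(points, cell=1.0):
    # per occupied 3D grid cell, keep the single point closest to the
    # cell's centroid and drop the rest
    keys = np.floor(points / cell).astype(int)
    cells = {}
    for i, key in enumerate(map(tuple, keys)):
        cells.setdefault(key, []).append(i)
    kept = []
    for idx in cells.values():
        pts = points[idx]
        c = pts.mean(axis=0)
        kept.append(idx[int(np.argmin(np.linalg.norm(pts - c, axis=1)))])
    return points[sorted(kept)]
```

An octree version refines cells only where curvature is high, so flat regions are reduced aggressively while feature-rich regions keep more representatives.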


Author(s):  
Y. Feng ◽  
A. Schlichting ◽  
C. Brenner

Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted from both reference data and online sensor data and then matched to improve the positioning accuracy of the vehicles. However, some environments contain only a limited number of poles. 3D feature points are a suitable alternative landmark: they can be assumed to be present in the environment, independent of particular object classes. To match online LiDAR data to another LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train one using a neural network. In this approach, a set of candidate 3D feature points is first detected by the Shi-Tomasi corner detector on range images of the LiDAR point cloud. Trained with a backpropagation algorithm, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on the 2D range images, but also 3D features such as the curvature value and the z component of the surface normal, which are calculated directly from the LiDAR point cloud. Subsequently, the feature points extracted on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in 3D space. Our tests show that the proposed method provides a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for better localization of vehicles.
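The Shi-Tomasi candidate-detection stage can be sketched on a toy range image. The score is the smaller eigenvalue of the local gradient structure tensor; the window size and gradient operator here are illustrative choices, not the authors' exact settings.

```python
import numpy as np

def shi_tomasi_response(img, win=1):
    # Shi-Tomasi corner score per pixel: the smaller eigenvalue of the
    # windowed gradient structure tensor [[Ixx, Ixy], [Ixy, Iyy]]
    gy, gx = np.gradient(img.astype(float))
    h, w = img.shape
    resp = np.zeros((h, w))
    for r in range(win, h - win):
        for c in range(win, w - win):
            sl = (slice(r - win, r + win + 1), slice(c - win, c + win + 1))
            ixx = (gx[sl] ** 2).sum()
            iyy = (gy[sl] ** 2).sum()
            ixy = (gx[sl] * gy[sl]).sum()
            tr, det = ixx + iyy, ixx * iyy - ixy ** 2
            # min eigenvalue of a symmetric 2x2 matrix
            resp[r, c] = tr / 2 - np.sqrt(max(tr ** 2 / 4 - det, 0.0))
    return resp
```

On a range image, a straight depth edge excites only one gradient direction (score near zero), while a genuine corner excites both, so thresholding this response yields the candidate set fed to the network.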


2016 ◽  
Vol 24 (10) ◽  
pp. 2581-2588 ◽  
Author(s):  
袁小翠 YUAN Xiao-cui ◽  
吴禄慎 WU Lu-shen ◽  
陈华伟 CHEN Hua-wei



Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which decreases the density of the collected scanning points and hampers registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
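The intensity-image production step can be sketched as a simple rasterization. The abstract does not specify how the image is formed, so this sketch makes an assumed choice: project onto the XY grid and store the mean intensity of the points falling into each pixel; the cell size is illustrative.

```python
import numpy as np

def intensity_image(points, intensity, cell=1.0):
    # rasterise a point cloud onto an XY grid; each pixel holds the
    # mean intensity of the points in it (0 where the cell is empty)
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                 # shift indices to start at 0
    h, w = ij.max(axis=0) + 1
    img = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for (r, c), v in zip(ij, intensity):
        img[r, c] += v
        cnt[r, c] += 1
    np.divide(img, cnt, out=img, where=cnt > 0)
    return img
```

Once the cloud is rendered this way, standard 2D feature matching against the optical image becomes applicable, which is the conversion the method relies on.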

