A Saliency-Based Sparse Representation Method for Point Cloud Simplification

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4279
Author(s):  
Esmeide Leal ◽  
German Sanchez-Torres ◽  
John W. Branch-Bedoya ◽  
Francisco Abad ◽  
Nallig Leal

High-resolution 3D scanning devices produce high-density point clouds, which require large storage capacity and time-consuming processing algorithms. To reduce both needs, it is common to apply surface simplification algorithms as a preprocessing stage. The goal of point cloud simplification algorithms is to reduce the volume of data while preserving the most relevant features of the original point cloud. In this paper, we present a new feature-preserving point cloud simplification algorithm. We use a global approach to detect saliencies on a given point cloud. Our method estimates a feature vector for each point in the cloud. The components of the feature vector are the normal vector coordinates, the point coordinates, and the surface curvature at each point. Feature vectors are used as basis signals to carry out a dictionary learning process, producing a trained dictionary. We then perform the corresponding sparse coding process to produce a sparse matrix. To detect the saliencies, the proposed method uses two measures: the first takes into account the number of nonzero elements in each column vector of the sparse matrix, and the second the reconstruction error of each signal. These measures are combined to produce the final saliency value for each point in the cloud. Next, we proceed with the simplification of the point cloud, guided by the detected saliency and using the saliency value of each point as a dynamic clustering radius. We validate the proposed method by comparing it with a set of state-of-the-art methods, demonstrating the effectiveness of the simplification method.
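A minimal sketch of the saliency step under stated assumptions: each point contributes a 7-dimensional feature vector (coordinates, normal, curvature), scikit-learn's DictionaryLearning stands in for the paper's dictionary learning and sparse coding stages, and the blending weight `alpha` is a hypothetical parameter rather than the paper's combination rule.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def saliency_from_sparse_codes(features, n_atoms=32, n_nonzero=5, alpha=0.5):
    """Estimate per-point saliency from sparse coding of feature vectors.

    features : (N, 7) array -- [x, y, z, nx, ny, nz, curvature] per point.
    Returns an (N,) saliency array in [0, 1].
    """
    # Learn a dictionary and sparse codes; OMP limits nonzeros per signal.
    dico = DictionaryLearning(n_components=n_atoms,
                              transform_algorithm='omp',
                              transform_n_nonzero_coefs=n_nonzero,
                              max_iter=200, random_state=0)
    codes = dico.fit_transform(features)          # (N, n_atoms) sparse codes
    recon = codes @ dico.components_              # reconstructed signals

    # Measure 1: number of nonzero coefficients used by each point.
    nnz = np.count_nonzero(codes, axis=1).astype(float)
    # Measure 2: reconstruction error of each signal.
    err = np.linalg.norm(features - recon, axis=1)

    # Normalize both measures to [0, 1] and blend them into one saliency value.
    nnz = (nnz - nnz.min()) / (np.ptp(nnz) + 1e-12)
    err = (err - err.min()) / (np.ptp(err) + 1e-12)
    return alpha * nnz + (1.0 - alpha) * err
```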

Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 399
Author(s):  
Miao Gong ◽  
Zhijiang Zhang ◽  
Dan Zeng

High-precision, high-density three-dimensional point cloud models usually contain redundant data, which implies extra time and hardware costs in the subsequent data processing stage. To analyze and extract data more effectively, the point cloud must be simplified before data processing. Given that point cloud simplification must be sensitive to features so that more valid information can be preserved, this paper proposes a new simplification algorithm for scattered point clouds with feature preservation, which reduces the amount of data while retaining its features. First, the Delaunay neighborhood of the point cloud is constructed, and the edge points of the point cloud are extracted based on their edge distribution characteristics. Second, the moving least-squares method is used to obtain the normal vectors of the point cloud and the valley and ridge points of the model. Potential feature points are then further identified and retained based on the discrete gradient idea. Finally, non-feature points are extracted. Experimental results show that our method can be applied to models with different curvatures and effectively avoids the hole phenomenon during simplification. To further improve the robustness and anti-noise ability of the method, the neighborhood of the point cloud can be extended to multiple levels, and a balance between simplification speed and accuracy needs to be found.
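A minimal sketch of an edge-point test in this spirit, with the substitutions labeled: a k-nearest-neighbor set replaces the Delaunay neighborhood, per-point PCA replaces the moving least-squares normal, and the angular-gap threshold is a hypothetical parameter.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def detect_edge_points(points, k=16, gap_threshold=np.pi / 2):
    """Flag boundary/edge points by the largest angular gap among neighbors
    projected onto the local tangent plane."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(points)
    _, idx = nbrs.kneighbors(points)
    is_edge = np.zeros(len(points), dtype=bool)

    for i, neighborhood in enumerate(idx):
        nb = points[neighborhood[1:]] - points[i]          # exclude the point itself
        # PCA of the neighborhood: the last right-singular vector approximates the normal.
        _, _, vt = np.linalg.svd(nb - nb.mean(axis=0), full_matrices=False)
        u, v = vt[0], vt[1]                                # tangent-plane axes
        # Sort neighbors by polar angle in the tangent plane.
        angles = np.sort(np.arctan2(nb @ v, nb @ u))
        gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
        # A large empty angular sector indicates the point lies on an edge.
        is_edge[i] = gaps.max() > gap_threshold
    return is_edge
```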


2021 ◽  
Vol 13 (17) ◽  
pp. 3427
Author(s):  
Chunjiao Zhang ◽  
Shenghua Xu ◽  
Tao Jiang ◽  
Jiping Liu ◽  
Zhengjun Liu ◽  
...  

LiDAR point clouds are rich in spatial information and can effectively express the size, shape, position, and direction of objects; thus, they offer high spatial utilization. A point cloud describes the shape of the external surface of the object itself and does not store redundant information about occupied space. Therefore, point clouds have become a research focus of 3D data models and are widely used in large-scale scene reconstruction, virtual reality, digital elevation model production, and other fields. Since point clouds have various characteristics, such as disorder, inconsistent density, lack of structure, and incomplete information, point cloud classification remains complex and challenging. To realize the semantic classification of LiDAR point clouds in complex scenarios, this paper proposes integrating normal vector features into an atrous convolution residual network. Based on the RandLA-Net network structure, the proposed network integrates atrous convolution into the residual module to extract global and local features of the point clouds. The atrous convolution can learn more valuable point cloud feature information by expanding the receptive field. Then, the point cloud normal vector is embedded in the local feature aggregation module of the RandLA-Net network to extract local semantic aggregation features. The improved local feature aggregation module can merge the deep features of the point cloud and mine its fine-grained information to improve the model’s segmentation ability in complex scenes. Finally, to resolve the imbalanced distribution of the point cloud categories, the original loss function is optimized by a reweighting method to prevent overfitting, so that the network can focus on small target categories during training and effectively improve classification performance. Experimental analysis of the Vaihingen (Germany) urban 3D semantic dataset from the ISPRS website verifies that the proposed algorithm has strong generalization ability. The overall accuracy (OA) of the proposed algorithm on the Vaihingen urban 3D semantic dataset reached 97.9%, and the average reached 96.1%. Experiments show that the proposed algorithm fully exploits the semantic features of point clouds and effectively improves the accuracy of point cloud classification.
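The abstract does not give the exact reweighting scheme; the sketch below shows one common choice (inverse-square-root class frequency) plugged into a weighted cross-entropy loss in PyTorch, purely as an illustration of the reweighting idea, not the paper's formula.

```python
import numpy as np
import torch
import torch.nn as nn

def reweighted_cross_entropy(labels, num_classes):
    """Build a class-weighted cross-entropy loss from training-label frequencies.

    labels : 1-D integer array of all training labels.
    """
    counts = np.bincount(labels, minlength=num_classes).astype(np.float64)
    weights = 1.0 / np.sqrt(counts + 1e-6)               # rare classes get larger weights
    weights = weights / weights.sum() * num_classes      # keep the mean weight near 1
    return nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float32))

# Hypothetical usage:
#   criterion = reweighted_cross_entropy(train_labels, num_classes=9)
#   loss = criterion(logits, targets)   # logits: (B, C, N), targets: (B, N)
```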


2018 ◽  
Vol 8 (11) ◽  
pp. 2318 ◽  
Author(s):  
Qingyuan Zhu ◽  
Jinjin Wu ◽  
Huosheng Hu ◽  
Chunsheng Xiao ◽  
Wei Chen

When 3D laser scanning (LiDAR) is used for navigation of autonomous vehicles operating on unstructured terrain, it is necessary to register the acquired point clouds and accurately reconstruct the terrain in time. This paper proposes a novel registration method to deal with the uneven density and high noise of unstructured terrain point clouds. It has two steps: initial registration and accurate registration. Multisensor data are first used for initial registration. An improved Iterative Closest Point (ICP) algorithm is then deployed for accurate registration. This algorithm extracts key points and builds feature descriptors based on the neighborhood normal vector, point cloud density, and curvature. An adaptive threshold is introduced to accelerate iterative convergence. Experimental results show that our two-step registration method can effectively solve the uneven-density and high-noise problems in the registration of unstructured terrain point clouds, thereby improving the accuracy of terrain point cloud reconstruction.
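A minimal sketch of the accurate-registration idea, assuming the initial multisensor alignment has already been applied: a standard point-to-point ICP loop whose correspondence-rejection threshold shrinks each iteration. The descriptor-based key-point matching described in the abstract is omitted, and `init_threshold` and `shrink` are hypothetical parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_adaptive(source, target, max_iter=50, init_threshold=1.0, shrink=0.9):
    """Point-to-point ICP with an adaptively shrinking correspondence threshold.
    Returns the accumulated rotation R and translation t mapping source to target."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    threshold = init_threshold

    for _ in range(max_iter):
        dist, idx = tree.query(src)
        mask = dist < threshold                   # reject far-away pairs
        if mask.sum() < 3:
            break
        p, q = src[mask], target[idx[mask]]
        # Closed-form rigid transform (Kabsch/SVD) between the matched sets.
        p_c, q_c = p - p.mean(axis=0), q - q.mean(axis=0)
        U, _, Vt = np.linalg.svd(p_c.T @ q_c)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = q.mean(axis=0) - R @ p.mean(axis=0)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        threshold *= shrink                       # adaptive tightening each iteration
    return R_total, t_total
```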


2019 ◽  
Vol 9 (10) ◽  
pp. 2130 ◽  
Author(s):  
Kun Zhang ◽  
Shiquan Qiao ◽  
Xiaohong Wang ◽  
Yongtao Yang ◽  
Yongqiang Zhang

With the development of 3D scanning technology, huge volumes of point cloud data can be collected at low cost. The sheer size of these data sets is the main burden in point cloud processing, so point cloud simplification is critical. The main aim of point cloud simplification is to reduce data volume while preserving the data features. Therefore, this paper provides a new method for point cloud simplification, named FPPS (feature-preserved point cloud simplification). In FPPS, a point cloud simplification entropy is defined, which quantifies the features hidden in point clouds. According to the simplification entropy, key points containing the majority of the geometric features are selected. Then, based on the natural quadric shape, we introduce a point cloud matching model (PCMM), by which the simplification rules are set. Additionally, the similarity between the PCMM and the neighborhoods of the key points is measured by the shape operator, which provides the criteria for the adaptive simplification parameters in FPPS. Finally, experiments verify the feasibility of FPPS and compare it with four other point cloud simplification algorithms. The results show that FPPS is superior to the other simplification algorithms. In addition, FPPS can partially recognize noise.
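The abstract does not reproduce the paper's entropy definition, so the sketch below uses a hypothetical stand-in: the Shannon entropy of the angles between a point's normal and its neighbors' normals, with the highest-entropy points kept as key points. All parameter values are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def select_key_points(points, normals, k=16, keep_ratio=0.2, bins=8):
    """Rank points by a local entropy score and keep the top share as key points.

    points, normals : (N, 3) arrays with unit normals.
    """
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(points).kneighbors(points)
    entropy = np.empty(len(points))
    for i, nb in enumerate(idx):
        # Angles between the point's normal and its neighbors' normals.
        cos = np.clip(normals[nb[1:]] @ normals[i], -1.0, 1.0)
        hist, _ = np.histogram(np.arccos(cos), bins=bins, range=(0, np.pi))
        p = hist / hist.sum()
        entropy[i] = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # high = feature-rich
    order = np.argsort(-entropy)
    return order[: int(keep_ratio * len(points))]            # indices of key points
```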


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Yang Yang ◽  
Ming Li ◽  
Xie Ma

To further improve the performance of point cloud simplification and preserve the feature information of part point clouds, a new method based on a modified fuzzy c-means (MFCM) clustering algorithm that preserves feature information is proposed. First, the normal vector, angle entropy, curvature, and density of the point cloud are calculated by combining principal component analysis (PCA) and the k-nearest neighbors (k-NN) algorithm. Second, the gravitational search algorithm (GSA) is introduced to optimize the initial cluster centers of the fuzzy c-means (FCM) clustering algorithm. Third, the point cloud data, combining coordinates with their feature information, are partitioned by the MFCM algorithm. Finally, the point cloud is simplified according to the point cloud feature information and the simplification parameters. Point cloud test data are simplified using the new algorithm and traditional algorithms, and the results are compared and discussed. The results show that the proposed algorithm not only effectively improves the precision of point cloud simplification but also preserves the accuracy of part features.
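A compact sketch of the clustering core under stated assumptions: plain fuzzy c-means over point vectors that concatenate coordinates with the computed features, with random initialization standing in for the GSA-optimized initial centers described in the abstract.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard FCM on feature-augmented point vectors X of shape (N, D).
    Returns cluster centers (C, D) and the fuzzy membership matrix U (N, C)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                  # random fuzzy memberships
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # Classic FCM membership update: u_ik proportional to d_ik^(-2/(m-1)).
        U_new = 1.0 / (d ** (2 / (m - 1)) *
                       np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```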


Author(s):  
M. R. Hess ◽  
V. Petrovic ◽  
F. Kuester

Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where the data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is the classification of different building materials, with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector, or local surface geometry. Multiple case studies are presented to demonstrate the flexibility and utility of the presented point cloud visualization framework in achieving classification objectives.
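A minimal sketch of what such a user-defined classification function might look like; the attribute names and thresholds are hypothetical and not taken from the framework's actual API.

```python
import numpy as np

def classify_material(colors, intensity, normals,
                      rgb_low, rgb_high, min_intensity, up_dot=0.8):
    """Label points by observed color range, laser intensity and normal orientation.

    colors : (N, 3), intensity : (N,), normals : (N, 3) unit normals.
    Returns a boolean mask of points matching the hypothetical material rule.
    """
    in_color = np.all((colors >= rgb_low) & (colors <= rgb_high), axis=1)
    bright = intensity >= min_intensity
    # Nearly horizontal surfaces: normal close to the vertical axis.
    horizontal = np.abs(normals @ np.array([0.0, 0.0, 1.0])) >= up_dot
    return in_color & bright & horizontal
```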


Author(s):  
A. A. Sidiropoulos ◽  
K. N. Lakakis ◽  
V. K. Mouza

The technology of 3D laser scanning is considered one of the most common methods for heritage documentation. The point clouds that are produced provide information of high detail, both geometric and thematic. Various studies examine techniques for the best exploitation of this information. In this study, an algorithm for localizing pathologies, such as cracks and fissures, on complex building surfaces is tested. The algorithm makes use of the points' positions in the point cloud and tries to separate them into two groups (patterns): pathology and non-pathology. The geometric information used for recognizing the pattern of the points is extracted via Principal Component Analysis (PCA) in user-specified neighborhoods across the whole point cloud. The implementation of PCA leads to the definition of the normal vector at each point of the cloud. Two tests that operate separately examine local and global geometric criteria among the points and conclude which of them should be categorized as pathology. The proposed algorithm was tested on parts of the Gazi Evrenos Baths masonry, located in the city of Giannitsa in northern Greece.
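A minimal sketch of a local criterion in this spirit: per-neighborhood PCA supplies the normal, and a point is flagged when it deviates from the best-fit local plane by more than a tolerance. `local_tol` is a hypothetical parameter, and the separate global test described in the abstract is not reproduced.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def pathology_candidates(points, k=24, local_tol=0.005):
    """Flag points that deviate from their local fitting plane (units of the model)."""
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(points).kneighbors(points)
    flags = np.zeros(len(points), dtype=bool)
    for i, nb in enumerate(idx):
        q = points[nb]
        centroid = q.mean(axis=0)
        # PCA: the eigenvector of the smallest eigenvalue is the local normal.
        w, v = np.linalg.eigh((q - centroid).T @ (q - centroid))
        normal = v[:, 0]
        # Distance of the query point from the best-fit neighborhood plane.
        flags[i] = abs((points[i] - centroid) @ normal) > local_tol
    return flags
```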


2017 ◽  
Vol 11 (4) ◽  
pp. 657-665 ◽  
Author(s):  
Ryuji Miyazaki ◽  
Makoto Yamamoto ◽  
Koichi Harada ◽  
...  

We propose a line-based region growing method for extracting planar regions with precise boundaries from a point cloud with an anisotropic distribution. Planar structure extraction from point clouds is an important process in many applications, such as maintenance of infrastructure components including roads and curbstones, because most artificial structures consist of planar surfaces. A mobile mapping system (MMS) can obtain a large number of points while traveling at a standard speed. However, if the system is equipped with a high-end laser scanner, the point cloud has an anisotropic distribution. In traditional point-based methods, this causes problems when calculating geometric information from neighboring points. In the proposed method, the precise boundary of a planar structure is maintained by appropriately creating line segments from the input point cloud. Furthermore, a normal vector at each line segment is precisely estimated for the region growing process. An experiment using a point cloud from an MMS simulation indicates that the proposed method extracts planar regions accurately. Additionally, we apply the proposed method to several real point clouds and evaluate its effectiveness via visual inspection.
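A minimal sketch of the line-creation step under stated assumptions: points along one scan profile are assumed to be ordered, and a new segment is started whenever the running direction turns by more than a tolerance. The per-segment normal estimation and the segment-to-region growing stages are not shown.

```python
import numpy as np

def split_scanline(points, angle_tol_deg=5.0, min_pts=5):
    """Split an ordered scan line (M, 3) into near-collinear segments.
    Returns a list of (begin, end) index pairs, end exclusive."""
    segments, start, direction = [], 0, None
    for i in range(1, len(points)):
        step = points[i] - points[i - 1]
        step = step / (np.linalg.norm(step) + 1e-12)
        if direction is None:
            direction = step
            continue
        # Start a new segment when the direction turns by more than the tolerance.
        if np.degrees(np.arccos(np.clip(direction @ step, -1, 1))) > angle_tol_deg:
            if i - start >= min_pts:
                segments.append((start, i))
            start, direction = i, None
        else:
            # Running average keeps the reference direction stable along the line.
            direction = direction + step
            direction /= np.linalg.norm(direction)
    if len(points) - start >= min_pts:
        segments.append((start, len(points)))
    return segments
```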


Author(s):  
Wenlei Xiao ◽  
Guiyu Liu ◽  
Gang Zhao

The aero-engine is an essential component of an aircraft. Due to the high cost of raw materials and its precise structure, the maintenance cost of an aero-engine is high. By repairing worn blades rather than replacing them with new ones, aero-engine maintenance costs can be reduced effectively. For repairing worn blades, existing methods mainly generate the tool path from a reconstructed surface with the aid of CAM software. In this paper, an effective tool path generation method for repairing blades after an additive manufacturing process is presented, which overcomes the low efficiency and complicated workflow of existing methods. The tool path is generated directly from point clouds without surface fitting. By splitting the point cloud and analyzing the geometric parameters of the points, machining areas can be recognized in the entire blade model. Each cutter location point is generated by offsetting along the normal vector direction of the corresponding point. The five-axis tool path is obtained by connecting the cutter location points in turn. Tool path optimization is further studied after the generation process. These algorithms eliminate the time consumed by surface fitting operations and can generate five-axis tool paths for repairing aero-engine blades efficiently.
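A minimal sketch of the cutter-location step as described: each machining point is offset along its unit normal, and connecting the resulting points in order yields the tool path. The offset distance (here a tool radius) and the ordering of points are assumptions for illustration; tool-axis orientation and path optimization are not shown.

```python
import numpy as np

def cutter_location_points(points, normals, tool_radius):
    """Offset each machining point along its unit normal to obtain cutter
    location (CL) points; points and normals are (N, 3) arrays."""
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return points + tool_radius * unit                # (N, 3) CL points

# Hypothetical usage: cl = cutter_location_points(pts, nrm, tool_radius=3.0)
```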


Optik ◽  
2015 ◽  
Vol 126 (19) ◽  
pp. 2157-2162 ◽  
Author(s):  
Huiyan Han ◽  
Xie Han ◽  
Fusheng Sun ◽  
Chunyan Huang
