Geometric Feature Extraction of Road from UAV Based Point Cloud Data

Author(s):  
Mustafa Zeybek ◽  
Serkan Biçici

2011 ◽  
Vol 299-300 ◽  
pp. 1091-1094 ◽  
Author(s):  
Jiang Zhu ◽  
Yuichi Takekuma ◽  
Tomohisa Tanaka ◽  
Yoshio Saito

Currently, the design and machining of complicated models are enabled by the progress of CAD/CAM systems. In shape measurement, high-precision measurement is performed using a coordinate measuring machine (CMM). To evaluate a machined part, the designed model produced by the CAD system and the point cloud data provided by the measurement system are analyzed and compared. Since the designed CAD model and the measured point cloud data are usually defined in different coordinate systems, it is necessary to register the two models in the same coordinate system for evaluation. In this research, a 3D model registration method based on feature extraction and the iterative closest point (ICP) algorithm is proposed. It efficiently and accurately registers two models in different coordinate systems, and effectively avoids convergence to local minima.
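As a rough illustration of the registration idea (not the authors' implementation), each ICP iteration alternates nearest-neighbor matching with a closed-form rigid alignment. The 2D sketch below uses only the standard library; all names are hypothetical.

```python
import math

def icp_2d(source, target, iterations=20):
    """Plain ICP loop in 2D: match each source point to its nearest
    target point, then solve the best rotation + translation in closed form."""
    src = [list(p) for p in source]
    for _ in range(iterations):
        # 1. Correspondences: nearest target point for each source point.
        pairs = [(p, min(target, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2))
                 for p in src]
        # 2. Centroids of the matched sets.
        n = len(pairs)
        cx = sum(p[0] for p, _ in pairs) / n
        cy = sum(p[1] for p, _ in pairs) / n
        tx = sum(q[0] for _, q in pairs) / n
        ty = sum(q[1] for _, q in pairs) / n
        # 3. Optimal rotation angle from the centered cross/dot sums.
        s_cross = sum((p[0]-cx)*(q[1]-ty) - (p[1]-cy)*(q[0]-tx) for p, q in pairs)
        s_dot = sum((p[0]-cx)*(q[0]-tx) + (p[1]-cy)*(q[1]-ty) for p, q in pairs)
        a = math.atan2(s_cross, s_dot)
        c, s = math.cos(a), math.sin(a)
        # 4. Apply the rotation about the source centroid, then translate.
        src = [[c*(x-cx) - s*(y-cy) + tx, s*(x-cx) + c*(y-cy) + ty]
               for x, y in src]
    return src
```

Because this refinement only converges locally, a feature-based pre-alignment of the kind proposed in the abstract is what supplies a good enough initial pose.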


2016 ◽  
Vol 31 (9) ◽  
pp. 889-896
Author(s):  
马鑫 MA Xin ◽  
魏仲慧 WEI Zhong-hui ◽  
何昕 HE Xin ◽  
于国栋 YU Guo-dong

2013 ◽  
Vol 2013 ◽  
pp. 1-19 ◽  
Author(s):  
Yi An ◽  
Zhuohan Li ◽  
Cheng Shao

Reliable feature extraction from 3D point cloud data is an important problem in many application domains, such as reverse engineering, object recognition, industrial inspection, and autonomous navigation. In this paper, a novel method is proposed for extracting geometric features from 3D point cloud data based on discrete curves. We extract the discrete curves from the 3D point cloud data and study the behavior of chord lengths, angle variations, and principal curvatures at geometric features along the discrete curves. Then, the corresponding similarity indicators are defined. Based on the similarity indicators, the geometric features can be extracted from the discrete curves, which are also the geometric features of the 3D point cloud data. The threshold values of the similarity indicators are taken from [0, 1], which characterizes the relative relationship and makes threshold setting easier and more reasonable. The experimental results demonstrate that the proposed method is efficient and reliable.
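The angle-variation cue can be sketched as follows. This is an illustrative reading of the idea, not the authors' exact similarity indicator; the normalization by π to reach [0, 1] and the threshold value are assumptions.

```python
import math

def angle_indicator(curve):
    """Turning angle at each interior point of a discrete 2D curve,
    normalized to [0, 1] (0 = straight continuation, 1 = full reversal)."""
    out = []
    for i in range(1, len(curve) - 1):
        ax, ay = curve[i][0]-curve[i-1][0], curve[i][1]-curve[i-1][1]
        bx, by = curve[i+1][0]-curve[i][0], curve[i+1][1]-curve[i][1]
        dot = ax*bx + ay*by
        na, nb = math.hypot(ax, ay), math.hypot(bx, by)
        ang = math.acos(max(-1.0, min(1.0, dot / (na*nb))))
        out.append(ang / math.pi)   # relative value in [0, 1]
    return out

def corner_indices(curve, threshold=0.25):
    """Interior indices whose normalized angle variation exceeds the threshold."""
    return [i + 1 for i, v in enumerate(angle_indicator(curve)) if v > threshold]
```

Because the indicator is relative rather than an absolute angle in radians, the same threshold can be reused across curves of different scales, which is the ease-of-setting argument made in the abstract.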


Author(s):  
S. D. Jawak ◽  
S. N. Panditrao ◽  
A. J. Luis

This work uses a canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy using very high-resolution panchromatic (PAN) and 8-band multispectral WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, was used to detect tree and building features by classifying point elevation values. The workflow comprised resampling the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generating a hill-shade image and an intensity image, extracting a digital surface model (DSM), generating a bare-earth digital elevation model (DEM), and extracting tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file. In the second stage, ortho-rectification was carried out in ERDAS LPS by incorporating well-distributed GCPs; the root mean square error (RMSE) for the WV-2 data was estimated to be 0.25 m using more than 10 well-distributed GCPs. Next, we generated the bare-earth DEM from the LiDAR point cloud data. In most cases, a bare-earth DEM does not represent the true ground elevation; hence, the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized based on the DTM to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM), or CHM, was calculated from the LiDAR data by subtracting the DEM from the DSM.
The CHM, or normalized DSM, represents the absolute height of all above-ground urban features relative to the ground; after normalization, the elevation value of a point indicates its height above the ground. The above-ground points were used for tree feature and building footprint extraction. For individual tree extraction, first- and last-return point clouds were used along with the bare-earth and building footprint models discussed above. In this study, scene-dependent extraction criteria were employed to improve the 3D feature extraction process. The LiDAR-based refining/filtering techniques used for bare-earth layer extraction were crucial for improving the subsequent 3D (tree and building) feature extraction. The PAN-sharpened WV-2 image (0.5 m spatial resolution) was used to assess the accuracy of the LiDAR-based 3D feature extraction. Our analysis yielded an accuracy of 98 % for tree feature extraction and 96 % for building feature extraction from the LiDAR data. The CHM method extracted a total of 15,143 tree features, of which 14,841 were visually confirmed on the PAN-sharpened WV-2 image; the confirmed tree features included both shadowed (13,830) and non-shadowed (1,011) trees. The CHM method thus overestimated 302 tree features that were not observed on the WV-2 image; one potential source of this overestimation was tree features adjacent to buildings. For building feature extraction, the algorithm extracted a total of 6,117 building features that were confirmed on the WV-2 image, even capturing buildings under trees (605) and buildings under shadow (112). Overestimation of tree and building features was observed to be the limiting factor in the 3D feature extraction process, caused by incorrect filtering of the point cloud in these areas.
Another potential source of overestimation was man-made structures, including skyscrapers and bridges, which were confounded with and extracted as buildings. This can be attributed to the low point density at building edges, on flat roofs, and under occlusions, due to which LiDAR cannot provide planimetric accuracy as precise as photogrammetric techniques (in segmentation), as well as to the lack of optimal use of textural and contextual information (especially at walls away from the roof) in the automatic extraction algorithm. In addition, there were no separate classes for bridges or for features lying within water, and multiple water height levels were not considered. Based on these inferences, we conclude that LiDAR-based 3D feature extraction supplemented by high-resolution satellite data is a promising approach for understanding and characterizing urban environments.
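The normalization step (CHM = DSM − DEM) is a cell-wise grid subtraction. The helper below is a minimal sketch, not the study's workflow: it assumes co-registered rasters of identical size, a hypothetical nodata sentinel, and clips small negative residuals (sensor noise) to zero.

```python
def canopy_height_model(dsm, dem, nodata=-9999.0):
    """Cell-wise CHM = DSM - DEM for two co-registered elevation grids.
    Negative residuals are clipped to 0; nodata cells pass through unchanged."""
    chm = []
    for dsm_row, dem_row in zip(dsm, dem):
        row = []
        for surface, ground in zip(dsm_row, dem_row):
            if surface == nodata or ground == nodata:
                row.append(nodata)
            else:
                row.append(max(0.0, surface - ground))
        chm.append(row)
    return chm
```

In the resulting grid every value is a height above ground, so a fixed height threshold can separate ground-level pixels from trees and buildings regardless of the underlying terrain.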


Author(s):  
M. Weinmann ◽  
A. Schmidt ◽  
C. Mallet ◽  
S. Hinz ◽  
F. Rottensteiner ◽  
...  

The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been investigated extensively, but separately, in recent years, their connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process, and show that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
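Geometric features of the kind referred to here are commonly derived from the eigenvalues of a neighborhood's 3×3 structure tensor (linearity, planarity, sphericity). The sketch below uses the standard closed-form eigensolver for symmetric 3×3 matrices; it illustrates the general technique, not the authors' code, and the feature definitions shown are the usual ones from the literature.

```python
import math

def cov3(points):
    """3x3 covariance (structure tensor) of a local 3D neighborhood."""
    n = len(points)
    m = [sum(p[i] for p in points) / n for i in range(3)]
    c = [[0.0] * 3 for _ in range(3)]
    for p in points:
        d = [p[i] - m[i] for i in range(3)]
        for i in range(3):
            for j in range(3):
                c[i][j] += d[i] * d[j] / n
    return c

def eig_sym3(a):
    """Eigenvalues of a symmetric 3x3 matrix in descending order (closed form)."""
    q = (a[0][0] + a[1][1] + a[2][2]) / 3.0
    p2 = (sum((a[i][i] - q) ** 2 for i in range(3))
          + 2.0 * (a[0][1] ** 2 + a[0][2] ** 2 + a[1][2] ** 2))
    if p2 < 1e-30:
        return [q, q, q]                      # isotropic: all eigenvalues equal
    p = math.sqrt(p2 / 6.0)
    b = [[(a[i][j] - (q if i == j else 0.0)) / p for j in range(3)]
         for i in range(3)]
    detb = (b[0][0] * (b[1][1] * b[2][2] - b[1][2] * b[2][1])
            - b[0][1] * (b[1][0] * b[2][2] - b[1][2] * b[2][0])
            + b[0][2] * (b[1][0] * b[2][1] - b[1][1] * b[2][0]))
    phi = math.acos(max(-1.0, min(1.0, detb / 2.0))) / 3.0
    l1 = q + 2.0 * p * math.cos(phi)
    l3 = q + 2.0 * p * math.cos(phi + 2.0 * math.pi / 3.0)
    return [l1, 3.0 * q - l1 - l3, l3]

def shape_features(points):
    """Dimensionality of a neighborhood: (linearity, planarity, sphericity)."""
    l1, l2, l3 = eig_sym3(cov3(points))
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1
```

Because these ratios change with the neighborhood radius or k, the paper's point of individually optimizing the neighborhood size is precisely about choosing the scale at which such features are most distinctive.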


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Yongshan Liu ◽  
Dehan Kong ◽  
Dandan Zhao ◽  
Xiang Gong ◽  
Guichun Han

Existing registration algorithms suffer from low precision and slow speed when registering large amounts of point cloud data. In this paper, we propose a point cloud registration algorithm based on feature extraction and matching that alleviates both problems. In the rough registration stage, the algorithm extracts feature points based on the judgment of retention points and bumps, which speeds up feature point extraction. During registration, FPFH features and the Hausdorff distance are used to search for corresponding point pairs, and the RANSAC algorithm is used to eliminate incorrect pairs, thereby improving the accuracy of the correspondences. In the precise registration phase, the algorithm uses an improved normal distribution transformation (INDT) algorithm. Experimental results show that, given a large amount of point cloud data, this algorithm has advantages in both time and precision.
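One ingredient named above, the Hausdorff distance between two point sets, can be written compactly. This sketch is not the paper's implementation: the brute-force nearest-point search shown is O(nm) and would be replaced by a spatial index at scale.

```python
import math

def directed_hausdorff(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

The asymmetry of the directed variant matters here: a set of candidate feature points may lie entirely near the other cloud (small forward distance) while still missing part of it (large backward distance), which is why the symmetric maximum is used as the matching criterion.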

