Filtering of Point Clouds from Photogrammetric Surface Reconstruction

Author(s):  
K. Wenzel ◽  
M. Rothermel ◽  
D. Fritsch ◽  
N. Haala

The density and volume of recorded 3D surface data increase steadily. In photogrammetric surface reconstruction and laser scanning applications in particular, these volumes often exceed the limits of the available hardware and software. The large point clouds and meshes acquired in such projects contain billions of vertices and require scalable data handling frameworks for further processing. Besides scalability to big data, these methods should also adapt to the non-uniform data density and precision resulting from varying acquisition distances, as is required for data from photogrammetry and laser scanning. For this purpose, we present a framework called Pine Tree, which is based on an out-of-core octree. It enables fast local data queries, such as nearest neighbor queries for filtering, while dynamically storing data to and loading data from disk. This way, large amounts of data can be processed with limited main memory. In this paper, we describe the Pine Tree approach as well as its underlying methods. Furthermore, examples for a filtering task are shown, where overlapping point clouds are thinned out by preserving only the locally densest point cloud. By adding an optional redundancy constraint, point validation and outlier rejection can be applied.
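
As an illustration of the thinning idea described in this abstract, the sketch below keeps, in overlap regions, only the points of the locally denser cloud and drops poorly supported points. It is a minimal in-memory sketch with a kd-tree standing in for the out-of-core octree; the parameters `k`, `radius`, and `min_support` are illustrative assumptions, not values from the Pine Tree framework.

```python
# Minimal in-memory sketch of density-based thinning of two overlapping clouds.
# The published framework works out-of-core on an octree; the kd-tree and the
# thresholds below (k, radius, min_support) are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def local_density(points, k=8):
    """Inverse mean distance to the k nearest neighbours as a density proxy."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=k + 1)        # first neighbour is the point itself
    return 1.0 / d[:, 1:].mean(axis=1)

def thin_overlap(cloud_a, cloud_b, radius=0.05, min_support=3):
    """Thin two overlapping clouds, preserving only the locally densest data."""
    dens_a, dens_b = local_density(cloud_a), local_density(cloud_b)
    tree_a, tree_b = cKDTree(cloud_a), cKDTree(cloud_b)

    def select(points, own_dens, own_tree, other_tree, other_dens):
        kept = np.ones(len(points), dtype=bool)
        for i, p in enumerate(points):
            # redundancy constraint: isolated points are treated as outliers
            support = len(own_tree.query_ball_point(p, radius)) - 1
            if support < min_support:
                kept[i] = False
                continue
            # in overlap regions keep only points of the locally denser cloud
            idx = other_tree.query_ball_point(p, radius)
            if idx and own_dens[i] < other_dens[idx].mean():
                kept[i] = False
        return points[kept]

    return (select(cloud_a, dens_a, tree_a, tree_b, dens_b),
            select(cloud_b, dens_b, tree_b, tree_a, dens_a))
```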

Author(s):  
M. Rothermel ◽  
N. Haala ◽  
D. Fritsch

Due to good scalability, systems for image-based dense surface reconstruction often employ stereo or multi-baseline stereo methods. These types of algorithms represent the scene by a set of depth or disparity maps which eventually have to be fused to extract a consistent, non-redundant surface representation. Generally, the individual depth observations across the maps vary in quality. Within the fusion process, not only the preservation of precision and detail but also density and robustness with respect to outliers are desirable. Since depth maps are prone to outliers, in this article we propose a local median-based algorithm for their fusion, eventually representing the scene as a set of oriented points. With respect to scalability, points induced by each of the available depth maps are streamed to cubic tiles, which can then be filtered in parallel. Arguing that the triangulation uncertainty is largest in the direction of the image rays, we define these rays as the main filter direction. In an additional strategy, we define the surface normals as the principal direction for median filtering/integration. The presented approach is straightforward to implement, since it employs standard octree and kd-tree structures enhanced by nearest neighbor queries optimized for cylindrical neighborhoods. We show that the presented method, in combination with the MVS (Rothermel et al., 2012), produces surfaces comparable to the results of the Middlebury MVS benchmark and compares favorably to a state-of-the-art algorithm on the Fountain dataset (Strecha et al., 2008). Moreover, we demonstrate its capability for depth map fusion in city-scale reconstructions derived from large-frame airborne imagery.
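
The following sketch illustrates the core filtering step outlined above: each point is moved to the median position of its cylindrical neighbourhood, measured along its own viewing ray. The cubic tiling and parallel streaming of the paper are replaced here by a single in-memory kd-tree, and the cylinder dimensions (`radius`, `half_len`) are illustrative assumptions.

```python
# Sketch of median-based depth fusion along viewing rays (single tile, in memory).
import numpy as np
from scipy.spatial import cKDTree

def fuse_along_rays(points, ray_dirs, radius=0.02, half_len=0.10):
    """Move each point to the median position of its cylindrical neighbourhood,
    measured along its own viewing ray (the direction of largest uncertainty)."""
    rays = ray_dirs / np.linalg.norm(ray_dirs, axis=1, keepdims=True)
    tree = cKDTree(points)
    fused = np.empty_like(points)
    search_r = np.hypot(radius, half_len)           # bounding sphere of the cylinder
    for i, (p, r) in enumerate(zip(points, rays)):
        idx = tree.query_ball_point(p, search_r)
        diff = points[idx] - p
        along = diff @ r                            # signed offset along the ray
        across = np.linalg.norm(diff - np.outer(along, r), axis=1)
        inside = (np.abs(along) <= half_len) & (across <= radius)
        fused[i] = p + np.median(along[inside]) * r # median filtering along the ray
    return fused
```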


Author(s):  
A. Walicka ◽  
N. Pfeifer ◽  
G. Jóźków ◽  
A. Borkowski

Abstract. Remote sensing techniques are an important tool in fluvial transport monitoring, since they allow for effective evaluation of the volume of transported material. Nevertheless, there is no methodology for the automatic calculation of the movement parameters of individual rocks. These parameters can be determined by point cloud registration. Hence, the goal of this study is to develop a robust algorithm for terrestrial laser scanning point cloud registration. The registration is based on the Iterative Closest Point (ICP) algorithm, which requires well-established initial transformation parameters. Thus, we propose to calculate the initial parameters based on key points representing maxima of Gaussian curvature. For each key point, a set of geometrical features is calculated. The key points are then matched between the two point clouds as nearest neighbors in the feature domain. Different combinations of neighborhood sizes, feature subsets, metrics and numbers of nearest neighbors were tested to obtain the highest ratio of properly to improperly matched key points. Finally, the RANSAC algorithm was used to calculate the initial transformation parameters between the point clouds, and the ICP algorithm was used to calculate the final transformation parameters. The investigations carried out on sample point clouds representing rocks enabled the adjustment of the algorithm's parameters and showed that Gaussian curvature can be used as a 3-dimensional key point detector for such objects. The proposed algorithm registered the point clouds with a mean distance between them of 3 mm.
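
A minimal sketch of the coarse-to-fine registration idea described above follows: RANSAC on matched key points yields the initial rigid transform, and point-to-point ICP refines it. Key-point detection (Gaussian curvature) and feature matching are assumed to have already produced the correspondence arrays `src_kp` / `tgt_kp`; all thresholds are illustrative, not the values tuned in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_from_correspondences(src, tgt):
    """Least-squares rotation R and translation t with tgt ~ R @ src + t (Kabsch)."""
    cs, ct = src.mean(axis=0), tgt.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (tgt - ct))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, ct - R @ cs

def ransac_init(src_kp, tgt_kp, n_iter=1000, inlier_tol=0.05):
    """Initial transform from matched key points, robust to wrong matches."""
    best, best_inliers = (np.eye(3), np.zeros(3)), -1
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        sel = rng.choice(len(src_kp), 3, replace=False)
        R, t = rigid_from_correspondences(src_kp[sel], tgt_kp[sel])
        resid = np.linalg.norm(src_kp @ R.T + t - tgt_kp, axis=1)
        inliers = (resid < inlier_tol).sum()
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best

def icp_refine(src, tgt, R, t, n_iter=30):
    """Point-to-point ICP starting from the RANSAC initialisation."""
    tree = cKDTree(tgt)
    for _ in range(n_iter):
        moved = src @ R.T + t
        _, nn = tree.query(moved)
        R, t = rigid_from_correspondences(src, tgt[nn])
    return R, t
```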


Author(s):  
Pedro Jose Silva Leite ◽  
Joao Marcelo Xavier Natario Teixeira ◽  
Thiago Souto Maior Cordeiro de Farias ◽  
Veronica Teichrieb ◽  
Judith Kelner

2019 ◽  
Vol 11 (8) ◽  
pp. 947 ◽  
Author(s):  
Lei Fan ◽  
Peter M. Atkinson

Point clouds obtained from laser scanning techniques are now a standard type of spatial data for characterising terrain surfaces. Some have been shared as open data for free access. A problem with the use of these free point cloud data is that the data density may be more than necessary for a given application, leading to higher computational cost in subsequent data processing and visualisation. In such cases, to make the dense point clouds more manageable, their data density can be reduced. This research proposes a new coarse-to-fine sub-sampling method for reducing point cloud data density, which honours the local surface complexity of a terrain surface. The method proposed is tested using four point clouds representing terrain surfaces with distinct spatial characteristics. The effectiveness of the iterative coarse-to-fine method is evaluated and compared against several benchmarks in the form of typical sub-sampling methods available in open source software for point cloud processing.
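 
The sketch below illustrates one way to make sub-sampling honour local surface complexity, in the coarse-to-fine spirit of this abstract: cells whose points deviate strongly from a local plane are subdivided further, so rough terrain keeps more points than flat terrain. This mirrors the general idea only; the cell sizes and the roughness threshold are assumptions, not the authors' method.

```python
import numpy as np

def roughness(points):
    """RMS distance of points to their best-fit plane (smallest PCA eigenvalue)."""
    if len(points) < 4:
        return 0.0
    cov = np.cov((points - points.mean(axis=0)).T)
    return float(np.sqrt(np.linalg.eigvalsh(cov)[0]))

def adaptive_subsample(points, cell=8.0, min_cell=0.5, max_rough=0.05):
    """Return one representative point per adaptively sized cell."""
    keep = []

    def recurse(pts, origin, size):
        if len(pts) == 0:
            return
        if size <= min_cell or len(pts) == 1 or roughness(pts) <= max_rough:
            # flat enough (or small enough): keep the point closest to the centroid
            centroid = pts.mean(axis=0)
            keep.append(pts[np.argmin(np.linalg.norm(pts - centroid, axis=1))])
            return
        half = size / 2.0
        octant = ((pts - origin) >= half).astype(int)   # 0/1 per axis
        key = octant[:, 0] * 4 + octant[:, 1] * 2 + octant[:, 2]
        for k in range(8):
            child_origin = origin + half * np.array([k // 4, (k // 2) % 2, k % 2])
            recurse(pts[key == k], child_origin, half)

    # coarse pass: bin the cloud into top-level cells of edge length `cell`
    base = points.min(axis=0)
    ijk = ((points - base) // cell).astype(int)
    for idx in np.unique(ijk, axis=0):
        mask = np.all(ijk == idx, axis=1)
        recurse(points[mask], base + idx * cell, cell)
    return np.array(keep)
```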


Author(s):  
A. C. Carrilho ◽  
R. C. dos Santos ◽  
G. G. Pessoa ◽  
M. Galo

Abstract. Nowadays, some aerial surveying projects (both manned and unmanned) integrate imagery and ranging sensor technology, thus allowing the surface to be reconstructed both from airborne laser scanning (ALS) and from the photogrammetric pipeline through digital image matching (DIM). DIM algorithms have been continuously improved; therefore, comparisons to other techniques such as ALS must be conducted. Despite the scientific community's efforts, there are few evaluations of building roof extraction from point clouds derived from both ALS and imagery. In this sense, this study provides a brief comparison between ALS and DIM point clouds and the accuracy that can be achieved in surface reconstruction and building roof extraction. The experiments indicated that in some situations, such as building roof shape extraction, the accuracy of the two techniques is similar; however, this does not hold for vertical accuracy, where larger differences were observed.
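
As an illustration of the kind of vertical-accuracy check discussed above, the sketch below pairs each point of the evaluated cloud (e.g., DIM) with its nearest planimetric neighbour in a reference cloud (e.g., ALS) and collects the height differences. The XY nearest-neighbour pairing and the distance threshold are assumptions of this illustration, not the evaluation protocol of the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def vertical_differences(evaluated, reference, max_xy_dist=0.5):
    """Height differences (evaluated - reference) for planimetrically close pairs."""
    tree = cKDTree(reference[:, :2])
    dist, nn = tree.query(evaluated[:, :2])
    valid = dist < max_xy_dist
    return evaluated[valid, 2] - reference[nn[valid], 2]

# typical summary statistics for such a comparison:
# dz = vertical_differences(dim_points, als_points)
# print(f"mean {dz.mean():.3f} m, RMSE {np.sqrt((dz**2).mean()):.3f} m")
```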


2010 ◽  
Vol 33 (8) ◽  
pp. 1396-1404 ◽  
Author(s):  
Liang ZHAO ◽  
Luo CHEN ◽  
Ning JING ◽  
Wei LIAO

2021 ◽  
Vol 13 (11) ◽  
pp. 2135
Author(s):  
Jesús Balado ◽  
Pedro Arias ◽  
Henrique Lorenzo ◽  
Adrián Meijide-Rodríguez

Mobile Laser Scanning (MLS) systems have proven their usefulness in the rapid and accurate acquisition of the urban environment. From the generated point clouds, street furniture can be extracted and classified without manual intervention. However, this process of acquisition and classification is not error-free; errors are caused mainly by disturbances. This paper analyses the effect of three disturbances (point density variation, ambient noise, and occlusions) on the classification of urban objects in point clouds. From point clouds acquired in real case studies, synthetic disturbances are generated and added. The point density reduction is generated by voxel-wise downsampling. The ambient noise is generated as random points within the bounding box of the object, and the occlusion is generated by eliminating points contained in a sphere. Samples with disturbances are classified by a pre-trained Convolutional Neural Network (CNN). The results showed different behaviours for each disturbance: the effect of density reduction depended on object shape and dimensions, that of ambient noise on the volume of the object, and that of occlusions on their size and location. Finally, the CNN was re-trained with a percentage of synthetic samples with disturbances. A performance improvement of 10–40% was reported, except for occlusions with a radius larger than 1 m.
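
The three synthetic disturbances described above can be sketched directly, applied to one object sample given as an N x 3 array. The parameter values (voxel size, number of noise points, occlusion radius) are illustrative assumptions.

```python
import numpy as np

def voxel_downsample(points, voxel=0.10):
    """Density reduction: keep one point per occupied voxel."""
    ijk = np.floor((points - points.min(axis=0)) / voxel).astype(int)
    _, keep = np.unique(ijk, axis=0, return_index=True)
    return points[keep]

def add_ambient_noise(points, n_noise=500, rng=np.random.default_rng(0)):
    """Ambient noise: random points drawn inside the object's bounding box."""
    low, high = points.min(axis=0), points.max(axis=0)
    noise = rng.uniform(low, high, size=(n_noise, 3))
    return np.vstack([points, noise])

def occlude(points, centre, radius=0.5):
    """Occlusion: remove all points inside a sphere around `centre`."""
    keep = np.linalg.norm(points - centre, axis=1) > radius
    return points[keep]
```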


2021 ◽  
Vol 13 (11) ◽  
pp. 2195
Author(s):  
Shiming Li ◽  
Xuming Ge ◽  
Shengfu Li ◽  
Bo Xu ◽  
Zhendong Wang

Today, mobile laser scanning and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained using these methods have significant differences and complementarity. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of large amounts of missing data, large variations in point density, and scale differences. The robustness of this method is due to its elimination of noise in the extracted linear features and its 2D incremental registration strategy. There are three main contributions of our work: (1) the development of an end-to-end automatic cross-source point-cloud registration method; (2) a way to effectively extract linear features and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, while other methods have difficulty obtaining satisfactory registration results efficiently. Moreover, this method can be extended to more point-cloud sources.
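
Restoring the unknown scale of a photogrammetric cloud against metric MLS data, as mentioned above, amounts to estimating a similarity transform. The sketch below uses the Umeyama method for that step; the correspondences are assumed to come from matched features, and the feature extraction itself (linear features in the paper) is not shown.

```python
import numpy as np

def umeyama(src, tgt):
    """Return s, R, t such that tgt ~ s * R @ src + t (least-squares similarity)."""
    mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
    src_c, tgt_c = src - mu_s, tgt - mu_t
    cov = tgt_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against reflections
        S[2, 2] = -1
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src         # scale factor to be restored
    t = mu_t - s * R @ mu_s
    return s, R, t
```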


2021 ◽  
Vol 13 (3) ◽  
pp. 507
Author(s):  
Tasiyiwa Priscilla Muumbe ◽  
Jussi Baade ◽  
Jenia Singh ◽  
Christiane Schmullius ◽  
Christian Thau

Savannas are heterogeneous ecosystems, composed of varied spatial combinations and proportions of woody and herbaceous vegetation. Most field-based inventory and remote sensing methods fail to account for the lower-stratum vegetation (i.e., shrubs and grasses) and thus underrepresent the carbon storage potential of savanna ecosystems. For detailed analyses at the local scale, Terrestrial Laser Scanning (TLS) has proven to be a promising remote sensing technology over the past decade. Accordingly, several review articles already exist on the use of TLS for characterizing 3D vegetation structure. However, a gap exists regarding how TLS studies for accurate vegetation structure estimation are distributed across biomes. A comprehensive review was conducted through a meta-analysis of 113 relevant research articles using 18 attributes. The review covered a range of aspects, including the global distribution of TLS studies, parameters retrieved from TLS point clouds and retrieval methods. The review also examined the relationship between the TLS retrieval method and the overall accuracy in parameter extraction. To date, TLS has mainly been used to characterize vegetation in temperate, boreal/taiga and tropical forests, with little emphasis on savannas. TLS studies in the savanna focused on the extraction of very few vegetation parameters (e.g., DBH and height) and did not consider the shrub contribution to the overall Above Ground Biomass (AGB). Future work should therefore focus on developing new and adjusting existing algorithms for vegetation parameter extraction in the savanna biome, improving predictive AGB models through 3D reconstructions of savanna trees and shrubs, as well as quantifying AGB change through the application of multi-temporal TLS. The integration of data from various sources and platforms (e.g., TLS with airborne LiDAR) is recommended for improved vegetation parameter extraction (including AGB) at larger spatial scales. The review highlights the huge potential of TLS for accurate savanna vegetation extraction by discussing TLS opportunities, challenges and potential future research in the savanna biome.

