Automatic Building Outline Extraction from ALS Point Clouds by Ordered Points Aided Hough Transform

2019, Vol 11 (14), pp. 1727
Author(s): Elyta Widyaningrum, Ben Gorte, Roderik Lindenbergh

Many urban applications require building polygons as input. However, manual extraction from point cloud data is time- and labor-intensive. The Hough transform is a well-known procedure for extracting line features. Unfortunately, current Hough-based approaches lack the flexibility to effectively extract outlines from arbitrary buildings. We found that available point order information has so far never been used. Using ordered building edge points allows us to present a novel ordered points-aided Hough Transform (OHT) for extracting high-quality building outlines from an airborne LiDAR point cloud. First, a Hough accumulator matrix is constructed based on a voting scheme in parametric line space (θ, r). The variance of angles in each column is used to determine dominant building directions. We propose a hierarchical filtering and clustering approach to obtain accurate lines based on detected hotspots and ordered points. An Ordered Point List matrix consisting of ordered building edge points enables the detection of line segments of arbitrary direction, resulting in high-quality building roof polygons. We tested our method on three datasets with different characteristics: one new dataset in Makassar, Indonesia, and two benchmark datasets in Vaihingen, Germany. To the best of our knowledge, our algorithm is the first Hough method that is highly adaptable, since it works for buildings with edges of different lengths and arbitrary relative orientations. The results show that our method delivers high completeness (between 90.1% and 96.4%) and correctness (all over 96%). The positional accuracy of the building corners is between 0.2 and 0.57 m RMSE. The quality rate (89.6%) for the Vaihingen-B benchmark outperforms all existing state-of-the-art methods. Other solutions for the challenging Vaihingen-A dataset are not yet available, while we achieve a quality score of 93.2%. Results for edges of arbitrary directions are demonstrated on the complex buildings around the EYE museum in Amsterdam.
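To make the voting step concrete, here is a minimal NumPy sketch of Hough voting in (θ, r) space over 2D edge points. The function names and the variance-based column score are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hough_accumulator(points, theta_res_deg=1.0, r_res=0.5):
    """Vote in (theta, r) parametric line space for 2D building edge points.

    points: (N, 2) array. Returns the accumulator plus the theta and r bins.
    """
    thetas = np.deg2rad(np.arange(0.0, 180.0, theta_res_deg))
    r_max = np.linalg.norm(points, axis=1).max()
    r_bins = np.arange(-r_max, r_max + r_res, r_res)
    acc = np.zeros((len(r_bins), len(thetas)), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in points:
        r = x * cos_t + y * sin_t                 # r for every theta at once
        rows = np.clip(np.searchsorted(r_bins, r), 0, len(r_bins) - 1)
        acc[rows, np.arange(len(thetas))] += 1
    return acc, thetas, r_bins

def dominant_directions(acc, thetas, k=2):
    # Strong building directions produce sharply peaked accumulator columns;
    # column variance serves here as a simple peakedness score (an assumption,
    # standing in for the paper's per-column criterion).
    return thetas[np.argsort(acc.var(axis=0))[-k:]]
```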

2021, Vol 13 (20), pp. 4031
Author(s): Ine Rosier, Jan Diels, Ben Somers, Jos Van Orshoven

Rural European landscapes are characterized by a variety of vegetated landscape elements. Although it is often not their main function, they have the potential to affect river discharge and the frequency, extent, depth, and duration of downstream floods by creating both hydrological discontinuities and connections across the landscape. Information about the extent to which individual landscape elements and their spatial location affect peak river discharge and flood frequency and severity in agricultural catchments under specific meteorological conditions is limited. This knowledge gap can partly be explained by the lack of exhaustive inventories of the presence, geometry, and hydrological traits of vegetated landscape elements (vLEs), which in turn is due to the lack of appropriate techniques and source data to produce such inventories and keep them up to date. In this paper, a multi-step methodology is proposed to delineate and classify vLEs based on LiDAR point cloud data in three study areas in Flanders, Belgium. We classified the LiDAR point cloud data into the classes ‘vegetated landscape element point’ and ‘other’ using a Random Forest model with a classification accuracy ranging between 0.92 and 0.97. The landscape element objects were further classified into the classes ‘tree object’ and ‘shrub object’ using a Logistic Regression model with an area-based accuracy ranging between 0.34 and 0.95.
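As an illustration of the point-wise classification step, a minimal scikit-learn sketch follows. The features and labels are synthetic placeholders, since the paper's actual LiDAR features (e.g., height above ground, local planarity, return number) and training data are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: per-point features derived from the LiDAR cloud; y: 1 = vegetated
# landscape element point, 0 = other. Both are synthetic stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("classification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```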


Author(s): Y. Zhang, G. Q. Zhou, S. H. Tang, P. W. Xing, C. C. Huang

Abstract. Due to the semi-random distribution of ground points collected by an airborne LiDAR system, it is difficult in practice to make the laser footprints fall exactly on control points with known coordinates, so accuracy cannot be evaluated by directly comparing coordinates. In this paper, based on a purpose-designed air-to-ground target plate, a fuzzy c-means clustering algorithm is proposed to extract the point cloud data on the target, exploiting the different laser echo intensities returned by different ground objects. The center coordinates of the target are then fitted from the on-target point cloud using the circumscribed circle of the edge points, so that both the planimetric and the elevation accuracy of the airborne LiDAR system can be evaluated. The results show that this model-fitting method can quickly and effectively evaluate the accuracy of airborne LiDAR, and that the method is simple and feasible.
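The two key computations, separating target points by echo intensity and fitting the target center, can be sketched as follows. This uses a generic two-cluster fuzzy c-means and a Kasa least-squares circle fit as stand-ins, not the authors' exact method.

```python
import numpy as np

def fcm_two_clusters(intensity, n_iter=100, m=2.0):
    """Minimal two-cluster fuzzy c-means on echo intensity (illustrative).

    Returns memberships u (N, 2) and cluster centers v (2,); points with
    high membership in the brighter cluster are taken as on-target points.
    """
    v = np.array([intensity.min(), intensity.max()], dtype=float)
    for _ in range(n_iter):
        d = np.abs(intensity[:, None] - v[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)          # normalize memberships
        v = (u**m * intensity[:, None]).sum(0) / (u**m).sum(0)
    return u, v

def fit_circle_center(xy):
    """Kasa least-squares circle fit to the target edge points.

    Solves x^2 + y^2 + a*x + b*y + c = 0; returns the center and radius.
    """
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)[0]
    cx, cy = -a / 2.0, -b / 2.0
    return (cx, cy), np.sqrt(cx**2 + cy**2 - c)
```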


Drones, 2019, Vol 3 (1), pp. 29
Author(s): Andrew Marx, Yu-Hsi Chou, Kevin Mercy, Richard Windisch

The availability and precision of unmanned aerial systems (UAS) permit the repeated collection of very high-quality three-dimensional (3D) data to monitor high-interest areas, such as dams, urban areas, or erosion-prone coastlines. However, challenges exist in the temporal analysis of these data, specifically in conducting change-detection analysis on the high-quality point cloud data. These files are very large and contain points in varying locations that do not align between scenes. The large file sizes also limit the use of the data for individuals with low computational resources, such as first responders or forward-deployed soldiers. In response, this manuscript presents an approach that aggregates data spatially into voxels to provide the user with a lightweight, web-based exploitation system coupled with a flexible backend database. The system creates a robust set of tools to analyze large temporal stacks of 3D data and reduces data size by 78%, all while retaining the ability to query the original point cloud data. This approach offers a solution for organizations analyzing high-resolution temporal point clouds, as well as a possible solution for operations in areas with poor computational and connectivity resources that require high-quality 3D data for decision support and planning.
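A voxel aggregation of this kind can be sketched in a few lines of NumPy. The voxel size and the returned per-voxel statistic are illustrative choices, not the manuscript's schema.

```python
import numpy as np

def voxelize(points, voxel_size=0.5):
    """Aggregate a point cloud into voxels for lightweight change detection.

    points: (N, 3) array. Returns unique voxel indices and per-voxel point
    counts; other attributes (mean height, color) could be aggregated the
    same way. Illustrative sketch only.
    """
    idx = np.floor(points / voxel_size).astype(np.int64)
    voxels, counts = np.unique(idx, axis=0, return_counts=True)
    return voxels, counts

# Change detection then reduces to comparing per-voxel occupancy and
# statistics between two epochs instead of matching raw, misaligned points.
```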


2020, Vol 12 (24), pp. 4057
Author(s): Haoyi Xiu, Takayuki Shinohara, Masashi Matsuoka, Munenari Inoguchi, Ken Kawabe, ...

Collapsed buildings should be detected with the highest priority during earthquake emergency response, due to the associated fatality rates. Although deep learning-based damage detection using vertical aerial images can achieve high performance, as depth information cannot be obtained, it is difficult to detect collapsed buildings when their roofs are not heavily damaged. Airborne LiDAR can efficiently obtain the 3D geometries of buildings (in the form of point clouds) and thus has greater potential to detect various collapsed buildings. However, there have been few previous studies on deep learning-based damage detection using point cloud data, due to a lack of large-scale datasets. Therefore, in this paper, we aim to develop a dataset tailored to point cloud-based building damage detection, in order to investigate the potential of point cloud data in collapsed building detection. Two types of building data are created: building roof and building patch, which contains the building and its surroundings. Comprehensive experiments are conducted under various data availability scenarios (pre–post-building patch, post-building roof, and post-building patch) with varying reference data. The pre–post scenario tries to detect damage using pre-event and post-event data, whereas post-building patch and roof only use post-event data. Damage detection is implemented using both basic and modern 3D point cloud-based deep learning algorithms. To adapt a single-input network, which can only accept one building’s data for a prediction, to the pre–post (double-input) scenario, a general extension framework is proposed. Moreover, a simple visual explanation method is proposed, in order to conduct sensitivity analyses for validating the reliability of model decisions under the post-only scenario. Finally, the generalization ability of the proposed approach is tested using buildings with different architectural styles acquired by a distinct sensor. The results show that point cloud-based methods can achieve high accuracy and are robust under training data reduction. The sensitivity analysis reveals that the trained models are able to locate roof deformations precisely, but have difficulty recognizing global damage, such as that relating to the roof inclination. Additionally, it is revealed that the model decisions are overly dependent on debris-like objects when surroundings information is available, which leads to misclassifications. By training on the developed dataset, the model can achieve moderate accuracy on another dataset with different architectural styles without additional training.
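The single- to double-input extension can be illustrated with a minimal PyTorch sketch: a shared point encoder embeds the pre- and post-event clouds, and the concatenated features feed a classifier. The encoder and head below are deliberately simplistic stand-ins, not the networks evaluated in the paper.

```python
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Minimal PointNet-style encoder: per-point MLP + global max pooling."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, pts):                       # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values    # (B, feat_dim)

class PrePostClassifier(nn.Module):
    """Double-input extension: a shared encoder embeds both epochs; the
    concatenated features feed a damage classifier (illustrative sketch)."""
    def __init__(self, feat_dim=128, n_classes=2):
        super().__init__()
        self.encoder = PointEncoder(feat_dim)
        self.head = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, pre_pts, post_pts):
        f = torch.cat([self.encoder(pre_pts), self.encoder(post_pts)], dim=1)
        return self.head(f)

# Smoke test on random clouds: 4 buildings, 1024 points each.
logits = PrePostClassifier()(torch.randn(4, 1024, 3), torch.randn(4, 1024, 3))
```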


Author(s): Yi-Chen Chen, Chao-Hung Lin

With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. For efficient retrieval, the models in a database are generally encoded compactly using a shape descriptor. However, most geometric descriptors in related works apply to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by a Light Detection and Ranging (LiDAR) system, owing to its efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in the database. The main goal of the encoding is that database models and input point clouds are encoded consistently. First, top-view depth images of buildings are generated to represent the geometry of the building roof surface. Second, geometric features based on building height, edges, and planes are extracted from the depth images. Finally, descriptors are built from spatial histograms of these features and used in the 3D model retrieval system. For retrieval, models are matched by comparing the encoding coefficients of point clouds and building models. In the experiments, a database of about 900,000 3D models collected from the Internet is used to evaluate retrieval, and the results of the proposed method show a clear superiority over related methods.
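The depth-image encoding step might look like the following NumPy sketch, which rasterizes a building cloud into a top-view image keeping the maximum height per pixel. The cell size and function name are illustrative assumptions.

```python
import numpy as np

def topview_depth_image(points, cell=0.5):
    """Rasterize a building point cloud into a top-view depth image.

    Each pixel keeps the maximum height falling into it, approximating the
    roof surface; `cell` is an illustrative ground-sampling distance.
    """
    col, row = (np.floor((points[:, :2] - points[:, :2].min(axis=0)) / cell)
                .astype(int).T)
    img = np.full((row.max() + 1, col.max() + 1), -np.inf)
    np.maximum.at(img, (row, col), points[:, 2])   # max height per pixel
    img[np.isinf(img)] = np.nan                    # mark empty pixels
    return img
```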


2021
Author(s): Kacper Pluta, Gisela Domej

The process of transforming point cloud data into high-quality meshes or CAD objects is, in general, not a trivial task. Many problems, such as holes, enclosed pockets, or small tunnels, can occur during the surface reconstruction process, even if the point cloud is of excellent quality. These issues are often difficult to resolve automatically and may require detailed manual adjustments. Nevertheless, in this work, we present a semi-automatic pipeline that requires minimal user-provided input and still allows for high-quality surface reconstruction. Moreover, the presented pipeline can be successfully used by non-specialists and relies only on commonly available tools.

Our pipeline consists of the following main steps. First, a normal field over the point cloud is estimated, and Screened Poisson Surface Reconstruction is applied to obtain the initial mesh. At this stage, the reconstructed mesh usually contains holes, small tunnels, and excess parts, i.e., surface parts that do not correspond to the point cloud geometry. In the next step, we apply morphological and geometrical filtering to resolve the problems mentioned before. Some fine details are also removed during the filtering process; however, we show how these can be restored, without reintroducing the problems, using a distance-guided projection. In the last step, the filtered mesh is re-meshed to obtain a high-quality triangular mesh, which, if needed, can be converted to a CAD object represented by a small number of quadrangular NURBS patches.

Our workflow is designed for a point cloud recorded by a laser scanner inside one of seven artificially carved caves resembling chapels, with several niches and passages to the outside of a sandstone hill slope in Georgia. We note that we have not tested the approach on other data. Nevertheless, we believe that a similar pipeline can be applied to other types of point cloud data, e.g., natural caves or mining shafts, geotechnical constructions, rock cliffs, geo-archeological sites, etc. This workflow was created independently; it is not part of a funded project and does not advertise particular software. The case study's point cloud data was used by courtesy of the Dipartimento di Scienze dell'Ambiente e della Terra of the Università degli Studi di Milano–Bicocca.
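Assuming commonly available tooling, the first steps of such a pipeline could look like this Open3D sketch: normal estimation, Screened Poisson reconstruction, and removal of low-support excess surfaces. The file names and parameters are hypothetical.

```python
import numpy as np
import open3d as o3d

# Load the scan and estimate consistently oriented normals.
pcd = o3d.io.read_point_cloud("cave_scan.ply")     # hypothetical file
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=30)

# Screened Poisson reconstruction; densities give per-vertex point support.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10)

# Drop the lowest-support vertices: these tend to form the excess surfaces
# that do not correspond to the point cloud geometry.
d = np.asarray(densities)
mesh.remove_vertices_by_mask(d < np.quantile(d, 0.05))
o3d.io.write_triangle_mesh("cave_mesh_initial.ply", mesh)
```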


Author(s): Jiayong Yu, Longchen Ma, Maoyi Tian, Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which results in a decrease in the density of the collected scanning points. This affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, wherein the problem of registering point cloud data and image data is converted into a problem of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using the collinearity equations based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show that the proposed method achieves high registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
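The core idea, reducing 3D-to-2D registration to 2D image matching, can be sketched with OpenCV. The file names are hypothetical, and the exterior-orientation solve via the collinearity equations is omitted.

```python
import cv2
import numpy as np

# Match features between a LiDAR intensity image and an optical image.
intensity_img = cv2.imread("lidar_intensity.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
optical_img = cv2.imread("uav_frame.png", cv2.IMREAD_GRAYSCALE)          # hypothetical

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(intensity_img, None)
kp2, des2 = orb.detectAndCompute(optical_img, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Keep the best matches; their pixel pairs would feed the exterior-orientation
# solve based on the collinearity equations.
src = np.float32([kp1[m.queryIdx].pt for m in matches[:100]])
dst = np.float32([kp2[m.trainIdx].pt for m in matches[:100]])
```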


2021, Vol 13 (11), pp. 2195
Author(s): Shiming Li, Xuming Ge, Shengfu Li, Bo Xu, Zhendong Wang

Today, mobile laser scanning (MLS) and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained using these methods differ significantly yet are complementary. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of large amounts of missing data, large variations in point density, and scale differences. The robustness of this method is due to its elimination of noise in the extracted linear features and its 2D incremental registration strategy. There are three main contributions of our work: (1) the development of an end-to-end automatic cross-source point-cloud registration method; (2) a way to effectively extract the linear features and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, while other methods have difficulty obtaining satisfactory results efficiently. Moreover, this method can be extended to more point-cloud sources.
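One common closed form for restoring scale in such a registration is the Umeyama similarity estimate. The sketch below assumes matched 2D feature points (e.g., endpoints of corresponding linear features) and is a generic stand-in, not the paper's incremental strategy.

```python
import numpy as np

def similarity_2d(src, dst):
    """Estimate scale s, rotation R, translation t with dst ~ s * R @ src + t.

    src/dst: (N, 2) corresponding 2D points. Umeyama-style closed form.
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d                  # centered point sets
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))   # cross-covariance SVD
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])  # reflection guard
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(0).sum()  # scale recovery
    t = mu_d - s * R @ mu_s
    return s, R, t
```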

