Segmentation and Reconstruction of Buildings with Aerial Oblique Photography Point Clouds

Author(s): P. Liu, Y. C. Li, W. Hu, X. B. Ding

Oblique photography has gained wide recognition as an effective technology for 3D city model construction. Oblique and vertical images with high overlap and different viewing angles can produce large volumes of dense matching point cloud data with spectral information. This paper presents a method for building reconstruction from dense stereo-matching point clouds derived from aerial oblique images, comprising building segmentation and roof reconstruction. We summarize the characteristics of stereo-matching point clouds from aerial oblique images and outline the problems with existing methods. We then present a method for segmenting building roofs based on color and geometric derivatives such as normals and curvature. Finally, a building reconstruction approach is developed based on geometric relationships. Experiments and analysis show that the methods are effective for building reconstruction with stereo-matching point clouds from aerial oblique images.
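
The sketch below illustrates the kind of geometric segmentation this abstract describes: per-point normals and curvature estimated by local PCA, followed by region growing over a k-nearest-neighbour graph. It is not the authors' implementation; the neighbourhood size and the angle/curvature thresholds are assumptions, and the colour cue mentioned in the abstract could be added as a further consistency test inside the growing loop.

```python
import numpy as np
from scipy.spatial import cKDTree

def normals_and_curvature(points, k=15):
    """Estimate per-point normals and surface variation via local PCA."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    curvature = np.empty(len(points))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)
        w, v = np.linalg.eigh(cov)            # eigenvalues in ascending order
        normals[i] = v[:, 0]                  # normal = smallest eigenvector
        curvature[i] = w[0] / (w.sum() + 1e-12)
    return normals, curvature

def region_grow(points, normals, curvature, k=15,
                angle_thresh_deg=10.0, curv_thresh=0.03):
    """Group points whose normals agree and whose surface is locally flat."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones(len(points), dtype=int)
    region = 0
    for seed in np.argsort(curvature):        # seed the flattest points first
        if labels[seed] != -1:
            continue
        labels[seed] = region
        stack = [seed]
        while stack:
            p = stack.pop()
            for q in idx[p]:
                if (labels[q] == -1
                        and abs(normals[p] @ normals[q]) > cos_thresh
                        and curvature[q] < curv_thresh):
                    labels[q] = region
                    stack.append(q)
        region += 1
    return labels
```

Seeding from the flattest points first makes large planar roof faces grow before noisy facade or vegetation points claim them.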

Author(s): L. Gézero, C. Antunes

In the last few years, LiDAR sensors installed in terrestrial vehicles have proven to be an efficient means of collecting very dense 3D georeferenced information. The possibility of creating very dense point clouds that represent the surface surrounding the sensor at a given moment, in a fast, detailed and easy way, shows the potential of this technology for large-scale cartography and digital terrain model production. However, there are still some limitations associated with its use. When several acquisitions of the same area are made with the same device, differences between the clouds can be observed. These differences can range from a few centimetres to several tens of centimetres, mainly in urban and densely vegetated areas where occlusion of the GNSS signal degrades the georeferenced trajectory. In this article, a different point cloud registration method is proposed. In addition to its efficiency and speed of execution, the main advantage of the method is that the adjustment is made continuously along the trajectory, based on GPS time. The process is fully automatic and uses only information recorded in the standard LAS files, without the need for any auxiliary information, in particular regarding the trajectory.
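
As a minimal sketch of the idea of a correction applied continuously along the trajectory as a function of GPS time, using only what a standard LAS file stores: the snippet below (using the laspy package, assumed available, and a point format that stores GPS time) interpolates a vertical offset at each point's acquisition time. The file names and the control offsets are placeholders; in practice the offsets would be estimated from overlapping strips or reference data.

```python
import numpy as np
import laspy  # assumed available; reads and writes standard LAS files

las = laspy.read("strip.las")                 # hypothetical input file
t = np.asarray(las.gps_time)

# Control points of the correction function: (gps_time, dz) pairs. These
# placeholder values stand in for offsets estimated from overlapping data.
ctrl_t = np.array([t.min(), 0.5 * (t.min() + t.max()), t.max()])
ctrl_dz = np.array([0.00, 0.12, 0.05])        # assumed offsets in metres

# Each point is shifted by a correction interpolated at its own acquisition
# time, so the adjustment varies continuously along the trajectory.
dz = np.interp(t, ctrl_t, ctrl_dz)
las.z = np.asarray(las.z) + dz
las.write("strip_adjusted.las")
```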


Author(s): S. Bullinger, C. Bodensteiner, M. Arens

Abstract. The reconstruction of accurate three-dimensional environment models is one of the most fundamental goals in the field of photogrammetry. Since satellite images provide suitable properties for obtaining large-scale environment reconstructions, a variety of stereo-matching-based methods exist to reconstruct point clouds for satellite image pairs. Recently, a Structure from Motion (SfM) based approach has been proposed that allows point clouds to be reconstructed from multiple satellite images. In this work, we propose an extension of this SfM-based pipeline that allows us to reconstruct not only point clouds but also watertight meshes including texture information. We provide a detailed description of several steps that are mandatory to exploit state-of-the-art mesh reconstruction algorithms in the context of satellite imagery. These include the decomposition of finite projective camera calibration matrices, a skew correction of the corresponding depth maps and input images, and the recovery of real-world depth maps from reparameterized depth values. The paper presents an extensive quantitative evaluation on multi-date satellite images demonstrating that the proposed pipeline, combined with current meshing algorithms, outperforms state-of-the-art point cloud reconstruction algorithms in terms of completeness and median error. We make the source code of our pipeline publicly available.
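
One of the steps named above, the decomposition of a finite projective camera calibration matrix, has a textbook form: P ~ K [R | t], recovered via RQ decomposition of the left 3x3 block. The sketch below shows that standard decomposition, not necessarily the exact routine used in the authors' pipeline.

```python
import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    """Split a 3x4 finite projective camera matrix P ~ K [R | t]."""
    P = np.asarray(P, dtype=float)
    M = P[:, :3]
    K, R = rq(M)                        # M = K @ R, K upper triangular
    # RQ is unique only up to the signs of K's diagonal: force it positive.
    S = np.diag(np.sign(np.diag(K)))
    K, R = K @ S, S @ R
    if np.linalg.det(R) < 0:            # P is defined only up to scale;
        R, P = -R, -P                   # flip the sign so R is a proper rotation
    t = np.linalg.solve(K, P[:, 3])
    return K / K[2, 2], R, t            # K normalized so K[2, 2] = 1
```

A quick sanity check is that K @ np.column_stack([R, t]) reproduces P up to a positive scale factor.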


2021
Author(s): Yipeng Yuan

Demand for three-dimensional (3D) urban models keeps growing in various civil and military applications. Topographic LiDAR systems are capable of acquiring elevation data directly over terrain features. However, the task of creating a large-scale virtual environment remains time-consuming and largely manual. In this thesis, a method for 3D building reconstruction directly from LiDAR point clouds is developed, consisting of building roof detection, roof outline extraction and regularization, and 3D building model generation. In the proposed approach, a new algorithm combining a Gaussian Markov Random Field (GMRF) with Markov Chain Monte Carlo (MCMC) is used to segment point clouds for building roof detection. The modified convex hull (MCH) algorithm is used to extract roof outlines, followed by regularization of the extracted outlines using a modified hierarchical regularization algorithm. Finally, 3D building models are generated in an ArcGIS environment. The results obtained demonstrate the effectiveness and satisfactory accuracy of the developed method.
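
As a simplified stand-in for the outline-extraction step: the thesis uses a modified convex hull (MCH) that can follow concave roof boundaries, while the sketch below only computes the plain 2D convex hull of a segmented roof cluster with SciPy, which is the starting point such an algorithm refines. The synthetic roof patch is purely illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def roof_outline(roof_points):
    """roof_points: (N, 3) array of points labelled as one roof segment."""
    xy = roof_points[:, :2]                   # project to the horizontal plane
    hull = ConvexHull(xy)
    outline = xy[hull.vertices]               # ordered boundary vertices
    return np.vstack([outline, outline[:1]])  # close the polygon

# Example with a synthetic, roughly rectangular roof patch:
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 10, 500),
                       rng.uniform(0, 6, 500),
                       np.full(500, 25.0)])
print(roof_outline(pts)[:5])
```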


Author(s): X. Huang, R. Qin, M. Chen

Abstract. Stereo dense matching has become one of the dominant tools for 3D reconstruction of urban regions, due to its low cost and high flexibility in generating 3D points. However, image-derived 3D points are often inaccurate around building edges, which limits their use in several vision tasks (e.g. building modelling). To generate 3D point clouds or digital surface models (DSM) with sharp boundaries, this paper integrates robustly matched lines to improve dense matching and proposes a non-local disparity refinement of building edges through an iterative least squares plane adjustment approach. In our method, we first extract and match straight lines in the images using epipolar constraints, then detect building edges among these straight lines by comparing matching results on both sides of each line, and finally develop a non-local disparity refinement method through an iterative least squares plane adjustment constrained by the matched straight lines to yield sharper and more accurate edges. Experiments conducted on both satellite and aerial data demonstrate that the proposed method is able to generate more accurate DSMs with sharper object boundaries.
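
A hedged illustration of the core numerical step, not the authors' full non-local refinement: fit a disparity plane d ≈ a·x + b·y + c to the pixels on one side of a matched building edge by iterated, trimmed least squares, then resample disparities near the edge from the fitted plane so the boundary stays sharp. The iteration count and trimming factor are assumptions.

```python
import numpy as np

def fit_disparity_plane(px, py, d, n_iter=3, trim_sigma=2.0):
    """px, py: pixel coordinates; d: measured disparities (1-D arrays)."""
    keep = np.ones(len(d), dtype=bool)
    coeffs = np.zeros(3)
    for _ in range(n_iter):
        A = np.column_stack([px[keep], py[keep], np.ones(keep.sum())])
        coeffs, *_ = np.linalg.lstsq(A, d[keep], rcond=None)
        residuals = d - np.column_stack([px, py, np.ones(len(d))]) @ coeffs
        sigma = residuals[keep].std()
        keep = np.abs(residuals) < trim_sigma * max(sigma, 1e-6)  # drop outliers
    return coeffs                              # plane parameters (a, b, c)

def refine_disparities(px, py, coeffs):
    """Replace noisy disparities near the edge with plane-consistent values."""
    return np.column_stack([px, py, np.ones(len(px))]) @ coeffs
```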


Author(s): A. Murtiyoso, P. Grussenmeyer, T. Freville

Close-range photogrammetry is an image-based technique that has often been used for the 3D documentation of heritage objects. Recently, advances in the field of image processing and UAVs (Unmanned Aerial Vehicles) have resulted in a renewed interest in this technique. However, commercially ready-to-use UAVs are often equipped with smaller sensors in order to minimize payload, and the quality of the documentation is still an issue. In this research, two commercial UAVs (the Sensefly Albris and the DJI Phantom 3 Professional) were set up to record the 19th century St-Pierre-le-Jeune church in Strasbourg, France. Several software solutions (commercial and open source) were used to compare both UAVs' images in terms of calibration, accuracy of external orientation, and dense matching. Results show some instability in the calibration of the Phantom 3, while the Albris had issues with its aerotriangulation results. Despite these shortcomings, both UAVs succeeded in producing dense point clouds accurate to within a few centimeters, which is largely sufficient for the purposes of a city 3D GIS (Geographical Information System). The acquisition of close-range images using UAVs also provides greater LoD (Level of Detail) flexibility in processing. These advantages over other methods such as TLS (Terrestrial Laser Scanning) or terrestrial close-range photogrammetry can be exploited so that these techniques complement each other.


2020, Vol 9 (4), pp. 283
Author(s): Ting Chen, Haiqing He, Dajun Li, Puyang An, Zhenyang Hui

The comprehensive inspection of the geometric structures of revetments along urban rivers by conventional field visual inspection is technically complex and time-consuming. In this study, an approach using dense point clouds derived from low-cost unmanned aerial vehicle (UAV) photogrammetry is proposed to automatically and efficiently recognize signatures of revetment damage. To quickly and accurately recover the finely detailed surface of a revetment, an object-space-based dense matching approach, namely region growing coupled with semi-global matching, is exploited to generate pixel-by-pixel dense point clouds that characterize the signatures of revetment damage. Damage recognition is then conducted using a proposed operator, a self-adaptive and multiscale gradient operator, designed to extract damaged regions of different sizes in the slope intensity image of the revetment. A revetment with slope protection along an urban river is selected to evaluate the performance of damage recognition. Results indicate that the proposed approach can be considered an effective alternative to field visual inspection for revetment damage recognition along urban rivers, because our method not only recovers the finely detailed surface of the revetment but also remarkably improves the accuracy of revetment damage recognition.
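
A minimal sketch of the multiscale-gradient idea only (the paper's operator is self-adaptive; here the scales, gradient threshold and minimum region size are fixed assumptions): smooth the slope intensity image at several scales, take the maximum gradient magnitude across scales, and keep connected regions of high response as candidate damage.

```python
import numpy as np
from scipy import ndimage

def damage_mask(slope_image, sigmas=(1.0, 2.0, 4.0), thresh=0.15, min_size=50):
    """slope_image: 2-D array of slope intensities scaled to [0, 1]."""
    response = np.zeros_like(slope_image, dtype=float)
    for s in sigmas:
        smoothed = ndimage.gaussian_filter(slope_image, sigma=s)
        gy, gx = np.gradient(smoothed)
        response = np.maximum(response, np.hypot(gx, gy))
    mask = response > thresh
    # Keep only connected regions above a minimum size to suppress noise.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep_labels = 1 + np.flatnonzero(sizes >= min_size)
    return np.isin(labels, keep_labels)
```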


2020, Vol 10 (4), pp. 1275
Author(s): Zizhuang Wei, Yao Wang, Hongwei Yi, Yisong Chen, Guoping Wang

Semantic modeling is a challenging task that has received widespread attention in recent years. With the help of mini Unmanned Aerial Vehicles (UAVs), multi-view high-resolution aerial images of large-scale scenes can be conveniently collected. In this paper, we propose a semantic Multi-View Stereo (MVS) method to reconstruct 3D semantic models from 2D images. Firstly, a 2D semantic probability distribution is obtained by a Convolutional Neural Network (CNN). Secondly, the calibrated camera poses are determined by Structure from Motion (SfM), while the depth maps are estimated by a learning-based MVS method. Combining the 2D segmentation and 3D geometry information, dense point clouds with semantic labels are generated by a probability-based semantic fusion method. In the final stage, the coarse 3D semantic point cloud is optimized by both local and global refinements. By making full use of multi-view consistency, the proposed method efficiently produces a fine-level 3D semantic point cloud. The experimental result, evaluated on re-projection maps, achieves 88.4% Pixel Accuracy on the Urban Drone Dataset (UDD). In conclusion, our graph-based semantic fusion procedure and refinement based on local and global information can suppress and reduce the re-projection error.
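
A hedged sketch of a probability-based label fusion step of the kind described above: each 3D point collects the CNN class-probability vectors from the views in which it is visible, and the fused label is the argmax of the combined (log-averaged) distribution. Uniform view weights and the input layout are assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_semantic_labels(view_probs, eps=1e-8):
    """view_probs: (V, N, C) class probabilities for N points seen in V views
    (rows for views where a point is not visible can be left uniform)."""
    log_p = np.log(view_probs + eps)           # avoid log(0)
    fused = log_p.mean(axis=0)                 # product of probabilities, in log space
    fused -= fused.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(fused)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1), probs         # fused labels and distributions
```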


Author(s): Jian Wu, Qingxiong Yang

In this paper, we study the semantic segmentation of 3D LiDAR point cloud data in urban environments for autonomous driving, and propose a method utilizing the surface information of the ground plane. In practice, the resolution of a LiDAR sensor installed in a self-driving vehicle is relatively low, and thus the acquired point cloud is quite sparse. While recent work on dense point cloud segmentation has achieved promising results, its performance drops when applied directly to sparse point clouds. This paper focuses on semantic segmentation of sparse point clouds obtained from a 32-channel LiDAR sensor with deep neural networks. The main contribution is the integration of ground information, which is used to group ground points that lie far apart from each other. Qualitative and quantitative experiments on two large-scale point cloud datasets show that the proposed method outperforms the current state of the art.
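
Illustrative sketch (not the paper's network) of how ground-plane information can be extracted and used to group sparse ground points: a simple RANSAC plane fit, after which points close to the plane are tagged as ground and can be treated as one class regardless of how far apart they lie. Iteration count and distance threshold are assumed values.

```python
import numpy as np

def ransac_ground_plane(points, n_iter=200, dist_thresh=0.2, seed=0):
    """points: (N, 3) LiDAR points; returns (plane (a, b, c, d), ground mask)."""
    rng = np.random.default_rng(seed)
    best_plane, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                            # degenerate (collinear) sample
        normal /= norm
        d = -normal @ p1
        dist = np.abs(points @ normal + d)      # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_plane, best_inliers = np.append(normal, d), inliers
    return best_plane, best_inliers
```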


2016, Vol 3 (4), pp. 322-329
Author(s): Akisato Chida, Hiroshi Masuda

Abstract. The advent of high-performance terrestrial laser scanners has made it possible to capture dense point clouds of engineering facilities. 3D shape acquisition from engineering facilities is useful for supporting maintenance and repair tasks. In this paper, we discuss methods to reconstruct box shapes and polygonal prisms from large-scale point clouds. Since many faces may be partly occluded by other objects in engineering plants, we estimate possible box shapes and polygonal prisms and verify their compatibility with the measured point clouds. We evaluate our method using actual point clouds of engineering plants. Highlights: This paper proposes a point-based reconstruction method for boxes and polygonal prisms in engineering plants. Many faces may be partly occluded by other objects in engineering plants. In our method, possible shapes are estimated and then verified for compatibility with the measured point clouds. In our experiments, our method achieved high precision and recall rates.
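
A rough sketch of the estimate-then-verify idea for box shapes (the tolerance and acceptance ratio are assumptions, not the paper's values): fit an oriented box to a point cluster with PCA, then check compatibility by the fraction of points lying close to one of the six box faces.

```python
import numpy as np

def fit_and_verify_box(points, face_tol=0.02, accept_ratio=0.8):
    """points: (N, 3) cluster assumed to sample the surface of one box."""
    centroid = points.mean(axis=0)
    _, _, axes = np.linalg.svd(points - centroid, full_matrices=False)
    local = (points - centroid) @ axes.T        # points in box-aligned coordinates
    half = np.abs(local).max(axis=0)            # half-extents along the principal axes
    # Distance of each point to the nearest of the six candidate faces.
    face_dist = np.min(half - np.abs(local), axis=1)
    support = np.mean(face_dist < face_tol)     # fraction of points supporting a face
    return {"center": centroid, "axes": axes, "half_extents": half,
            "support_ratio": support, "accepted": support >= accept_ratio}
```

With partly occluded faces, the support ratio stays meaningful because only the observed points are asked to lie on the estimated faces.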

