Automatic Workflow for Roof Extraction and Generation of 3D CityGML Models from Low-Cost UAV Image-Derived Point Clouds

2020, Vol 9 (12), pp. 743
Author(s): Arnadi Murtiyoso, Mirza Veriandi, Deni Suwardhi, Budhy Soeksmantono, Agung Budi Harto

Developments in UAV sensors and platforms in recent decades have stimulated an upsurge in their application for 3D mapping. The relatively low cost of UAVs, combined with revolutionary photogrammetric algorithms such as dense image matching, has made them a strong competitor to aerial lidar mapping. However, in the context of 3D city mapping, further 3D modeling is required to generate 3D city models, a task often performed manually using, e.g., photogrammetric stereoplotting. The aim of this paper is to implement an algorithmic approach to building point cloud segmentation, from which an automated workflow for the generation of roof planes is also presented. 3D models of buildings are then created using the roof planes as a base, thereby satisfying the requirements for Level of Detail (LoD) 2 in the CityGML paradigm. The paper thus attempts to create an automated workflow from UAV-derived point clouds to LoD2-compatible 3D models. Results show that the rule-based segmentation approach presented in this paper works well, with the additional advantages of instance segmentation and automatic semantic attribute annotation, while the 3D modeling algorithm performs well for roofs of low to medium complexity. The proposed workflow can therefore be applied to simple roofs with a relatively low number of planar surfaces. Furthermore, the automated approach to the 3D modeling process also helps to maintain the geometric requirements of CityGML, such as 3D polygon coplanarity, compared with manual stereoplotting.
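The abstract does not reproduce the authors' rule-based segmentation, but the underlying step of extracting planar roof segments from a point cloud can be illustrated with a minimal sketch using Open3D's RANSAC plane fitting. The file name, thresholds, and peel-off loop below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: iteratively peel planar roof segments from a building point
# cloud with RANSAC. Input file and thresholds are illustrative assumptions.
import open3d as o3d

cloud = o3d.io.read_point_cloud("building_roof.ply")  # hypothetical input file

planes = []
remaining = cloud
while len(remaining.points) > 500:                     # assumed stopping cutoff
    model, inliers = remaining.segment_plane(distance_threshold=0.05,  # 5 cm tolerance
                                             ransac_n=3,
                                             num_iterations=1000)
    if len(inliers) < 500:                             # reject planes with little support
        break
    planes.append((model, remaining.select_by_index(inliers)))
    remaining = remaining.select_by_index(inliers, invert=True)

for a, b, c, d in (m for m, _ in planes):
    print(f"plane: {a:.3f}x + {b:.3f}y + {c:.3f}z + {d:.3f} = 0")
```

Each extracted plane (and its inlier points) could then serve as the base geometry for one LoD2 roof surface.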

2020, Vol 12 (16), pp. 2624
Author(s): Matias Ingman, Juho-Pekka Virtanen, Matti T. Vaaja, Hannu Hyyppä

The automated 3D modeling of indoor spaces is a rapidly advancing field, in which recent developments have made the modeling process more accessible to consumers by lowering the cost of instruments and offering highly automated services for 3D model creation. We compared the performance of three low-cost sensor systems: an RGB-D camera, a low-end terrestrial laser scanner (TLS), and a panoramic camera, using a cloud-based processing service to automatically create mesh models and point clouds, and evaluating the accuracy of the results against a reference point cloud from a higher-end TLS. While adequately accurate results could be obtained with all three sensor systems, the TLS performed best both in reconstructing the overall room geometry and in capturing smaller details, with the panoramic camera clearly trailing the other systems and the RGB-D camera offering a middle ground in terms of both cost and quality. The results demonstrate the attractiveness of fully automatic cloud-based indoor 3D modeling for low-cost sensor systems, with the latter providing better model accuracy and completeness, and with all systems offering a rapid rate of data acquisition through an easy-to-use interface.


2010, Vol 25 (129), pp. 5-23
Author(s): Tarek M. Awwad, Qing Zhu, Zhiqiang Du, Yeting Zhang

Author(s): B. Sirmacek, R. Lindenbergh

Low-cost sensor-generated 3D models can be useful for quick 3D urban model updating, yet the quality of such models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register the automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages, and limitations of iPhone-generated point clouds. For the chosen example showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean point-to-point distance to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ₁ = 0.44 m, σ₁ = 0.071 m) and (μ₂ = 0.025 m, σ₂ = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate the potential of the proposed automatic 3D model generation framework for 3D urban map updating, fusion, detail enhancement, and quick or real-time change detection. However, further insight is needed into the circumstances required to guarantee successful point cloud generation from smartphone images.
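The roughness metric used for the histograms above is not fully specified in the abstract. A common convention, assumed in the sketch below, is each point's distance to the plane fitted to its k nearest neighbours; the neighbourhood size and input arrays are placeholders.

```python
# Sketch of a local roughness measure: distance of each point to the best-fit
# plane of its k nearest neighbours. This convention is an assumption; the
# authors' exact definition is not given in the abstract.
import numpy as np
from scipy.spatial import cKDTree

def local_roughness(points: np.ndarray, k: int = 20) -> np.ndarray:
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    rough = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nb = points[nbrs]
        centroid = nb.mean(axis=0)
        # Smallest singular vector of the centred neighbourhood = plane normal
        _, _, vt = np.linalg.svd(nb - centroid)
        normal = vt[-1]
        rough[i] = abs(np.dot(points[i] - centroid, normal))
    return rough

# Usage with two hypothetical N x 3 coordinate arrays:
# print(local_roughness(iphone_xyz).mean(), local_roughness(tls_xyz).mean())
```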


Author(s): Ismail Elkhrachy

This paper analyses and evaluates the precision, accuracy, and capability of low-cost terrestrial photogrammetry using several digital cameras to construct a 3D model of an object. To this end, a building façade was imaged with two inexpensive digital cameras (a Canon and a Pentax). Bundle adjustment and image processing were carried out with Agisoft PhotoScan software. Several factors are considered in this study, including different cameras and control points. Several photogrammetric point clouds are generated, and their accuracy is compared against natural control points of the same building collected with a laser total station. Cloud-to-cloud distances are computed between the 3D models to investigate the different variables. The field experiment showed that the spatial positioning accuracy of the investigated technique was between 2 and 4 cm in the 3D coordinates of the façade. This accuracy is promising, since the captured images were processed without any control points.
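The cloud-to-cloud comparison mentioned above can be sketched as nearest-neighbour distances from the photogrammetric cloud to a reference cloud. The file names and summary statistics below are illustrative assumptions, not the author's exact procedure.

```python
# Sketch: nearest-neighbour "cloud-to-cloud" distances between a photogrammetric
# point cloud and a reference cloud, plus simple summary statistics.
# File names are hypothetical placeholders.
import numpy as np
import open3d as o3d

photo = o3d.io.read_point_cloud("facade_photogrammetry.ply")  # hypothetical
reference = o3d.io.read_point_cloud("facade_reference.ply")   # hypothetical

d = np.asarray(photo.compute_point_cloud_distance(reference))  # per-point distances [m]
print(f"mean = {d.mean():.3f} m, RMS = {np.sqrt((d**2).mean()):.3f} m, "
      f"95th percentile = {np.percentile(d, 95):.3f} m")
```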


Author(s): L. Barazzetti, M. Previtali, F. Roncoroni

360° cameras capture the whole scene around a photographer in a single shot, and cheap 360° cameras represent a new paradigm in photogrammetry. The camera can be pointed in any direction, and the large field of view reduces the number of photographs required. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which costs about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and to laser scanning point clouds. The paper summarizes some practical rules for image acquisition as well as the importance of ground control points in removing possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (capturing the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera can be a better choice than a project based on central perspective cameras. In particular, 360° cameras are very useful in the survey of long and narrow spaces, as well as interior areas such as small rooms.
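For readers unfamiliar with how a single equirectangular 360° image covers the full sphere, the short sketch below converts a pixel to a unit viewing ray using the standard spherical mapping. It is a generic illustration, not the calibration model used in the paper, and the image size is an assumed example.

```python
# Sketch: map an equirectangular pixel (u, v) of a W x H 360-degree image to a
# unit viewing ray. Standard spherical mapping; not the paper's camera model.
import math

def pixel_to_ray(u: float, v: float, width: int, height: int):
    lon = (u / width - 0.5) * 2.0 * math.pi       # longitude in [-pi, pi]
    lat = (0.5 - v / height) * math.pi            # latitude in [-pi/2, pi/2]
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z

# Image centre maps to the forward direction (0, 0, 1); 5760 x 2880 is an assumed size.
print(pixel_to_ray(0.5 * 5760, 0.5 * 2880, 5760, 2880))
```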


Sensors, 2019, Vol 19 (18), pp. 3952
Author(s): *, *

Three-dimensional (3D) models are widely used in clinical applications, geosciences, cultural heritage preservation, and engineering; this, together with new emerging needs such as building information modeling (BIM), drives the development of data capture techniques and devices that are low cost, have a reduced learning curve, and allow non-specialized users to employ them. This paper presents a simple, self-assembly device for 3D point cloud data capture with an estimated base price under €2500; furthermore, a workflow for the calculations is described, which includes a Visual SLAM-photogrammetric threaded algorithm implemented in C++. Another purpose of this work is to validate the proposed system in BIM working environments. To this end, several 3D point clouds were obtained in outdoor tests and the coordinates of 40 points were measured with this device, with data capture distances ranging between 5 and 20 m. These coordinates were then compared with the coordinates of the same targets measured by a total station. The average Euclidean distance errors and root mean square errors (RMSEs) ranged between 12–46 mm and 8–33 mm respectively, depending on the data capture distance (5–20 m). Furthermore, the proposed system was compared with a commonly used photogrammetric methodology based on Agisoft Metashape software. The results obtained demonstrate that the proposed system satisfies (in each case) the tolerances of 'level 1' (51 mm) and 'level 2' (13 mm) for point cloud acquisition in urban design and historic documentation, according to the BIM Guide for 3D Imaging (U.S. General Services Administration).
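The accuracy figures quoted above reduce to Euclidean distance errors and per-axis RMSEs between device-derived and total-station coordinates. A minimal sketch of that computation is given below; the input files are hypothetical placeholders.

```python
# Sketch: Euclidean distance errors and per-axis RMSE between target coordinates
# measured by the device and by a total station. File names are placeholders.
import numpy as np

device_xyz = np.loadtxt("targets_device.txt")        # hypothetical: N x 3 [m]
total_station_xyz = np.loadtxt("targets_ts.txt")     # hypothetical: N x 3 [m]

diff = device_xyz - total_station_xyz
euclidean = np.linalg.norm(diff, axis=1)
rmse_xyz = np.sqrt((diff ** 2).mean(axis=0))

print(f"mean Euclidean error = {euclidean.mean() * 1000:.1f} mm")
print("RMSE (x, y, z) [mm]:", np.round(rmse_xyz * 1000, 1))
```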


Author(s): M. Zacharek, P. Delis, M. Kedzierski, A. Fryskowska

These studies were conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. To process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were used: OSM Bundler, VisualSFM, and the web application ARC3D. Images obtained for each of the investigated objects were processed using these applications, and dense point clouds and textured 3D models were then created. During post-processing, the obtained models were filtered and scaled. The research showed that, even using open-source software, it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage such accuracy can be insufficient.


Author(s): N. Mostofi, A. Moussa, M. Elhabiby, N. El-Sheimy

3D models of indoor environments provide rich information that can facilitate the disambiguation of different places and speed up the familiarization of remote users with an indoor environment. In this research work, we describe a system for visual odometry and 3D modeling using information from an RGB-D sensor (camera). The visual odometry method estimates the relative pose of consecutive RGB-D frames through feature extraction and matching techniques. The pose estimated by the visual odometry algorithm is then refined with the iterative closest point (ICP) method. A switching technique between ICP and visual odometry, applied when no features are visible, suppresses inconsistency in the final map. Finally, we add loop closure to remove the deviation between the first and last frames. In order to derive semantic meaning from the 3D models, planar patches are segmented from the RGB-D point cloud data using a region-growing technique, followed by a convex hull method to assign boundaries to the extracted patches. To build the final semantic 3D model, the segmented patches are merged using the relative pose information obtained in the first step.
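The ICP refinement step described above can be sketched with Open3D's point-to-plane ICP, seeded with the pose from visual odometry. The file names, correspondence threshold, and normal-estimation parameters are assumptions for illustration, not the authors' implementation.

```python
# Sketch: refine a visual-odometry pose between two RGB-D frames with
# point-to-plane ICP. Thresholds and file names are assumed values.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("frame_k.ply")      # hypothetical RGB-D frame k
target = o3d.io.read_point_cloud("frame_k1.ply")     # hypothetical frame k+1

for pc in (source, target):
    pc.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

T_vo = np.eye(4)  # pose from the visual-odometry step (identity as placeholder)

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,                 # 5 cm, assumed
    init=T_vo,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("refined pose:\n", result.transformation)
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
```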


Author(s): M. Kedzierski, D. Wierzbickia, A. Fryskowska, B. Chlebowska

The laser scanning technique remains a very popular and fast-growing method of obtaining information for modeling 3D objects. The use of low-cost miniature scanners creates new opportunities for the 3D modeling of small objects based on point clouds acquired from the scan. At the same time, progress in the accuracy and in the methods of automatic processing of this data type is noticeable. The article presents methods of collecting raw datasets in the form of a point cloud using the low-cost ground-based laser scanner FabScan. As part of the research work, a 3D scanner from the open-source FabLab project was constructed. In addition, the results of an analysis of the geometry of the point clouds obtained with the low-cost laser scanner are presented. Data collection was also analyzed for objects of different structures (made of various materials such as glass, wood, paper, rubber, plastic, plaster, ceramics, stoneware clay, etc., and of different shapes: oval, near-oval, and prism-shaped). The article presents two methods used for the analysis: the first, visual (a general comparison between the 3D model and the real object), and the second, comparative (a comparison between measurements on the models and on the scanned objects using the mean error of a single sample of observations). The analysis showed that the low-budget ground-based laser scanner FabScan has difficulties collecting data on non-oval objects. Items made of glass painted black also caused problems for the scanner. In addition, the more detail the scanned object contains, the lower the accuracy of the collected point cloud. Nevertheless, the accuracy of the collected data for oval, simply shaped objects is satisfactory, fluctuating between ±0.4 mm and ±1.0 mm, whereas for more detailed objects or a rectangular prism the accuracy is much lower, between ±2.9 mm and ±9.0 mm. Finally, the publication presents the possibility, as a future expansion of the research, of modernizing FabScan by adding a larger number of camera-laser units, which will enable the registration of spots that are less visible.
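The "mean error of a single sample of observations" quoted above is conventionally computed as m = sqrt(Σv²/(n−1)), where v are the differences between measurements on the model and on the real object. A minimal sketch under that assumption, with placeholder residuals, is shown below.

```python
# Sketch: mean error of a single observation, m = sqrt(sum(v_i^2) / (n - 1)),
# where v_i are model-vs-object measurement differences. The residual values
# below are placeholders, not the paper's data.
import math

residuals_mm = [0.3, -0.5, 0.2, 0.8, -0.4, 0.1]       # hypothetical v_i in mm
n = len(residuals_mm)
m = math.sqrt(sum(v * v for v in residuals_mm) / (n - 1))
print(f"mean error of a single observation: +/- {m:.2f} mm")
```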


Author(s): F. He, A. Habib, A. Al-Rawabdeh

In this paper, we propose a new refinement procedure for semi-global dense image matching. In order to remove outliers and improve the disparity image derived from the semi-global algorithm, both a local smoothness constraint and point cloud segments are utilized. Compared with current refinement techniques, which usually assume correspondences between planar surfaces and 2D image segments, our proposed approach can effectively deal with objects having both planar and curved surfaces. Meanwhile, since 3D point clouds contain more precise geometric information about the reconstructed objects, the planar surfaces identified by our approach can be more accurate. To illustrate the feasibility of the approach, several experimental tests were conducted on both the Middlebury benchmark and real UAV image datasets. The results demonstrate that our approach performs well in improving the quality of the derived dense image-based point cloud.
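As a rough illustration of segment-based disparity refinement, the sketch below fits a plane d = a·x + b·y + c to the disparities inside each segment and snaps strong outliers back to the fitted value. This generic scheme covers only the planar case; it is not the authors' full procedure, and the segment labels and tolerance are assumed inputs.

```python
# Sketch: segment-wise disparity refinement. For each (planar) segment, fit
# d = a*x + b*y + c by least squares and replace outlier disparities with the
# plane prediction. Generic illustration; the paper also handles curved surfaces.
import numpy as np

def refine_disparity(disparity: np.ndarray, labels: np.ndarray,
                     outlier_tol: float = 2.0) -> np.ndarray:
    refined = disparity.astype(float)                  # work in float
    ys, xs = np.indices(disparity.shape)
    for seg_id in np.unique(labels):
        if seg_id < 0:                                 # assume negative label = unsegmented
            continue
        mask = labels == seg_id
        A = np.column_stack([xs[mask], ys[mask], np.ones(mask.sum())])
        coeffs, *_ = np.linalg.lstsq(A, disparity[mask], rcond=None)
        fitted = A @ coeffs
        outliers = np.abs(disparity[mask] - fitted) > outlier_tol
        vals = refined[mask]
        vals[outliers] = fitted[outliers]
        refined[mask] = vals
    return refined
```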

