Building Facade Reconstruction Using Crowd-Sourced Photos and Two-Dimensional Maps

2020, Vol. 86 (11), pp. 677-694
Author(s): Jie Wu, Junya Mao, Song Chen, Gesang Zhuoma, Liang Cheng, ...

To address the high cost of current three-dimensional (3D) reconstruction of urban buildings, this paper proposes a new technical framework for generating 3D building facade information from crowd-sourced photos and two-dimensional (2D) building vector data. The crowd-sourced photos consisted mainly of Tencent street view images and other photos collected from three platforms: search engines, social media, and mobile phones. The photos were first selected and grouped, a structure-from-motion algorithm was then used for 3D reconstruction, and finally the reconstructed point clouds were registered with the 2D building vector data. A test implementation in the Jianye District of Nanjing, China, showed that the generated point clouds fit the true values well. The proposed 3D reconstruction method is a multi-source data integration process, and its main advantage lies in the open-source, low-cost data it uses.
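The abstract does not detail how the SfM point clouds are registered to the 2D vector data, so the sketch below only illustrates one plausible final step: estimating a 2D similarity transform (Umeyama closed form) that maps facade corner points projected from the SfM cloud onto the matching corners of the building footprint. The corner coordinates, scale, and units are hypothetical.

```python
import numpy as np

def similarity_transform_2d(src, dst):
    """Estimate scale s, rotation R (2x2) and translation t mapping src -> dst
    in a least-squares sense (Umeyama closed form)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(2)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against reflections
        S[1, 1] = -1
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical corner points: facade corners from the SfM cloud (projected to XY,
# arbitrary SfM frame) and the matching corners from the 2D building vector map.
sfm_corners = np.array([[0.0, 0.0], [4.1, 0.2], [4.3, 2.9], [0.2, 2.7]])
map_corners = np.array([[100.0, 200.0], [120.5, 201.0], [121.5, 214.5], [101.0, 213.5]])

s, R, t = similarity_transform_2d(sfm_corners, map_corners)
aligned = (s * (R @ sfm_corners.T)).T + t
rmse = np.sqrt(np.mean(np.sum((aligned - map_corners) ** 2, axis=1)))
print(f"scale = {s:.3f}, planimetric RMSE = {rmse:.2f} m")
```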

2019, Vol. 93 (3), pp. 411-429
Author(s): Maria Immacolata Marzulli, Pasi Raumonen, Roberto Greco, Manuela Persia, Patrizia Tartarino

Abstract: Methods for the three-dimensional (3D) reconstruction of forest trees have been suggested for data from both active and passive sensors. Laser scanner technologies have become popular in recent years, despite their high costs. With the improvements in photogrammetric algorithms (e.g. structure from motion, SfM), photographs have become a new low-cost source of 3D point clouds. In this study, we use images captured by a smartphone camera to calculate dense point clouds of a forest plot using SfM. Eighteen point clouds were produced by varying the densification parameters (Image scale, Point density, Minimum number of matches) in order to investigate their influence on the quality of the resulting point clouds. To estimate diameter at breast height (d.b.h.) and stem volumes, we developed an automatic method that extracts the stems from the point cloud and then models them with cylinders. The results show that Image scale is the most influential parameter for identifying and extracting trees from the point clouds. The best cylinder-modelling performance relative to field data had an RMSE of 1.9 cm for d.b.h. and 0.094 m³ for volume. Thus, for forest management and planning purposes, our photogrammetric and modelling methods can be used to measure d.b.h., stem volume and possibly other forest inventory metrics rapidly and without felling trees. The proposed methodology significantly reduces working time in the field, using ‘non-professional’ instruments and automating the estimation of dendrometric parameters.
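As a rough illustration of how a d.b.h. estimate can be derived from a stem point cloud, the snippet below fits a circle to a thin horizontal slice of points around breast height using an algebraic (Kasa) least-squares fit. It is a simplified stand-in for the authors' cylinder-based stem modelling, and the slice data are simulated.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to 2D points.
    Returns centre (cx, cy) and radius r."""
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2.0, sol[1] / 2.0
    r = np.sqrt(sol[2] + cx ** 2 + cy ** 2)
    return cx, cy, r

# Hypothetical stem slice: points within a few centimetres of breast height (1.30 m),
# simulated here as a noisy circle of radius 0.14 m.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
slice_xy = (np.column_stack([0.14 * np.cos(theta), 0.14 * np.sin(theta)])
            + rng.normal(0, 0.004, (200, 2)))

cx, cy, r = fit_circle(slice_xy)
print(f"estimated d.b.h. = {2 * r * 100:.1f} cm")
```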


Author(s): T. Guo, A. Capra, M. Troyer, A. Gruen, A. J. Brooks, ...

Recent advances in the automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater, utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-medium cases. This study is part of a larger project on 3D measurements of the temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated from images acquired with a system camera mounted in an underwater housing and with the popular GoPro cameras, respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and to quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) of various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the same objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
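A common way to express the relative accuracy of one point cloud against a reference cloud of the same object is the distribution of nearest-neighbour distances between the two clouds. The sketch below shows such a cloud-to-cloud comparison with SciPy; the clouds are synthetic stand-ins, and the study's actual evaluation relied on a measured calibration frame.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_stats(test_pts, ref_pts):
    """Nearest-neighbour distances from each test point to the reference cloud."""
    tree = cKDTree(ref_pts)
    d, _ = tree.query(test_pts, k=1)
    return {"mean": d.mean(), "rms": np.sqrt((d ** 2).mean()), "p95": np.percentile(d, 95)}

# Hypothetical clouds (metres): reference from the system camera in air,
# test from a GoPro reconstruction of the same cinder block, already co-registered.
rng = np.random.default_rng(1)
ref = rng.uniform(0, 0.4, (5000, 3))                   # stand-in for the reference cloud
test = ref[:3000] + rng.normal(0, 0.002, (3000, 3))    # stand-in for the test cloud

stats = cloud_to_cloud_stats(test, ref)
print({k: f"{v * 1000:.2f} mm" for k, v in stats.items()})
```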


2018, Vol. 10 (12), pp. 1869
Author(s): Nicolás Corti Meneses, Florian Brunner, Simon Baier, Juergen Geist, Thomas Schneider

Quantification of reed coverage and vegetation status is fundamental for monitoring lakes and developing conservation strategies. This study investigated the applicability of three-dimensional data (point clouds) from Unmanned Aerial Vehicles (UAV) for status evaluation, focusing on mapping the extent, density, and vegetation status of aquatic reed beds. Point clouds were computed with Structure from Motion (SfM) algorithms from aerial imagery recorded by Rotary Wing (RW) and Fixed Wing (FW) UAVs. Extent was quantified by measuring the surface between the frontline and the shoreline. Density classification was based on point geometry (height and height variance) in the point clouds. Spectral information per point was used to calculate a vegetation index, which served as an indicator of vegetation vitality. A status map was obtained by combining the density, vitality, and frontline-shape outputs. Field observations in areas of interest (AOI) and optical imagery were used for reference and validation. For the extent map, a root mean square error (RMSE) of 1.58 m to 3.62 m was achieved between cross sections from field measurements and the classification. The overall accuracy (OA) of the density classification was 88.6% (Kappa = 0.8). An OA of 83.3% (Kappa = 0.7) was reached for the status classification by comparison with field measurements complemented by visual assessments of secondary Red, Green, Blue (RGB) data. The research shows that complex transitional zones (water–vegetation–land) can be assessed, supports the suitability of the applied method, and provides new strategies for monitoring aquatic reed beds using low-cost UAV imagery.
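The density classification described above relies on per-location height and height variance. The toy example below reproduces that idea on a synthetic SfM cloud: points are binned into ground cells, per-cell height statistics are computed, and simple thresholds assign a density class. The thresholds, cell size, and data are purely illustrative, not the paper's values.

```python
import numpy as np

def per_cell_height_stats(points, cell=1.0):
    """Bin points (x, y, z) into square ground cells and return the mean height
    and height variance of each occupied cell."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    cells = {}
    for key, z in zip(map(tuple, ij), points[:, 2]):
        cells.setdefault(key, []).append(z)
    return {k: (np.mean(v), np.var(v)) for k, v in cells.items()}

def classify_density(mean_h, var_h, h_min=0.5, var_max=0.15):
    """Toy rule: tall, homogeneous canopy -> 'dense'; tall but variable -> 'sparse';
    low points -> 'no reed'. Thresholds are illustrative only."""
    if mean_h < h_min:
        return "no reed"
    return "dense" if var_h < var_max else "sparse"

# Hypothetical SfM point cloud (x, y in metres; z = height above the water surface).
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(0, 10, 5000),
                       rng.uniform(0, 10, 5000),
                       rng.gamma(2.0, 0.6, 5000)])

labels = {cell: classify_density(*s) for cell, s in per_cell_height_stats(pts).items()}
print(list(labels.items())[:5])
```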


2019, Vol. 11 (10), pp. 1204
Author(s): Yue Pan, Yiqing Dong, Dalei Wang, Airong Chen, Zhen Ye

Three-dimensional (3D) digital technology is essential to the maintenance and monitoring of cultural heritage sites. In the field of bridge engineering, 3D models generated from point clouds of existing bridges are drawing increasing attention. Currently, the widespread use of unmanned aerial vehicles (UAV) provides a practical solution for generating 3D point clouds as well as models, which can drastically reduce the manual effort and cost involved. In this study, we present a semi-automated framework for generating structural surface models of heritage bridges. Specifically, we propose to tackle this challenge via a novel top-down method for segmenting main bridge components, combined with rule-based classification, to produce labeled 3D models from UAV photogrammetric point clouds. The point clouds of the heritage bridge are generated from the captured UAV images through the structure-from-motion workflow. A segmentation method is developed based on the supervoxel structure and global graph optimization, which can effectively separate bridge components based on geometric features. Then, recognition based on a classification tree and bridge geometry is used to identify the different structural elements among the obtained segments. Finally, surface modeling is conducted to generate surface models of the recognized elements. Experiments on two bridges in China demonstrate the potential of the presented structural model reconstruction method using UAV photogrammetry and point cloud processing for the 3D digital documentation of heritage bridges. When markers are used, the reconstruction error of the point clouds can be as small as 0.4%. Moreover, the precision and recall of the segmentation results on test data both exceed 0.8, and a recognition accuracy better than 0.8 is achieved.
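The paper's pipeline combines supervoxel segmentation, global graph optimization, and a classification tree; the snippet below is only a much-simplified illustration of the rule-based recognition idea, labelling already-obtained segments from coarse geometric attributes. The segment data, attribute set, and thresholds are hypothetical.

```python
import numpy as np

def segment_features(seg):
    """Simple geometric attributes of one segment (Nx3 point array)."""
    mins, maxs = seg.min(axis=0), seg.max(axis=0)
    dx, dy, dz = maxs - mins
    return {"dx": dx, "dy": dy, "dz": dz, "z_mid": (mins[2] + maxs[2]) / 2}

def classify_segment(f, deck_level=8.0):
    """Toy decision rules (thresholds are illustrative, not the paper's):
    long, flat segments near deck level -> 'deck'; tall, slender -> 'pier'."""
    if f["dz"] < 1.5 and max(f["dx"], f["dy"]) > 10.0 and abs(f["z_mid"] - deck_level) < 2.0:
        return "deck"
    if f["dz"] > 4.0 and max(f["dx"], f["dy"]) < 3.0:
        return "pier"
    return "other"

# Hypothetical segments already produced by a segmentation step (coordinates in metres).
rng = np.random.default_rng(3)
deck = rng.uniform([0, 0, 7.5], [30, 6, 8.5], (2000, 3))
pier = rng.uniform([12, 2, 0], [14, 4, 7.5], (1500, 3))
for name, seg in [("segment A", deck), ("segment B", pier)]:
    print(name, "->", classify_segment(segment_features(seg)))
```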


Geosphere, 2019, Vol. 15 (4), pp. 1393-1408
Author(s): Reuben J. Hansman, Uwe Ring

Abstract: Geological field mapping is a vital first step in understanding geological processes. During the 20th century, mapping was revolutionized through advances in remote sensing technology. With the recent availability of low-cost remotely piloted aircraft (RPA), field geologists now routinely carry out aerial imaging without the need to use satellite, helicopter, or airplane systems. RPA photographs are processed by photo-based three-dimensional (3-D) reconstruction software, which uses structure-from-motion and multi-view stereo algorithms to create an ultra-high-resolution, 3-D point cloud of a region or target outcrop. These point clouds are analyzed to extract the orientation of geological structures and strata, and are also used to create digital elevation models and photorealistic 3-D models. However, this technique has only recently been used for structural mapping. Here, we outline a workflow starting with RPA data acquisition, followed by photo-based 3-D reconstruction, and ending with a 3-D geological model. The Jabal Hafit anticline in the United Arab Emirates was selected to demonstrate this workflow. At this anticline, outcrop exposure is excellent and the terrain is challenging to navigate due to areas of high relief. This makes for an ideal RPA mapping site and provides a good indication of how practical this method may be for the field geologist. Results confirm that RPA photo-based 3-D reconstruction mapping is an accurate and cost-efficient remote sensing method for geological mapping.
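Extracting the orientation of strata from a photogrammetric point cloud typically reduces to fitting a plane to a selected patch of points and converting its normal to dip direction and dip. The sketch below shows that computation with a PCA/SVD plane fit; the bedding patch is simulated, and none of the paper's software or coordinate-frame details are assumed.

```python
import numpy as np

def plane_orientation(points):
    """Fit a plane to points (Nx3, x = East, y = North, z = up) by SVD and
    return (dip_direction_deg, dip_deg) of the best-fit plane."""
    c = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    n = vt[-1]                     # normal = singular vector of the smallest singular value
    if n[2] < 0:                   # make the normal point upward
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0   # azimuth of the normal's horizontal projection
    return dip_dir, dip

# Hypothetical bedding-surface patch dipping about 30 degrees toward the east (090).
rng = np.random.default_rng(4)
xy = rng.uniform(0, 20, (500, 2))
z = -np.tan(np.radians(30)) * xy[:, 0] + rng.normal(0, 0.05, 500)
ddir, dip = plane_orientation(np.column_stack([xy, z]))
print(f"dip direction {ddir:.0f} deg, dip {dip:.0f} deg")
```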


Author(s): J. Chen, O. E. Mora, K. C. Clarke

Abstract: In recent years, growing public interest in three-dimensional technology has led to the emergence of affordable platforms that can capture 3D scenes for use in a wide range of consumer applications. These platforms are often widely available, inexpensive, and can potentially find dual use in taking measurements of indoor spaces for creating indoor maps. Their affordability, however, usually comes at the cost of reduced accuracy and precision, which becomes more apparent when these instruments are pushed to their limits to scan an entire room. The point cloud measurements they produce often exhibit systematic drift and random noise that can make comparisons with accurate data difficult, akin to trying to compare a fuzzy trapezoid to a perfect square with sharp edges. This paper outlines a process for assessing the accuracy and precision of these imperfect point clouds in the context of indoor mapping by integrating techniques such as the extended Gaussian image, iterative closest point registration, and histogram thresholding. A case study is provided at the end to demonstrate the use of this process for evaluating the performance of the Scanse Sweep 3D, an ultra-low-cost panoramic laser scanner.
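Of the techniques named above, iterative closest point (ICP) registration is the one most easily sketched in a few lines. The following minimal point-to-point ICP (nearest-neighbour correspondences plus a Kabsch/SVD rigid fit) is an illustrative implementation, not the paper's evaluation pipeline; the room-scan data are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP: align the scan 'src' to the reference 'dst'."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur, k=1)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Hypothetical data: a noisy, rotated and shifted copy of a reference room scan.
rng = np.random.default_rng(5)
ref = rng.uniform(0, 3, (2000, 3))
ang = np.radians(5)
Rz = np.array([[np.cos(ang), -np.sin(ang), 0], [np.sin(ang), np.cos(ang), 0], [0, 0, 1]])
scan = ref @ Rz.T + np.array([0.1, -0.05, 0.0]) + rng.normal(0, 0.005, ref.shape)

aligned = icp(scan, ref)
print("RMS after ICP:", np.sqrt(((aligned - ref) ** 2).sum(axis=1).mean()))
```

A point-to-plane variant usually converges faster on planar indoor scenes, but the point-to-point form keeps the sketch short.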


Author(s): Ismail Elkhrachy

This paper analyses and evaluates the precision and accuracy achievable with low-cost terrestrial photogrammetry when several digital cameras are used to construct a 3D model of an object. To this end, a building façade was imaged with two inexpensive digital cameras (a Canon and a Pentax). Bundle adjustment and image processing were performed with Agisoft PhotoScan software. Several factors were considered in the study, including the different cameras and the control points. Several photogrammetric point clouds were generated, and their accuracy was compared against natural control points on the same building measured with a laser total station. Cloud-to-cloud distances were computed between the different 3D models to investigate the different variables. The field experiment showed that the spatial positioning accuracy of the investigated technique was between 2 and 4 cm in the 3D coordinates of the façade. This accuracy is encouraging, since the captured images were processed without any control points.
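Reporting an accuracy of 2 to 4 cm amounts to computing residuals between photogrammetric coordinates and total-station check-point coordinates. The short example below shows that computation on hypothetical check points; all coordinate values are invented for illustration.

```python
import numpy as np

# Hypothetical check points (metres): total-station reference vs. the same natural
# points measured on the photogrammetric point cloud, already in one datum.
reference = np.array([[10.000, 5.000, 2.000],
                      [12.500, 5.100, 3.800],
                      [15.100, 4.950, 2.100],
                      [17.600, 5.050, 4.200]])
photogrammetric = reference + np.array([[ 0.021, -0.014,  0.030],
                                        [-0.018,  0.025,  0.022],
                                        [ 0.027,  0.011, -0.019],
                                        [-0.012, -0.020,  0.028]])

residuals = photogrammetric - reference
rmse_xyz = np.sqrt((residuals ** 2).mean(axis=0))        # per-axis RMSE
rmse_3d = np.sqrt((residuals ** 2).sum(axis=1).mean())   # 3D RMSE per check point
print("RMSE X/Y/Z [m]:", np.round(rmse_xyz, 3), " 3D RMSE [m]:", round(float(rmse_3d), 3))
```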


Author(s): Zihan Liu, Guanghong Gong, Ni Li, Zihao Yu

Three-dimensional (3D) reconstruction of a human head with high precision has promising applications in scientific research, product design and other fields. However, it is still hindered by two factors. One is inaccurate registration caused by the symmetrical distribution of head feature points, and the other is the economic burden of high-accuracy sensors. Research on 3D reconstruction with portable consumer RGB-D sensors such as the Microsoft Kinect has attracted attention in recent years. Based on our multi-Kinect system, a precise and low-cost three-dimensional modeling method and its system implementation are introduced in this paper. A registration method for multi-source point clouds is provided, which can reduce fusion differences and reconstruct the head model accurately. In addition, a template-based texture generation algorithm is presented to generate a fine texture. The comparison and analysis of our experiments show that our method can reconstruct a head model in acceptable time, with lower memory use and better results.
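Once the extrinsics between the Kinects are known, fusing the individual depth clouds is essentially a transform-and-merge step. The sketch below illustrates that step with hypothetical sensor poses and synthetic clouds, followed by a simple voxel downsampling to thin the merged cloud; it is not the registration method proposed by the authors.

```python
import numpy as np

def transform(points, T):
    """Apply a 4x4 homogeneous transform to an Nx3 point array."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]

def voxel_downsample(points, voxel=0.005):
    """Keep one averaged point per occupied voxel."""
    keys = np.floor(points / voxel).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inv)
    out = np.zeros((inv.max() + 1, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

# Hypothetical clouds from two Kinects with known extrinsics T_1, T_2 (sensor -> world).
rng = np.random.default_rng(6)
cloud_1 = rng.uniform(-0.1, 0.1, (3000, 3))
cloud_2 = rng.uniform(-0.1, 0.1, (3000, 3))
T_1 = np.eye(4)
T_2 = np.eye(4)
T_2[:3, 3] = [0.0, 0.0, 0.15]      # second sensor offset 15 cm along z (illustrative)

merged = np.vstack([transform(cloud_1, T_1), transform(cloud_2, T_2)])
print("merged:", len(merged), "-> downsampled:", len(voxel_downsample(merged)))
```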

