3D THERMAL MAPPING OF BUILDING ROOFS BASED ON FUSION OF THERMAL AND VISIBLE POINT CLOUDS IN UAV IMAGERY

Author(s):  
M. Dahaghin ◽  
F. Samadzadegan ◽  
F. Dadras Javan

Abstract. Thermography is a robust method for detecting thermal irregularities on building roofs, which are among the main sources of energy dissipation. Recently, UAVs have proven useful for gathering 3D thermal data of building roofs. A key challenge here is the low spatial resolution of thermal imagery, which leads to sparse point clouds. This paper proposes fusing visible and thermal point clouds to generate a high-resolution thermal point cloud of building roofs. To this end, camera calibration is performed to obtain the internal orientation parameters, and then thermal and visible point clouds are generated. In the next step, both point clouds are geo-referenced using control points. To extract building roofs from the visible point cloud, CSF ground filtering is applied, and the vegetation layer is removed using the RGBVI index. Afterward, a predefined threshold is applied to the z-component of the normal vectors to separate roof facets from walls. Finally, the visible point cloud of the building roofs and the registered thermal point cloud are combined into a fused dense point cloud. Results show a mean re-projection error of 0.31 pixels for thermal camera calibration and a mean absolute distance of 0.2 m for point cloud registration. The final product is a fused point cloud whose density is up to twice that of the initial thermal point cloud and which combines the spatial accuracy of the visible point cloud with the thermal information of the building roofs.
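
Two of the filtering steps above are simple enough to sketch. Below is a minimal NumPy illustration of the RGBVI vegetation mask and the normal-z threshold separating roof facets from walls; the threshold values and array layout are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: RGBVI vegetation masking and normal-z roof/wall separation.
import numpy as np

def rgbvi(rgb):
    """RGBVI = (G^2 - B*R) / (G^2 + B*R), computed per point from 0-255 RGB."""
    r, g, b = rgb[:, 0].astype(float), rgb[:, 1].astype(float), rgb[:, 2].astype(float)
    return (g**2 - b * r) / (g**2 + b * r + 1e-9)

def roof_mask(normals, rgb, veg_thresh=0.0, nz_thresh=0.8):
    """Keep points that are neither vegetation (high RGBVI) nor walls (low |n_z|)."""
    not_vegetation = rgbvi(rgb) < veg_thresh
    near_horizontal = np.abs(normals[:, 2]) > nz_thresh
    return not_vegetation & near_horizontal

# Toy cloud: a greenish point and a grey point, both with vertical normals.
rgb = np.array([[120, 180, 60], [150, 140, 160]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(roof_mask(normals, rgb))  # -> [False  True]: vegetation rejected
```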

2020 ◽  
Vol 12 (14) ◽  
pp. 2268
Author(s):  
Tian Zhou ◽  
Seyyed Meghdad Hasheminasab ◽  
Radhika Ravi ◽  
Ayman Habib

Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.
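
The LiDAR control point (LCP) idea can be sketched in a simplified, single-pass form: for each image-based sparse point, fit a plane to its LiDAR neighbourhood and take the point's orthogonal projection onto that plane. The neighbourhood size here is an assumption, and the paper's actual procedure is iterative.

```python
# Simplified, single-pass sketch of deriving a LiDAR control point (LCP).
import numpy as np

def fit_plane(points):
    """Least-squares plane through points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]  # singular vector of smallest singular value

def lidar_control_point(sfm_pt, lidar, k=20):
    """Project an image-based sparse point onto its local LiDAR plane."""
    nearest = lidar[np.argsort(np.linalg.norm(lidar - sfm_pt, axis=1))[:k]]
    c, n = fit_plane(nearest)
    return sfm_pt - np.dot(sfm_pt - c, n) * n  # orthogonal projection
```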


2020 ◽  
Vol 10 (4) ◽  
pp. 1275
Author(s):  
Zizhuang Wei ◽  
Yao Wang ◽  
Hongwei Yi ◽  
Yisong Chen ◽  
Guoping Wang

Semantic modeling is a challenging task that has received widespread attention in recent years. With the help of mini Unmanned Aerial Vehicles (UAVs), multi-view high-resolution aerial images of large-scale scenes can be conveniently collected. In this paper, we propose a semantic Multi-View Stereo (MVS) method to reconstruct 3D semantic models from 2D images. First, a 2D semantic probability distribution is obtained by a Convolutional Neural Network (CNN). Second, the calibrated camera poses are determined by Structure from Motion (SfM), while the depth maps are estimated by learning-based MVS. Combining 2D segmentation and 3D geometry information, dense point clouds with semantic labels are generated by a probability-based semantic fusion method. In the final stage, the coarse 3D semantic point cloud is optimized by both local and global refinements. By making full use of multi-view consistency, the proposed method efficiently produces a fine-level 3D semantic point cloud. The experimental results, evaluated on re-projection maps, achieve 88.4% pixel accuracy on the Urban Drone Dataset (UDD). In conclusion, our graph-based semantic fusion procedure and the refinement based on local and global information suppress and reduce the re-projection error.
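
A minimal sketch of the probability-based fusion step, assuming each 3D point already has the 2D class-probability vectors of the views it projects into: the fused label is the argmax of the summed per-view log-probabilities. The projection and visibility tests are assumed to happen elsewhere.

```python
# Minimal sketch of probability-based multi-view semantic fusion.
import numpy as np

def fuse_labels(per_view_probs):
    """per_view_probs: (n_views, n_points, n_classes) softmax outputs."""
    log_p = np.log(np.clip(per_view_probs, 1e-12, 1.0))
    return np.argmax(log_p.sum(axis=0), axis=1)  # (n_points,) fused labels

# Usage with random stand-in probabilities: 3 views, 100 points, 5 classes.
probs = np.random.dirichlet(np.ones(5), size=(3, 100))
labels = fuse_labels(probs)
```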


Author(s):  
C. Altuntas

Abstract. Image-based dense point cloud creation is an easy and low-cost method for the three-dimensional digitization of small and large scale objects and surfaces. It is an especially attractive method for cultural heritage documentation. In this method, the reprojection error on conjugate keypoints indicates the accuracy of the model and of keypoint localisation. In addition, sequential registration of the images of large historical buildings accumulates a large registration error. Thus, the accuracy of the model should be improved with control points or loop-closing imaging. The registration of the point cloud model into the georeference system is performed using control points. In this study, the historical Sultan Selim Mosque, built in the sixteenth century by the great architect Sinan, was modelled via a photogrammetric dense point cloud. The reprojection error and the number of keypoints were evaluated for different base/length ratios. In addition, georeferencing accuracy was evaluated for several configurations of control points, with and without loop-closure imaging.
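
The influence of the base/length ratio evaluated here can be illustrated with the standard normal-case depth-precision relation σ_Z = Z²/(B·f)·σ_px. The numbers below are assumed values for illustration, not this survey's parameters.

```python
# How the base-to-distance ratio drives depth precision (assumed values).
Z = 20.0          # object distance in metres
f_px = 4000.0     # focal length in pixels
sigma_px = 0.5    # image measurement precision in pixels
for ratio in (0.1, 0.3, 0.6):             # base-to-distance ratios
    B = ratio * Z                          # baseline in metres
    sigma_Z = Z**2 / (B * f_px) * sigma_px # normal-case depth precision
    print(f"B/Z = {ratio:.1f}: sigma_Z = {sigma_Z * 1000:.1f} mm")
```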


2020 ◽  
Vol 12 (18) ◽  
pp. 2923
Author(s):  
Tengfei Zhou ◽  
Xiaojun Cheng ◽  
Peng Lin ◽  
Zhenlun Wu ◽  
Ensheng Liu

Due to environmental and human factors, as well as the instrument itself, point clouds contain many uncertainties that directly affect data quality and the accuracy of subsequent processing, such as point cloud segmentation and 3D modeling. In this paper, to address this problem, the stochastic information of the point cloud coordinates is taken into account, and on the basis of the scanner observation principle within the Gauss–Helmert model, a novel general point-based self-calibration method is developed for terrestrial laser scanners, incorporating five additional parameters and six exterior orientation parameters. For cases where the actual instrument accuracy differs from the nominal one, a variance component estimation algorithm is implemented to reweight outliers once the residual errors of the observations are obtained. Since the proposed method is essentially a nonlinear model, the Gauss–Newton iteration method is applied to solve for the additional parameters and exterior orientation parameters. We conducted experiments on simulated and real data and compared the results with two existing methods. The experimental results showed that the proposed method improved the point accuracy from 10⁻⁴ to 10⁻⁸ (a priori known accuracy) or 10⁻⁷ (a priori unknown), and reduced the correlations among the parameters (approximately 60% of them). However, some correlations increased instead, which is a limitation of the general method.
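
A generic sketch of the Gauss–Newton iteration used for the solution is below; residuals(x) and jacobian(x) stand in for the linearized scanner observation equations with additional and exterior orientation parameters, which are specific to the paper's Gauss–Helmert formulation.

```python
# Generic Gauss-Newton iteration for a nonlinear least-squares model.
import numpy as np

def gauss_newton(residuals, jacobian, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residuals(x), jacobian(x)
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]  # solve linearized step
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Usage on a toy model: fit y = a * exp(b * t) to synthetic data.
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(res, jac, [1.0, -1.0]))  # converges to ~[2.0, -1.5]
```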


Author(s):  
Tao Peng ◽  
Satyandra K. Gupta

Point cloud construction using digital fringe projection (PCCDFP) is a non-contact technique for acquiring dense point clouds that represent the 3-D shapes of objects. Most existing PCCDFP systems use projection patterns consisting of straight fringes with fixed fringe pitches. In certain situations, such patterns do not give the best results. In our earlier work, we have shown that in some situations patterns that use curved fringes with spatial pitch variation can significantly improve the process of constructing point clouds. This paper describes algorithms for automatically generating adaptive projection patterns that use curved fringes with spatial pitch variation to provide improved results for the object being measured. In addition, we describe the supporting algorithms needed to utilize adaptive projection patterns. Both simulation and physical experiments show that adaptive patterns achieve better performance, in terms of measurement accuracy and coverage, than fixed-pitch straight-fringe patterns.
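
For context, the fixed-pitch baseline that the adaptive patterns improve on can be sketched as N-step phase-shifted straight sinusoidal fringes plus standard wrapped-phase recovery; the pattern size, pitch, and step count below are arbitrary assumptions, and the paper's adaptive patterns additionally curve the fringes and vary the pitch spatially.

```python
# Fixed-pitch N-step fringe patterns and standard wrapped-phase recovery.
import numpy as np

def fringe_patterns(width=1024, height=768, pitch=32, steps=4):
    """Straight sinusoidal fringes with uniform phase shifts, values in [0, 1]."""
    x = np.arange(width)
    phase0 = 2 * np.pi * x / pitch
    shifts = 2 * np.pi * np.arange(steps) / steps
    return [0.5 + 0.5 * np.cos(phase0 + s) * np.ones((height, 1))
            for s in shifts]

def wrapped_phase(images):
    """Standard N-step phase retrieval from the captured fringe images."""
    shifts = 2 * np.pi * np.arange(len(images)) / len(images)
    num = sum(im * np.sin(s) for im, s in zip(images, shifts))
    den = sum(im * np.cos(s) for im, s in zip(images, shifts))
    return -np.arctan2(num, den)  # wrapped to [-pi, pi]
```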


Author(s):  
Jinglu Wang ◽  
Bo Sun ◽  
Yan Lu

In this paper, we address the problem of reconstructing an object's surface from a single image using generative networks. First, we represent a 3D surface with an aggregation of dense point clouds from multiple views. Each point cloud is embedded in a regular 2D grid aligned on the image plane of a viewpoint, making the point cloud convolution-favored and ordered so as to fit into deep network architectures. The point clouds can be easily triangulated by exploiting the connectivities of the 2D grids to form mesh-based surfaces. Second, we propose an encoder-decoder network that generates such multiple view-dependent point clouds from a single image by regressing their 3D coordinates and visibilities. We also introduce a novel geometric loss that interprets discrepancy over 3D surfaces, as opposed to 2D projective planes, by resorting to the surface discretization of the constructed meshes. We demonstrate that the multi-view point regression network outperforms state-of-the-art methods with a significant improvement on challenging datasets.
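
The grid-embedded representation makes triangulation almost free: a view-dependent point cloud stored as an H × W × 3 array can be meshed directly from the 2D grid connectivity, as in this minimal sketch (visibility handling and the network itself are omitted).

```python
# Triangulate a grid-embedded point cloud via its 2D grid connectivity.
import numpy as np

def grid_to_mesh(grid):
    """grid: (H, W, 3) per-pixel 3D coordinates -> (vertices, triangle indices)."""
    h, w, _ = grid.shape
    verts = grid.reshape(-1, 3)
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()   # top-left, top-right
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()     # bottom-left, bottom-right
    faces = np.concatenate([np.stack([a, b, c], 1),      # two triangles per cell
                            np.stack([b, d, c], 1)])
    return verts, faces
```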


2014 ◽  
Vol 536-537 ◽  
pp. 213-217
Author(s):  
Meng Qiang Zhu ◽  
Jie Yang

This paper takes the following measures to solve the problem of 3D reconstruction. Camera calibration is based on a chessboard photographed in several different attitudes; corner coordinates obtained by corner detection are used to calibrate the camera. The calibration result is then used to correct the distorted images. Next, the left and right images are matched to find the imaging positions of the object's surface points, so that object depth can be calculated by triangulation. Following the inverse process of projection mapping, the depth and disparity information is projected into 3D space. As a result, we obtain a dense point cloud, ready for 3D reconstruction.
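
A hedged OpenCV sketch of the stages listed above (chessboard calibration, distortion correction, and depth from disparity): the file names, board size, and constants are assumptions, and the stereo matching step itself is left out.

```python
# Chessboard calibration, undistortion, and the depth-from-disparity relation.
import glob
import cv2
import numpy as np

board = (9, 6)  # inner corners of the assumed chessboard pattern
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_pts, img_pts, image_size = [], [], None
for path in glob.glob("calib/*.png"):      # several different-attitude images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        image_size = gray.shape[::-1]

_, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, image_size, None, None)
left = cv2.undistort(cv2.imread("left.png"), K, dist)  # correct distortion

# After matching, depth follows from triangulation: Z = f * B / d,
# with focal length f in pixels, baseline B, and disparity d.
```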


Author(s):  
K. Thoeni ◽  
A. Giacomini ◽  
R. Murtagh ◽  
E. Kniest

This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements needed to obtain results comparable to those of the TLS. The cameras used for this study range from commercial grade to professional grade and include a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall about 6 m high and 20 m long. The wall is partly smooth, with some evident geological features such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured with a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, the latter being taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify, as objectively as possible, the quality of the multi-view 3D reconstruction results obtained with various cameras and to evaluate their applicability to geotechnical problems.
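
The accuracy assessment amounts to a cloud-to-cloud comparison against the TLS ground truth; a minimal SciPy stand-in for that CloudCompare step is the nearest-neighbour distance from each photogrammetric point to the TLS cloud, sketched below with assumed array names.

```python
# Cloud-to-cloud deviation of a camera-derived cloud against a TLS reference.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(photo_pts, tls_pts):
    """Per-point nearest-neighbour distance to the TLS reference cloud."""
    d, _ = cKDTree(tls_pts).query(photo_pts)
    return d

# Usage (cam_cloud and tls_cloud are assumed (N, 3) arrays):
# d = cloud_to_cloud(cam_cloud, tls_cloud)
# print(d.mean(), np.percentile(d, 95))  # summary deviation statistics
```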


Author(s):  
C. Vasilakos ◽  
S. Chatzistamatis ◽  
O. Roussou ◽  
N. Soulakellis

Abstract. Building damage assessment after earthquakes is essential during the response phase following a catastrophic event. Modern techniques include terrestrial and aerial photogrammetry based on the Structure from Motion algorithm, and laser scanning, with the latter proving superior in accuracy due to its high-density point clouds. However, standardized procedures often cannot be followed during emergency surveys because of restrictions on outdoor operations due to debris or decrepit buildings, the heavy presence of civil protection agencies, the expedited deployment of survey teams, and the cost of operations. The aim of this paper is to evaluate whether terrestrial photogrammetry based on a handheld amateur DSLR camera can be used to map building damage and structural deformations and to produce façades with acceptable accuracy compared to the laser scanning technique. The study area is the Vrisa village, Lesvos, Greece, where a Mw 6.3 earthquake occurred on June 12th, 2017. A dense point cloud was created from digital images based on the Structure from Motion algorithm and compared with a dense point cloud acquired by a laser scanner. The distance measurement and comparison were conducted with the Multiscale Model to Model Cloud Comparison (M3C2) method. According to the results, the mean of the absolute distances between the two clouds is 0.038 m, while 94.9% of the point distances are less than 0.1 m. Terrestrial photogrammetry proved to be an accurate methodology for rapid earthquake damage assessment, and its products were used by local authorities to calculate compensation for property loss.
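
A much-simplified sketch of the M3C2 idea behind the comparison: at a core point, neighbours of both clouds are projected onto the local normal, and the signed distance is the difference of their mean projections. A spherical neighbourhood stands in here for M3C2's projection cylinder, and the radius and normal are assumed inputs rather than the method's multiscale estimates.

```python
# Simplified M3C2-style signed distance between two clouds at a core point.
import numpy as np
from scipy.spatial import cKDTree

def m3c2_distance(core, normal, cloud_a, cloud_b, radius=0.1):
    """Signed distance along `normal` between local patches of two clouds."""
    ia = cKDTree(cloud_a).query_ball_point(core, radius)
    ib = cKDTree(cloud_b).query_ball_point(core, radius)
    da = (cloud_a[ia] - core) @ normal  # projections onto the local normal
    db = (cloud_b[ib] - core) @ normal
    return db.mean() - da.mean()        # positive: cloud_b lies "above" cloud_a
```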


Author(s):  
O. Al Khalil ◽  
P. Grussenmeyer

Abstract. This paper explores the possibilities of using old images for the 2D and 3D documentation of archaeological monuments using open-source, free and commercial photogrammetric software. The available images show the external façade of the Western Gate and the Al Omari Mosque in the city of Bosra al-Sham in Syria, which were severely damaged during the recent war. The images were captured with a consumer camera and were originally used for separate 2D documentation of each part of the gate. 2D control points were used to scale the digital photomosaic, and reference distances were applied to scale the 3D models. The archive images were used to produce a 2D digital photomosaic of the monument by image rectification, and 3D dense point clouds by applying Structure from Motion (SfM) techniques. The geometric accuracy of the results has been assessed.
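
The 2D rectification step can be sketched with OpenCV: control points with known façade-plane coordinates define a homography that maps an archive image onto a scaled frontal view. The file names, point coordinates, façade dimensions, and scale below are illustrative assumptions.

```python
# Projective rectification of an archive facade image from 2D control points.
import cv2
import numpy as np

img = cv2.imread("archive_facade.jpg")
px_per_m = 100  # output scale: 1 cm per pixel (assumed)

# Four control points: pixel positions and their facade-plane coordinates (m).
image_pts = np.float32([[410, 300], [1650, 280], [1700, 1220], [380, 1260]])
plane_pts = np.float32([[0, 0], [12.4, 0], [12.4, 9.1], [0, 9.1]]) * px_per_m

H, _ = cv2.findHomography(image_pts, plane_pts)
rectified = cv2.warpPerspective(
    img, H, (int(12.4 * px_per_m), int(9.1 * px_per_m)))
cv2.imwrite("facade_rectified.png", rectified)
```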

