Refining the Joint 3D Processing of Terrestrial and UAV Images Using Quality Measures

2020
Vol 12 (18)
pp. 2873
Author(s):
Elisa Mariarosaria Farella
Alessandro Torresani
Fabio Remondino

The paper presents an efficient photogrammetric workflow to improve the 3D reconstruction of scenes surveyed by integrating terrestrial and Unmanned Aerial Vehicle (UAV) images. In recent years, the integration of these types of images has shown clear advantages for the complete and detailed 3D representation of large and complex scenarios. Nevertheless, their photogrammetric integration often raises several issues in the image orientation and dense 3D reconstruction processes, and noisy and erroneous 3D reconstructions are the typical outcome of inaccurate orientation. In this work, we propose an automatic filtering procedure that works at the sparse point cloud level and takes advantage of photogrammetric quality features. The filtering step removes low-quality 3D tie points before refining the image orientation in a new adjustment and generating the final dense point cloud. Our method generalizes to many datasets, as it employs statistical analyses of the quality feature distributions to identify suitable filtering thresholds. The reported results show the effectiveness and reliability of the method, verified using both internal and external quality checks as well as qualitative visual comparisons. We made the filtering tool publicly available on GitHub.
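As an illustration of the percentile-based filtering idea described above, a minimal sketch is given below; the choice of quality features (reprojection error, intersection angle, multiplicity) and the thresholds are assumptions for demonstration, not the authors' released tool.

```python
import numpy as np

def filter_tie_points(reproj_err, intersect_angle, multiplicity,
                      err_pct=90, angle_pct=10, min_multiplicity=3):
    """Flag low-quality sparse tie points using thresholds derived from the
    distributions of the quality features (an illustrative simplification of
    the statistical filtering idea, not the published implementation)."""
    keep = (reproj_err < np.percentile(reproj_err, err_pct)) & \
           (intersect_angle > np.percentile(intersect_angle, angle_pct)) & \
           (multiplicity >= min_multiplicity)
    return keep

# Hypothetical per-tie-point quality features (one value per 3D tie point):
# keep_mask = filter_tie_points(reproj_err, angles, multiplicity)
# sparse_cloud_filtered = sparse_cloud[keep_mask]   # then re-run the adjustment
```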

Author(s):  
V. Lambey
A. D. Prasad

Abstract. Photogrammetric surveying with Unmanned Aerial Vehicles (UAVs) has gained vast popularity within a short span of time. Coupled with remote sensing, UAVs have the potential to provide imagery at extraordinary spatial and temporal resolution. Currently, UAV platforms are the fastest and easiest source of data for mapping and 3D modelling, and can be considered a low-cost substitute for traditional airborne photogrammetry. In the present study, UAV applications are explored in terms of 3D modelling, visualization and parameter calculations. The National Institute of Technology Raipur, Raipur, is chosen as the study area, and high-resolution images are acquired from the UAV with 85% overlap. A 3D model is built from the point cloud generated from the UAV images. The results are compared with traditional surveying methods for validation. The average accuracies obtained for elevation points and area are 97.99% and 97.75%, respectively. The study shows that UAV-based surveying is an economical alternative in terms of money, time and resources compared with classical aerial photogrammetry methods.
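One plausible way to derive such an average percentage accuracy from a comparison with the traditional survey is sketched below; the formula and the sample values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def average_accuracy_percent(uav_values, reference_values):
    """Average accuracy as 100% minus the mean absolute relative error with
    respect to the traditional survey (one plausible reading of the figures
    quoted above, not the authors' stated formula)."""
    uav = np.asarray(uav_values, dtype=float)
    ref = np.asarray(reference_values, dtype=float)
    rel_err = np.abs(uav - ref) / ref
    return 100.0 * (1.0 - rel_err.mean())

# e.g. elevations from the UAV model vs. total-station elevations (hypothetical)
print(average_accuracy_percent([295.1, 296.4], [295.0, 296.6]))
```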


2014
Vol 536-537
pp. 213-217
Author(s):
Meng Qiang Zhu
Jie Yang

This paper takes the following measures to address the problem of 3D reconstruction. Camera calibration is based on a chessboard pattern captured in several images with different attitudes; corner coordinates obtained by corner detection are used to perform the calibration. The calibration result is then used to correct the distorted images. Next, the left and right images are matched to find the imaging position of each object surface point, so that the object depth can be calculated by triangulation. Following the inverse process of projection mapping, the depth and disparity information is projected into 3D space. The result is a dense point cloud, ready for 3D reconstruction.
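A minimal sketch of this calibration, undistortion, matching and reprojection chain, using OpenCV as an assumed toolkit; the file names, board size and baseline are illustrative placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Illustrative chessboard geometry (not from the paper)
board_size = (9, 6)                                   # inner corners per row/column
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)

# 1) Calibrate from several chessboard images taken at different attitudes
obj_pts, img_pts = [], []
for fname in ["calib_01.png", "calib_02.png", "calib_03.png"]:   # hypothetical files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)

# 2) Undistort the stereo pair with the calibration result
left = cv2.undistort(cv2.imread("left.png", 0), K, dist)
right = cv2.undistort(cv2.imread("right.png", 0), K, dist)

# 3) Dense matching -> disparity, then reprojection into 3D space
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0
Q = np.float32([[1, 0, 0, -left.shape[1] / 2],   # simplified reprojection matrix
                [0, 1, 0, -left.shape[0] / 2],   # (normally obtained from rectification)
                [0, 0, 0, K[0, 0]],
                [0, 0, -1.0 / 0.1, 0]])          # assumes a 0.1 m stereo baseline
points_3d = cv2.reprojectImageTo3D(disparity, Q)  # dense point cloud (H x W x 3)
```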


Author(s):  
F. Alidoost
H. Arefi

Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects within complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work are pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior orientation parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model is calculated from the points using 2.5D Delaunay triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.
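A minimal sketch of the 2.5D Delaunay triangulation step, assuming SciPy as the triangulation backend; the input cloud is synthetic, and the refinement and texturing steps are omitted.

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_25d(points):
    """Build a simple 2.5D mesh: triangulate in the XY plane and lift the
    triangles back onto the original 3D points (a simplification of the
    workflow described above, not the authors' implementation)."""
    tri = Delaunay(points[:, :2])       # triangulate x, y only
    return points, tri.simplices        # vertices + triangle vertex indices

# Hypothetical dense point cloud from image matching (N x 3 array)
cloud = np.random.rand(1000, 3)
vertices, faces = mesh_25d(cloud)
print(f"{len(vertices)} vertices, {len(faces)} triangles")
```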


Author(s):  
K. Thoeni
A. Giacomini
R. Murtagh
E. Kniest

This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements needed to obtain results comparable to those of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall about 6 m high and 20 m long. The wall is partly smooth, with some evident geological features such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, which was taken as ground truth. The result is a coloured point cloud for each camera showing the deviation relative to the TLS data. The main goal of this study is to quantify, as objectively as possible, the quality of the multi-view 3D reconstruction results obtained with the various cameras and to evaluate their applicability to geotechnical problems.
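A minimal sketch of the cloud-to-cloud comparison against the TLS ground truth, similar in spirit to CloudCompare's distance tool; the clouds below are synthetic placeholders, not the survey data.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distance(eval_cloud, reference_cloud):
    """Nearest-neighbour distance from every evaluated point to the
    reference (TLS) cloud, analogous to a cloud-to-cloud comparison."""
    tree = cKDTree(reference_cloud)
    distances, _ = tree.query(eval_cloud, k=1)
    return distances

# Hypothetical clouds (N x 3 arrays); real data would be loaded from files
tls = np.random.rand(50000, 3) * [20.0, 1.0, 6.0]              # reference wall ~20 m x 6 m
camera = tls[::5] + np.random.normal(0, 0.01, (10000, 3))       # noisy multi-view cloud
d = cloud_to_cloud_distance(camera, tls)
print(f"mean deviation: {d.mean()*1000:.1f} mm, 95th pct: {np.percentile(d, 95)*1000:.1f} mm")
```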


2021
Vol 13 (23)
pp. 4811
Author(s):
Rudolf Urban
Martin Štroner
Lenka Línková

Lately, affordable unmanned aerial vehicle (UAV) lidar systems have started to appear on the market, highlighting the need for methods that facilitate proper verification of their accuracy. However, the dense point cloud produced by such systems makes it difficult to identify individual points that could serve as reference points. In this paper, we propose such a method utilizing accurately georeferenced targets covered with high-reflectivity foil, which can be easily extracted from the cloud; their centers can be determined and used to calculate the systematic shift of the lidar point cloud. Subsequently, the lidar point cloud is cleaned of this systematic shift and compared with a dense SfM point cloud, thus yielding the residual accuracy. We successfully applied this method to the evaluation of an affordable DJI ZENMUSE L1 scanner mounted on a DJI Matrice 300 UAV and found that the accuracies of this system (3.5 cm in all directions after removal of the global georeferencing error) are better than the manufacturer-declared values (10/5 cm horizontal/vertical). However, evaluation of the color information revealed a relatively high (approx. 0.2 m) systematic shift.
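A minimal sketch of the target-based shift estimation described above, assuming intensity-attributed lidar points and known target coordinates; the thresholds and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def estimate_systematic_shift(points_xyz, intensity, target_centers,
                              intensity_threshold=0.9, radius=0.5):
    """Estimate the lidar cloud's systematic shift from high-reflectivity targets:
    extract bright points near each georeferenced target, take their centroid,
    and average the centroid-to-target offsets (a simplified sketch)."""
    bright = points_xyz[intensity > intensity_threshold]
    offsets = []
    for center in target_centers:
        near = bright[np.linalg.norm(bright - center, axis=1) < radius]
        if len(near):
            offsets.append(near.mean(axis=0) - center)
    return np.mean(offsets, axis=0)                   # (dx, dy, dz) systematic shift

# cloud_xyz, cloud_intensity and reference_targets would come from the survey data:
# shift = estimate_systematic_shift(cloud_xyz, cloud_intensity, reference_targets)
# corrected_cloud = cloud_xyz - shift                 # remove the systematic shift
```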


Author(s):  
E. M. Farella
A. Torresani
F. Remondino

Abstract. The paper presents an innovative approach for improving the orientation results when terrestrial and UAV images are jointly processed. With existing approaches, the processing of images coming from different platforms and sensors often leads to noisy and inaccurate 3D reconstructions, due to the different nature and properties of the acquired images. In this work, a photogrammetric pipeline is proposed to filter out poorly computed tie points according to a set of quality feature indicators. A completely automatic procedure has been developed to filter the sparse point cloud in order to improve the orientation results before computing the dense point cloud. We report tests and results on a dataset of about 140 images (Modena cathedral, Italy). The effectiveness of the filtering procedure was verified using internal quality indicators, external checks (ground truth data) and qualitative visual analyses.


2019
Vol 11 (10)
pp. 1188
Author(s):
Li Zheng
Yuhao Li
Meng Sun
Zheng Ji
Manzhu Yu
...  

VLS (Vehicle-borne Laser Scanning) can easily scan the road surface at close range with high density, while a UAV (Unmanned Aerial Vehicle) can capture a wider range of ground images. Owing to the complementary characteristics of the VLS and UAV platforms, combining the two becomes a more effective method of data acquisition. In this paper, a non-rigid method for the aerotriangulation of UAV images assisted by a vehicle-borne light detection and ranging (LiDAR) point cloud is proposed, which greatly reduces the number of control points and improves automation. We convert the LiDAR point cloud-assisted aerotriangulation into a registration problem between two point clouds, which does not require complicated feature extraction and matching between the point cloud and the images. Compared with the iterative closest point (ICP) algorithm, this method can address non-rigid image distortion with a more rigorous adjustment model and a higher aerotriangulation accuracy. The experimental results show that the constraint of the LiDAR point cloud ensures high aerotriangulation accuracy, even in the absence of control points. The root-mean-square errors (RMSE) of the checkpoints on the x, y, and z axes are 0.118 m, 0.163 m, and 0.084 m, respectively, which verifies the reliability of the proposed method. As a necessary condition for joint mapping, research on combining VLS and UAV images under uncontrolled conditions will greatly improve the efficiency of joint mapping and reduce its cost.
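A minimal sketch of how the per-axis checkpoint RMSE reported above is typically computed; the input arrays are placeholders, not the paper's data.

```python
import numpy as np

def checkpoint_rmse(estimated_xyz, surveyed_xyz):
    """Per-axis RMSE of checkpoints: coordinates estimated by the
    aerotriangulation vs. independently surveyed coordinates (N x 3 arrays)."""
    diff = np.asarray(estimated_xyz) - np.asarray(surveyed_xyz)
    return np.sqrt(np.mean(diff ** 2, axis=0))        # (RMSE_x, RMSE_y, RMSE_z)

# rmse_x, rmse_y, rmse_z = checkpoint_rmse(at_checkpoints, gnss_checkpoints)
```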


2021
Vol 13 (6)
pp. 1222
Author(s):
Gil Gonçalves
Diogo Gonçalves
Álvaro Gómez-Gutiérrez
Umberto Andriolo
Juan Antonio Pérez-Alvárez

Monitoring the dynamics of coastal cliffs is fundamental for the safety of communities, buildings, utilities, and infrastructures located near the coastline. Structure-from-Motion and Multi-View Stereo (SfM-MVS) photogrammetry based on Unmanned Aerial Systems (UAS) is a flexible and cost-effective surveying technique for generating a dense 3D point cloud of the whole cliff face (from bottom to top) with high spatial and temporal resolution. In this paper, in order to generate a reproducible, reliable, precise, accurate, and dense point cloud of the cliff face, a comprehensive analysis of the SfM-MVS processing parameters, image redundancy and acquisition geometry was performed. Using two different UAS, a fixed-wing and a multi-rotor, two flight missions were executed with the aim of reconstructing the geometry of an almost vertical cliff located on the central Portuguese coast. The results indicated that optimizing the processing parameters of Agisoft Metashape can improve the 3D accuracy of the point cloud by up to 2 cm. Regarding the image acquisition geometry, the high off-nadir (90°) dataset taken by the multi-rotor generated a denser and more accurate point cloud, with fewer data gaps, than that generated by the low off-nadir (3°) dataset taken by the fixed-wing. It was also found that properly reducing the high overlap of the image dataset acquired by the multi-rotor yields an optimal image set, speeding up processing time without compromising the accuracy and density of the generated point cloud. The analysis and results presented in this paper improve the knowledge required for the 3D reconstruction of coastal cliffs by UAS, providing new insights into the technical aspects needed to optimize monitoring surveys.
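A back-of-the-envelope sketch of how thinning an image strip changes its forward overlap, assuming a constant baseline between consecutive exposures; this standard approximation is an assumption for illustration, not a value from the paper.

```python
def thinned_overlap(original_overlap, keep_every):
    """Forward overlap remaining after keeping every k-th image of a strip,
    assuming a constant baseline: o_new = 1 - k * (1 - o_orig)."""
    return 1.0 - keep_every * (1.0 - original_overlap)

# e.g. a 90% forward overlap thinned by keeping every 2nd or 3rd image
print(f"{thinned_overlap(0.90, 2):.0%}")   # 80%
print(f"{thinned_overlap(0.90, 3):.0%}")   # 70%
```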


Author(s):  
C. Serifoglu
O. Gungor
V. Yilmaz

Digital Elevation Model (DEM) generation is one of the leading application areas in geomatics. Since a DEM represents the bare earth surface, the very first step of generating a DEM is to separate the ground and non-ground points, which is called ground filtering. Once the point cloud is filtered, the ground points are interpolated to generate the DEM. LiDAR (Light Detection and Ranging) point clouds have been used in many applications thanks to their success in representing the objects they capture. Hence, various ground filtering algorithms have been reported in the literature to filter LiDAR data. Since LiDAR data acquisition is still a costly process, using point clouds generated from UAV images to produce DEMs is a reasonable alternative. In this study, point clouds with three different densities were generated from aerial photos taken from a UAV (Unmanned Aerial Vehicle) to examine the effect of point density on filtering performance. The point clouds were then filtered by means of five different ground filtering algorithms, namely Progressive Morphological 1D (PM1D), Progressive Morphological 2D (PM2D), Maximum Local Slope (MLS), Elevation Threshold with Expand Window (ETEW) and Adaptive TIN (ATIN). The filtering performance of each algorithm was investigated qualitatively and quantitatively. The results indicated that the ATIN and PM2D algorithms showed the best overall ground filtering performance, while the MLS and ETEW algorithms were the least successful. It was concluded that point clouds generated from UAVs can be a good alternative to LiDAR data.
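A deliberately simplified grid-based ground filter is sketched below to illustrate the general idea of separating ground from non-ground points; it is not one of the five algorithms compared in the study, and the cell size and tolerance are illustrative assumptions.

```python
import numpy as np

def simple_ground_filter(points, cell_size=2.0, height_tol=0.3):
    """Keep points lying within height_tol of the lowest elevation found in
    their grid cell (a minimal stand-in for a real ground filtering algorithm)."""
    ij = np.floor(points[:, :2] / cell_size).astype(int)
    ground = np.zeros(len(points), dtype=bool)
    cells = {}
    for idx, key in enumerate(map(tuple, ij)):        # group point indices by cell
        cells.setdefault(key, []).append(idx)
    for idx_list in cells.values():
        idx_arr = np.array(idx_list)
        zmin = points[idx_arr, 2].min()
        ground[idx_arr] = points[idx_arr, 2] <= zmin + height_tol
    return ground

# cloud = np.loadtxt("uav_points.xyz")                 # hypothetical N x 3 array
# dem_points = cloud[simple_ground_filter(cloud)]      # ground points for DEM interpolation
```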

