OVERVIEW OF OPEN SOURCE SOFTWARE FOR CLOSE RANGE PHOTOGRAMMETRY

Author(s):  
G. Vacca

Abstract. In the photogrammetric process of 3D reconstruction of an object or a building, multi-image orientation is one of the most important tasks, and it often includes simultaneous camera calibration. The accuracy of image orientation and camera calibration significantly affects the quality and accuracy of all subsequent photogrammetric processes, such as determining the spatial coordinates of individual points or 3D modelling. In the context of computer vision, a full-field analysis procedure known as Structure from Motion (SfM) is used, which determines the camera's interior and exterior orientation parameters and the 3D model simultaneously. These procedures were originally designed and developed within photogrammetry, but the greatest development and innovation came from the computer vision community from the late 90s onwards, together with the SfM method. Reconstructions based on this method were initially intended for visualization purposes rather than for photogrammetry and mapping. Thanks to advances in computer technology and computing performance, a large number of images can now be automatically oriented in an arbitrarily defined coordinate system by different algorithms, often available in open source software (VisualSFM, Bundler, PMVS2, CMVS, etc.) or as web services (Microsoft Photosynth, Autodesk 123D Catch, My3DScanner, etc.). However, it is important to assess the accuracy and reliability of these automated procedures. This paper presents the results obtained from low-cost close range photogrammetric surveys of the dome, processed with open source software based on the Structure from Motion approach: VisualSFM, OpenDroneMap (ODM) and Regard3D. The photogrammetric surveys were also processed with the commercial software Photoscan by Agisoft.

For the photogrammetric survey we used the Canon EOS M3 digital camera (24.2 Megapixel, pixel size 3.72 µm). We also surveyed the dome with the Faro Focus 3D TLS. Only one scan was carried out, from ground level, at a resolution setting of 1/4 with 3x quality, corresponding to a resolution of 7 mm / 10 m. Both the TLS point cloud and the Photoscan point cloud were used as references to validate the point clouds produced by VisualSFM, OpenDroneMap and Regard3D. The validation was done using the CloudCompare open source software.
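A minimal sketch of the kind of cloud-to-cloud validation described above, using Open3D as an open-source stand-in for the CloudCompare C2C comparison; the file names are placeholders, not the paper's datasets.

```python
# Hedged sketch: validate an SfM point cloud against a TLS reference by
# nearest-neighbour cloud-to-cloud distances (in the spirit of CloudCompare C2C).
import numpy as np
import open3d as o3d

reference = o3d.io.read_point_cloud("tls_reference.ply")    # TLS / Photoscan reference (placeholder)
evaluated = o3d.io.read_point_cloud("visualsfm_dense.ply")  # cloud to validate (placeholder)

# Distance from every evaluated point to its nearest neighbour in the reference
distances = np.asarray(evaluated.compute_point_cloud_distance(reference))

print(f"mean = {distances.mean():.4f} m")
print(f"std  = {distances.std():.4f} m")
print(f"RMS  = {np.sqrt((distances ** 2).mean()):.4f} m")
```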

Author(s):  
C. Stamatopoulos ◽  
C. S. Fraser

Automated close-range photogrammetric network orientation and camera calibration has traditionally been associated with the use of coded targets in the object space to allow for an initial relative orientation (RO) and subsequent spatial resection of the images. However, over the last decade, advances coming mainly from the computer vision (CV) community have allowed for fully automated orientation via feature-based matching techniques. There are a number of advantages in such methodologies for various types of applications, as well as for cases where the use of artificial targets might not be possible or preferable, for example when attempting calibration from low-level aerial imagery, as with UAVs, or when calibrating long-focal-length lenses where small image scales call for inconveniently large coded targets. While there are now a number of CV-based algorithms for multi-image orientation within narrow-baseline networks, with accompanying open-source software, from a photogrammetric standpoint the results are typically disappointing as the metric integrity of the resulting models is generally poor, or even unknown. The objective addressed in this paper is target-free automatic multi-image orientation, maintaining metric integrity, within networks that incorporate wide-baseline imagery. The focus is on both the development of a methodology that overcomes the shortcomings that can be present in current CV algorithms, and on the photogrammetric priorities and requirements that exist in current processing pipelines. This paper also reports on the application of the proposed methodology to automated target-free camera self-calibration and discusses the process via practical examples.
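As a hedged illustration of the CV-style, feature-based first step toward target-free orientation mentioned above (not the authors' pipeline), the sketch below matches SIFT features between an image pair and recovers the relative orientation via the essential matrix; the image paths and camera matrix K are assumptions.

```python
# Minimal sketch of target-free relative orientation from feature matches.
import cv2
import numpy as np

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder images
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching to reject ambiguous correspondences
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Assumed interior orientation (focal length in pixels, principal point)
K = np.array([[3000.0, 0.0, 2000.0],
              [0.0, 3000.0, 1500.0],
              [0.0, 0.0, 1.0]])

E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("relative rotation:\n", R)
print("baseline direction:", t.ravel())
```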


Author(s):  
V. Baiocchi ◽  
M. Barbarella ◽  
S. Del Pizzo ◽  
F. Giannone ◽  
S. Troisi ◽  
...  

A photogrammetric survey of a unique archaeological site is reported in this paper. The survey was performed using both a panoramic image-based solution and a classical procedure. The panoramic image-based solution was carried out with a commercial system: the Trimble V10 Imaging Rover (IR). This instrument is an integrated camera system that captures 360-degree digital panoramas, composed of 12 images, with a single push. The direct comparison of the point clouds obtained with the traditional photogrammetric procedure and with the V10 stations, using the same GCP coordinates, was carried out in CloudCompare, an open source software package that compares two point clouds and provides the main statistical data. The site is a portion of the dial plate of the “Horologium Augusti”, inaugurated in 9 B.C.E. in the area of Campo Marzio and still present, intact, in the same position, in a cellar of a building in Rome around 7 meters below the present ground level.
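An illustrative sketch, not the authors' workflow, of how a model can be brought into the GCP reference frame before such a comparison: a least-squares similarity transform (Umeyama/Kabsch style) estimated from the GCP coordinates measured in both systems. The GCP files are placeholders.

```python
# Hedged sketch: estimate scale, rotation and translation from common GCPs.
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform mapping src (N x 3) onto dst (N x 3)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, S, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:      # guard against reflections
        D[2, 2] = -1.0
    R = Vt.T @ D @ U.T
    scale = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = dst.mean(axis=0) - scale * R @ src.mean(axis=0)
    return scale, R, t

gcp_model = np.loadtxt("gcp_model.txt")    # GCPs in the model frame (placeholder)
gcp_world = np.loadtxt("gcp_world.txt")    # same GCPs in the survey frame (placeholder)
s, R, t = similarity_transform(gcp_model, gcp_world)

residuals = gcp_world - (s * (R @ gcp_model.T).T + t)
print("mean 3D residual at GCPs [m]:", np.sqrt((residuals ** 2).sum(axis=1)).mean())
```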


2021 ◽  
Author(s):  
Ali Mirzazade ◽  
Cosmin Popescu ◽  
Thomas Blanksvärd ◽  
Björn Täljsten

In bridge inspection, vertical displacement is a relevant parameter for both short- and long-term health monitoring. Assessing change in deflections could also simplify the assessment work for inspectors. Recent developments in digital camera technology and photogrammetry software enable point clouds with colour information (RGB values) to be generated. Thus, close range photogrammetry offers the potential of monitoring both large- and small-scale damage through point clouds. The current paper aims to monitor geometrical deviations in the Pahtajokk Bridge, Northern Sweden, using an optical data acquisition technique. The bridge in this study was scanned twice, almost one year apart. After point cloud generation the datasets were compared to detect geometrical deviations. The first scan was carried out with both close range photogrammetry (CRP) and terrestrial laser scanning (TLS), while the second was performed with CRP only. Analysis of the results shows the potential of CRP in bridge inspection.
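A hedged sketch of one simple way to screen two epochs for vertical deviations: for each point of the second epoch, find the planimetrically nearest point of the first epoch and difference the heights. File names and the matching threshold are illustrative assumptions; the paper's own comparison may rely on other tools (e.g. CloudCompare).

```python
# Vertical-deviation screening between two co-registered survey epochs.
import numpy as np
from scipy.spatial import cKDTree

epoch1 = np.loadtxt("bridge_epoch1.xyz")   # N x 3 (x, y, z), same datum (placeholder)
epoch2 = np.loadtxt("bridge_epoch2.xyz")

tree = cKDTree(epoch1[:, :2])                  # index on plan coordinates only
dist_xy, idx = tree.query(epoch2[:, :2], k=1)

valid = dist_xy < 0.05                          # ignore matches farther than 5 cm in plan
dz = epoch2[valid, 2] - epoch1[idx[valid], 2]   # signed vertical deviation

print(f"median dz = {np.median(dz) * 1000:.1f} mm")
print(f"95th percentile |dz| = {np.percentile(np.abs(dz), 95) * 1000:.1f} mm")
```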


2019 ◽  
Vol 11 (18) ◽  
pp. 2154 ◽  
Author(s):  
Ján Šašak ◽  
Michal Gallay ◽  
Ján Kaňuk ◽  
Jaroslav Hofierka ◽  
Jozef Minár

Airborne and terrestrial laser scanning and close-range photogrammetry are frequently used for very high-resolution mapping of the land surface. These techniques require a good mapping strategy to provide full visibility of all areas, otherwise the resulting data will contain areas with no data (data shadows). In particular, deglaciated rugged alpine terrain with abundant large boulders, vertical rock faces and polished roche moutonnée surfaces, complicated by poor accessibility for terrestrial mapping, is still a challenge. In this paper, we present a novel methodological approach based on the combined use of terrestrial laser scanning (TLS) and close-range photogrammetry from an unmanned aerial vehicle (UAV) for generating a high-resolution point cloud and digital elevation model (DEM) of complex alpine terrain. The approach is demonstrated using a small study area in the upper part of a deglaciated valley in the Tatry Mountains, Slovakia. The more accurate TLS point cloud was supplemented by the UAV point cloud in areas with insufficient TLS data coverage. The accuracy of the iterative closest point adjustment of the UAV and TLS point clouds was in the order of several centimeters, while the standard deviation of the mutual orientation of the TLS scans was in the order of millimeters. The generated high-resolution DEM was compared to the SRTM DEM, TanDEM-X and national DMR3 DEM products, confirming its applicability in a wide range of geomorphological applications.
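A minimal sketch of the kind of ICP adjustment described above, aligning a UAV-photogrammetry cloud to the more accurate TLS cloud; Open3D, the file names, the voxel size and the identity initialization are assumptions for illustration, not the authors' settings.

```python
# Hedged sketch: ICP alignment of a UAV SfM cloud onto a TLS reference.
import numpy as np
import open3d as o3d

tls = o3d.io.read_point_cloud("tls_merged.ply")      # reference / target (placeholder)
uav = o3d.io.read_point_cloud("uav_sfm_dense.ply")   # cloud to be adjusted / source (placeholder)

# Downsample for a faster, more stable correspondence search
tls_ds = tls.voxel_down_sample(voxel_size=0.05)
uav_ds = uav.voxel_down_sample(voxel_size=0.05)

init = np.eye(4)   # assumes both clouds are already roughly georeferenced
result = o3d.pipelines.registration.registration_icp(
    uav_ds, tls_ds, 0.5, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("fitness:", result.fitness)
print("inlier RMSE [m]:", result.inlier_rmse)
uav.transform(result.transformation)   # apply the adjustment to the full-resolution cloud
```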


Author(s):  
M. Dahaghin ◽  
F. Samadzadegan ◽  
F. Dadras Javan

Abstract. Thermography is a robust method for detecting thermal irregularities on building roofs, which are one of the main sources of energy dissipation. Recently, UAVs have been shown to be useful for gathering 3D thermal data of building roofs. In this context, the low spatial resolution of thermal imagery is a challenge which leads to sparse point clouds. This paper suggests the fusion of visible and thermal point clouds to generate a high-resolution thermal point cloud of the building roofs. For this purpose, camera calibration is performed to obtain the interior orientation parameters, and then the thermal and visible point clouds are generated. In the next step, both point clouds are geo-referenced with control points. To extract building roofs from the visible point cloud, CSF ground filtering is applied, and the vegetation layer is removed using the RGBVI index. Afterward, a predefined threshold is applied to the z-component of the normal vectors in order to separate roof facets from walls. Finally, the visible point cloud of the building roofs and the registered thermal point cloud are combined into a fused dense point cloud. Results show a mean re-projection error of 0.31 pixels for the thermal camera calibration and a mean absolute distance of 0.2 m for the point cloud registration. The final product is a fused point cloud whose density improves to up to twice that of the initial thermal point cloud, and which has the spatial accuracy of the visible point cloud along with the thermal information of the building roofs.
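A hedged sketch of two of the filtering steps named above: removing vegetation with the RGBVI index, RGBVI = (G² − B·R)/(G² + B·R), and keeping near-horizontal roof facets by thresholding the z-component of the point normals. The thresholds and file names are illustrative assumptions, not the paper's calibrated values.

```python
# Roof-facet extraction sketch: RGBVI vegetation removal + normal-z threshold.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("visible_after_csf.ply")   # placeholder: cloud after CSF ground filtering
pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))

rgb = np.asarray(pcd.colors)          # RGB values in [0, 1]
r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
rgbvi = (g ** 2 - b * r) / (g ** 2 + b * r + 1e-12)   # RGB vegetation index

normals = np.asarray(pcd.normals)
keep = (rgbvi < 0.1) & (np.abs(normals[:, 2]) > 0.9)  # non-vegetation, near-horizontal facets

roofs = pcd.select_by_index(np.where(keep)[0])
o3d.io.write_point_cloud("roof_points.ply", roofs)
```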


2019 ◽  
Vol 11 (10) ◽  
pp. 1188
Author(s):  
Li Zheng ◽  
Yuhao Li ◽  
Meng Sun ◽  
Zheng Ji ◽  
Manzhu Yu ◽  
...  

VLS (Vehicle-borne Laser Scanning) can easily scan the road surface at close range with high density, while a UAV (Unmanned Aerial Vehicle) can capture ground images over a wider area. Due to the complementary features of the VLS and UAV platforms, combining the two becomes a more effective method of data acquisition. In this paper, a non-rigid method for the aerotriangulation of UAV images assisted by a vehicle-borne light detection and ranging (LiDAR) point cloud is proposed, which greatly reduces the number of control points and improves automation. We convert the LiDAR point-cloud-assisted aerotriangulation into a registration problem between two point clouds, which does not require complicated feature extraction and matching between the point cloud and the images. Compared with the iterative closest point (ICP) algorithm, this method can address non-rigid image distortion with a more rigorous adjustment model and a higher aerotriangulation accuracy. The experimental results show that the constraint of the LiDAR point cloud ensures high aerotriangulation accuracy, even in the absence of control points. The root-mean-square errors (RMSE) of the checkpoints on the x, y, and z axes are 0.118 m, 0.163 m, and 0.084 m, respectively, which verifies the reliability of the proposed method. As a necessary condition for joint mapping, research on VLS and UAV imagery under uncontrolled circumstances will greatly improve the efficiency of joint mapping and reduce its cost.
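A small worked example of the checkpoint accuracy measure quoted above: the per-axis root-mean-square error between adjusted and surveyed checkpoint coordinates. The input file layout is an assumption for illustration.

```python
# Per-axis RMSE at independent checkpoints.
import numpy as np

# columns: X_est, Y_est, Z_est, X_ref, Y_ref, Z_ref  (one checkpoint per row; placeholder file)
chk = np.loadtxt("checkpoints.txt")
diff = chk[:, :3] - chk[:, 3:6]
rmse = np.sqrt((diff ** 2).mean(axis=0))
print(f"RMSE x = {rmse[0]:.3f} m, y = {rmse[1]:.3f} m, z = {rmse[2]:.3f} m")
```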


Author(s):  
V. Petras ◽  
A. Petrasova ◽  
J. Jeziorska ◽  
H. Mitasova

Today’s methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion (SfM) technique, and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques, which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the resulting digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely the Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.
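A hedged sketch of point cloud decimation through PDAL, one of the libraries the GRASS GIS tools build on; the choice of filter, the step value and the file names are illustrative assumptions rather than the tools' defaults.

```python
# Decimate a dense SfM point cloud with a PDAL pipeline (keep every 10th point).
import json
import pdal

pipeline_def = {
    "pipeline": [
        "uav_dense.laz",                               # input dense cloud (placeholder)
        {"type": "filters.decimation", "step": 10},    # simple every-Nth-point decimation
        {"type": "writers.las", "filename": "uav_decimated.laz"}
    ]
}

pipeline = pdal.Pipeline(json.dumps(pipeline_def))
count = pipeline.execute()
print(f"points written after decimation: {count}")
```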


Author(s):  
D. Costantino ◽  
M. G. Angelini ◽  
F. Settembrini

The paper presents a software package dedicated to the elaboration of point clouds, called Intelligent Cloud Viewer (ICV), made in-house by AESEI software (spin-off of Politecnico di Bari), allowing point clouds of several tens of millions of points to be viewed even on systems that are not very high performance. Processing is carried out on the whole point cloud, while only part of it is displayed in order to speed up rendering. It is designed for 64-bit Windows, is fully written in C++, and integrates different specialized modules for computer graphics (Open Inventor by SGI, Silicon Graphics Inc.), mathematics (BLAS, Eigen), computational geometry (CGAL, Computational Geometry Algorithms Library), registration and advanced algorithms for point clouds (PCL, Point Cloud Library), advanced data structures (Boost), etc. ICV incorporates a number of features such as, for example, cropping, transformation and georeferencing, matching, registration, decimation, sections, distance calculation between clouds, etc. It has been tested on photographic and TLS (Terrestrial Laser Scanner) data, obtaining satisfactory results. The potential of the software was tested by carrying out a photogrammetric survey of Castel del Monte, for which a previous laser scanner survey made from the ground by the same authors was already available. For the aerophotogrammetric survey, a flight height of approximately 1000 ft AGL (Above Ground Level) was adopted and, overall, more than 800 photos were acquired in just over 15 minutes, with an overlap of not less than 80% and a planned speed of about 90 knots.
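A back-of-the-envelope check of the survey figures quoted above (90 knots, ~1000 ft AGL, over 800 photos in just over 15 minutes, overlap of at least 80%); the camera geometry at the end is purely an assumption for illustration.

```python
# Survey arithmetic: exposure interval, base between stations, and the
# along-track footprint required for 80% forward overlap.
knots_to_ms = 0.514444
ft_to_m = 0.3048

ground_speed = 90 * knots_to_ms           # ~46.3 m/s
flight_height = 1000 * ft_to_m            # ~305 m AGL
interval = (15 * 60) / 800                # ~1.1 s between exposures
base = ground_speed * interval            # ~52 m between exposure stations

min_footprint = base / (1 - 0.80)         # footprint needed for 80% overlap, ~260 m
print(f"exposure interval ~ {interval:.2f} s, base ~ {base:.1f} m")
print(f"along-track footprint needed for 80% overlap ~ {min_footprint:.0f} m")

# Assumed example camera: 24 mm lens on a 36 x 24 mm full-frame sensor, long
# side along track -> footprint ~ H * 36 / 24, comfortably above the requirement.
footprint = flight_height * 36.0 / 24.0
print(f"assumed camera footprint ~ {footprint:.0f} m")
```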


Author(s):  
M. Zacharek ◽  
P. Delis ◽  
M. Kedzierski ◽  
A. Fryskowska

These studies were conducted using a non-metric digital camera and dense image matching algorithms, as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were executed. In the research, the OSM Bundler, the VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed with those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.
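An illustrative sketch of the scaling step mentioned above: an SfM model in an arbitrary scale is brought to metric units using one distance measured on the object. The point coordinates, the measured length and the file names are placeholders.

```python
# Scale an arbitrarily scaled SfM point cloud to metric units from one known distance.
import numpy as np

model = np.loadtxt("dense_cloud_model_units.xyz")    # N x 3, arbitrary scale (placeholder)

# Two well identified points (e.g. targets or sharp edges) in model coordinates
p1_model = np.array([0.412, 1.027, 0.233])
p2_model = np.array([1.956, 1.031, 0.241])
measured_length = 3.215                               # same distance taped on site [m] (placeholder)

scale = measured_length / np.linalg.norm(p2_model - p1_model)
model_metric = model * scale
np.savetxt("dense_cloud_metric.xyz", model_metric, fmt="%.4f")
print(f"applied scale factor: {scale:.5f}")
```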

