Automated Target-Free Network Orientation and Camera Calibration

Author(s):  
C. Stamatopoulos ◽  
C. S. Fraser

Automated close-range photogrammetric network orientation and camera calibration has traditionally been associated with the use of coded targets in the object space to allow for an initial relative orientation (RO) and subsequent spatial resection of the images. However, over the last decade, advances coming mainly from the computer vision (CV) community have allowed for fully automated orientation via feature-based matching techniques. There are a number of advantages in such methodologies for various types of applications, as well as for cases where the use of artificial targets might not be possible or preferable, for example when attempting calibration from low-level aerial imagery, as with UAVs, or when calibrating long-focal-length lenses where small image scales call for inconveniently large coded targets. While there are now a number of CV-based algorithms for multi-image orientation within narrow-baseline networks, with accompanying open-source software, from a photogrammetric standpoint the results are typically disappointing, as the metric integrity of the resulting models is generally poor, or even unknown. The objective addressed in this paper is target-free automatic multi-image orientation, maintaining metric integrity, within networks that incorporate wide-baseline imagery. The focus is on both the development of a methodology that overcomes the shortcomings that can be present in current CV algorithms, and on the photogrammetric priorities and requirements that exist in current processing pipelines. This paper also reports on the application of the proposed methodology to automated target-free camera self-calibration and discusses the process via practical examples.

Author(s):  
G. Vacca

<p><strong>Abstract.</strong> In the photogrammetric process of the 3D reconstruction of an object or a building, multi-image orientation is one of the most important tasks, and it often includes simultaneous camera calibration. The accuracy of image orientation and camera calibration significantly affects the quality and accuracy of all subsequent photogrammetric processes, such as determining the spatial coordinates of individual points or 3D modeling. In computer vision, a full-field analysis procedure is used, leading to the so-called Structure from Motion (SfM) approach, which includes the simultaneous determination of the camera's interior and exterior orientation parameters and of the 3D model. Such procedures were originally designed and developed within photogrammetry, but their greatest development and innovation has come from the computer vision community since the late 1990s, together with the SfM method. Reconstructions based on this method have mainly been useful for visualization purposes rather than for photogrammetry and mapping. Thanks to advances in computer technology and performance, a large number of images can now be automatically oriented in an arbitrarily defined coordinate system by different algorithms, often available in open-source software (VisualSFM, Bundler, PMVS2, CMVS, etc.) or in the form of web services (Microsoft Photosynth, Autodesk 123D Catch, My3DScanner, etc.). However, it is important to obtain an assessment of the accuracy and reliability of these automated procedures. This paper presents the results obtained from close-range photogrammetric surveys of a dome, processed with several open-source software packages using the Structure from Motion approach: VisualSFM, OpenDroneMap (ODM) and Regard3D. The photogrammetric surveys were also processed with the commercial software Agisoft PhotoScan.</p><p>For the photogrammetric survey we used a Canon EOS M3 digital camera (24.2 Megapixel, pixel size 3.72 µm).
We also surveyed the dome with a Faro Focus 3D terrestrial laser scanner (TLS). Only one scan was carried out, from ground level, at a resolution setting of ¼ with 3x quality, corresponding to a resolution of 7 mm / 10 m. Both the TLS point cloud and the PhotoScan point cloud were used as references to validate the point clouds produced by VisualSFM, OpenDroneMap and Regard3D. The validation was done using the CloudCompare open-source software.</p>
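The validation step described above amounts to a cloud-to-cloud comparison: for each point of a test cloud, the distance to its nearest neighbour in the reference cloud is computed and summarized. A minimal brute-force sketch follows (CloudCompare itself accelerates the search with an octree; the function names here are illustrative):

```python
import math

def c2c_distances(cloud, reference):
    """For each point in `cloud`, the distance to its nearest neighbour
    in `reference` (brute-force nearest-neighbour search)."""
    return [min(math.dist(p, q) for q in reference) for p in cloud]

def rms(values):
    """Root-mean-square of a list of distances, a common summary
    statistic for cloud-to-cloud validation."""
    return math.sqrt(sum(v * v for v in values) / len(values))
```

On real data the reference cloud (TLS or PhotoScan) would hold millions of points, so a spatial index rather than the brute-force loop above is essential.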


2021 ◽  
Vol 2 (1 (110)) ◽  
pp. 37-43
Author(s):  
Lateef Abd Zaid Qudr

Capturing and reconstructing three-dimensional (3D) information about an object in its environment is a significant challenge. In this work, we discuss 3D laser scanning techniques, which can obtain a high density of data points accurately and quickly. This work builds on previous developments in the area to propose a cost-effective system based on the pinhole projection concept and commercial hardware components, taking into account the currently achievable accuracy. A laser-line auto-scanning system was designed to perform close-range 3D reconstructions of home/office objects with high accuracy and resolution. The system changes the laser plane direction with a microcontroller to perform automatic scanning and obtain continuous laser stripes for 3D object reconstruction. The system parameters were calibrated with MATLAB's built-in camera calibration toolbox to find the camera focal length and optical center constraints. The pinhole projection equation was defined to optimize the prototype's rotation-axis equation. The developed 3D environmental laser scanner with pinhole projection proved the system's effectiveness on close-range stationary objects, achieving high resolution and accuracy with a measurement error in the range of 0.05–0.25 mm. The 3D point cloud processing of the MATLAB computer vision toolbox was employed to show the 3D object reconstruction and to perform the camera calibration, which improves efficiency and greatly simplifies the calibration method. Calibration error is the main error source in the measurements, and the errors of the actual measurements were found to be influenced by several environmental parameters. The presented platform can be equipped with a system of lower power consumption and a smaller, more compact size.
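The core geometry of such a laser-line scanner is the intersection of a back-projected pixel ray (pinhole model) with the known laser plane. A minimal sketch under simplified assumptions (camera at the origin, units in millimetres, illustrative parameter names; not the authors' exact formulation):

```python
def pixel_ray(u, v, f, cx, cy):
    """Back-project pixel (u, v) into a viewing ray from the projection
    centre, using the pinhole model with focal length f (in pixels)
    and principal point (cx, cy)."""
    return ((u - cx) / f, (v - cy) / f, 1.0)

def intersect_laser_plane(ray, n, d):
    """Intersect the ray t * ray with the laser plane n . X = d,
    recovering the 3D point imaged on the laser stripe."""
    denom = sum(a * b for a, b in zip(n, ray))
    t = d / denom
    return tuple(t * a for a in ray)
```

As the microcontroller rotates the laser plane, each scan position yields a new plane (n, d), and every stripe pixel maps to one 3D point, building up the point cloud stripe by stripe.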


2021 ◽  
Vol 318 ◽  
pp. 04005
Author(s):  
Tariq N. Ataiwe ◽  
Israa Hatem ◽  
Hisham M. J. Al Sharaa

Smartphones have recently expanded the potential of low-cost close-range photogrammetry for 3D modeling. They enable the simultaneous collection of large amounts of data for a variety of requirements. It is possible to calculate image orientation elements and triangulated coordinates in phases, as in relative and absolute image orientation. This study demonstrates a photogrammetric 3D reconstruction approach that runs on tablets and smartphones. Images were taken with the camera of an iPhone 6 smartphone and then calibrated automatically on a PC using a standard calibration model for photogrammetry and computer vision, based on the Agisoft Lens add-on embedded in the Agisoft software and on the MATLAB camera calibration toolbox, using an oriented set of images of a chessboard pattern and image matching to generate a large point cloud. The camera calibration results indicate that the calibration processing routines completed without error, and that the accuracy of the estimated IOPs was satisfactory compared with non-metric digital cameras, with Agisoft Lens being more accurate in terms of standard error. For the 3D model, 435 camera stations were used, of which 428 were aligned in two photogrammetric software packages, Agisoft PhotoScan and LPS. In LPS, 10 tie points and 4 control points were used to estimate the EOPs; in Agisoft PhotoScan, 135,605 tie points were regenerated, a dense cloud of 3,716,912 points was generated, and a 3D model with 316,253 faces was produced. After processing, the tiled model was generated (6 levels, 1.25 cm/pix), the generated DEM measures 2136 × 1774 pixels, and the generated high-resolution orthomosaic measures 5520 × 4494 pixels at 4.47 cm/pix. For the accuracy assessment, Xerr = 0.292 m, Yerr = 0.38577 m, Zerr = 0.2889 m, and the total RMS = 0.563 m in the estimated locations of the exterior orientation parameters.
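As a quick arithmetic check, the reported total RMS is the quadrature sum of the per-axis errors. A one-function sketch (the function name is illustrative):

```python
import math

def total_rms(ex, ey, ez):
    """Combine per-axis RMS errors into a total positional RMS
    (quadrature sum), as reported for the exterior orientation."""
    return math.sqrt(ex ** 2 + ey ** 2 + ez ** 2)
```

With the abstract's values, total_rms(0.292, 0.38577, 0.2889) ≈ 0.563 m, in agreement with the stated total.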


Author(s):  
L. Barazzetti ◽  
R. Brumana ◽  
D. Oreni ◽  
M. Previtali ◽  
F. Roncoroni

This paper presents a photogrammetric methodology for true-orthophoto generation with images acquired from UAV platforms. The method is an automated multistep workflow made up of three main parts: (i) image orientation through feature-based matching and collinearity equations / bundle block adjustment, (ii) dense matching with correlation techniques able to manage multiple images, and (iii) true-orthophoto mapping for 3D model texturing. It allows automated data processing of sparse blocks of convergent images in order to obtain a final true-orthophoto in which problems such as self-occlusions, ghost effects, and multiple texture assignments are taken into consideration. <br><br> The different algorithms are illustrated and discussed along with a real case study concerning a UAV flight over the Basilica di Santa Maria di Collemaggio in L'Aquila (Italy). The final result is a rigorous true-orthophoto used to inspect the roof of the Basilica, which was seriously damaged by the earthquake in 2009.
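The occlusion handling that separates a true orthophoto from a conventional one is, at its core, a depth-buffer test over the orthophoto grid. A minimal sketch under simplified assumptions (nadir ortho grid, "highest point wins" per cell, illustrative names; not the paper's full algorithm):

```python
def ortho_zbuffer(points, cell_size):
    """Keep, for every orthophoto cell, only the highest 3D point:
    a simple Z-buffer that suppresses self-occluded surfaces so a
    hidden facade never 'ghosts' through the roof above it."""
    buf = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        if key not in buf or z > buf[key][2]:
            buf[key] = (x, y, z)
    return buf
```

In a full pipeline, each surviving cell is then textured from the image in which its surface point is actually visible, which is what prevents multiple texture assignments.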


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5934
Author(s):  
Xiao Li ◽  
Wei Li ◽  
Xin’an Yuan ◽  
Xiaokang Yin ◽  
Xin Ma

Lens distortion is closely related to the spatial position within the depth of field (DoF), especially in close-range photography. The accurate characterization and precise calibration of DoF-dependent distortion are very important for improving the accuracy of close-range vision measurements. In this paper, to meet the needs of short-distance and small-focal-length photography, a DoF-dependent, equal-partition-based lens distortion modeling and calibration method is proposed. Firstly, considering the direction along the optical axis, a DoF-dependent yet focusing-state-independent distortion model is proposed. With this method, manual adjustment of the focus and zoom rings is avoided, thus eliminating human errors. Secondly, considering the direction perpendicular to the optical axis, to solve the problem of insufficient distortion representation caused by using only one set of coefficients, a 2D-to-3D equal-increment partitioning method for lens distortion is proposed. Accurate characterization of DoF-dependent distortion is thus realized by fusing the distortion partitioning method and the DoF distortion model. Lastly, a calibration control field is designed. After extracting line segments within a partition, the decoupled calibration of distortion parameters and other camera model parameters is realized. Experimental results show that the maximum/average projection and angular reconstruction errors of the equal-increment-partition-based DoF distortion model are 0.11 pixels/0.05 pixels and 0.013°/0.011°, respectively. This demonstrates the validity of the lens distortion model and calibration method proposed in this paper.
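To make the idea of depth-dependent distortion concrete, the sketch below pairs the standard polynomial radial model with a linear interpolation of a coefficient between two calibrated object distances. This is an illustrative stand-in, not the paper's exact formulation; all names and the interpolation scheme are assumptions:

```python
def radial_distort(x, y, k1, k2):
    """Standard radial model: distorted = undistorted * (1 + k1*r^2 + k2*r^4),
    with (x, y) in normalized image coordinates."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * s, y * s

def dof_coeff(z, z1, k_at_z1, z2, k_at_z2):
    """Linearly interpolate a distortion coefficient between values
    calibrated at two object distances z1 and z2, so distortion can
    vary with depth of field instead of being a single global set."""
    w = (z - z1) / (z2 - z1)
    return (1.0 - w) * k_at_z1 + w * k_at_z2
```

The partitioning idea in the paper goes further, assigning separate coefficient sets to equal-increment regions of the image/object space, but the depth interpolation above captures why a single coefficient set is insufficient.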

