Subpixel location of discrete target images in close-range camera calibration: a novel approach

Author(s):  
R. Anchini ◽  
J. A. Beraldin ◽  
C. Liguori

2015 ◽  
Vol 77 (26) ◽  
Author(s):  
Ahmad Razali Yusoff ◽  
Mohd Farid Mohd Ariff ◽  
Khairulnizam M. Idris ◽  
Zulkepli Majid ◽  
Albert K. Chong

All photogrammetric applications, such as mapping with a camera-equipped Unmanned Aerial Vehicle (UAV), need accurate camera parameters. Simple camera calibration is commonly used to obtain these parameter values, and in aerial mapping the interior parameters from close-range camera calibration are used to correct image errors. However, changes in the interior calibration parameters must be considered when mapping at different heights. This research therefore analyses the changes in camera parameters obtained from a calibration field at three heights (camera distances of 15 m, 25 m and 55 m) and from a laboratory calibration at a camera distance of 1.4 m, as is commonly used. The results show changes in the camera parameter values; hence, the calibration parameters of a camera should be treated as dependent on the distance (height) at which the calibration is performed.
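As a hedged illustration of the interior-orientation step this abstract refers to, the sketch below estimates focal length, principal point and lens distortion from chessboard images with OpenCV. The board dimensions, square size and image folder are assumptions for illustration, not details taken from the study.

```python
# Minimal sketch (not the authors' pipeline): estimating interior orientation
# parameters (focal length, principal point, lens distortion) with OpenCV from
# chessboard images. Paths, board size and square size are illustrative only.
import glob
import cv2
import numpy as np

BOARD_SIZE = (9, 6)      # inner corners of the chessboard (assumed)
SQUARE_SIZE = 0.025      # metres, assumed target spacing

# 3D object points of the planar calibration pattern (Z = 0)
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calib_15m/*.jpg"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if not found:
        continue
    # Refine corner positions to subpixel accuracy
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# K holds focal length and principal point; dist holds radial/tangential terms.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Interior parameters (K):\n", K)
print("Distortion coefficients:", dist.ravel())
```

Repeating the same procedure for image sets taken at different camera-to-target distances would expose the kind of parameter changes the abstract reports.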


1993 ◽  
Author(s):  
J. A. Beraldin ◽  
Sabry F. El-Hakim ◽  
Luc Cournoyer

Author(s):  
C. Stamatopoulos ◽  
C. S. Fraser

Automated close-range photogrammetric network orientation and camera calibration has traditionally been associated with the use of coded targets in the object space to allow for an initial relative orientation (RO) and subsequent spatial resection of the images. However, over the last decade, advances coming mainly from the computer vision (CV) community have allowed for fully automated orientation via feature-based matching techniques. Such methodologies offer a number of advantages for various types of applications, as well as for cases where the use of artificial targets might not be possible or preferable, for example when attempting calibration from low-level aerial imagery, as with UAVs, or when calibrating long-focal-length lenses where small image scales call for inconveniently large coded targets. While there are now a number of CV-based algorithms for multi-image orientation within narrow-baseline networks, with accompanying open-source software, from a photogrammetric standpoint the results are typically disappointing, as the metric integrity of the resulting models is generally poor, or even unknown. The objective addressed in this paper is target-free automatic multi-image orientation, maintaining metric integrity, within networks that incorporate wide-baseline imagery. The focus is both on the development of a methodology that overcomes shortcomings present in current CV algorithms and on the photogrammetric priorities and requirements that exist in current processing pipelines. This paper also reports on the application of the proposed methodology to automated target-free camera self-calibration and discusses the process via practical examples.
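The standard CV building block this abstract alludes to, feature-based matching followed by relative orientation of an image pair, can be sketched with OpenCV as below. This is an illustrative sketch, not the methodology proposed in the paper; the camera matrix K, image paths and ratio-test threshold are assumptions.

```python
# Minimal sketch of feature-based relative orientation (RO) of one image pair.
# Not the paper's algorithm; K is assumed known from a prior calibration.
import cv2
import numpy as np

def relative_orientation(img1_path, img2_path, K):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    # Detect and describe features (SIFT), then match with a ratio test.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in raw
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Essential matrix with RANSAC, then decompose into a rotation and a
    # translation direction: the relative orientation of the pair (up to scale).
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t

# Hypothetical usage with an assumed camera matrix and wide-baseline pair:
# K = np.array([[3500, 0, 2000], [0, 3500, 1500], [0, 0, 1]], float)
# R, t = relative_orientation("left.jpg", "right.jpg", K)
```

In a full network, pairwise orientations of this kind would feed a bundle adjustment; the paper's contribution concerns keeping that step metrically sound for wide-baseline imagery.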


Author(s):  
G. Vacca

Abstract. In the photogrammetric process of the 3D reconstruction of an object or a building, multi-image orientation is one of the most important tasks and often includes simultaneous camera calibration. The accuracy of image orientation and camera calibration significantly affects the quality and accuracy of all subsequent photogrammetric processes, such as determining the spatial coordinates of individual points or 3D modelling. In computer vision, a full-field analysis procedure known as Structure from Motion (SfM) is used, in which the camera's interior and exterior orientation parameters and the 3D model are determined simultaneously. These procedures were originally designed and developed within photogrammetry, but their greatest development and innovation came from computer vision from the late 1990s onwards, together with the SfM method; reconstructions based on this method were useful for visualisation purposes rather than for photogrammetry and mapping. Thanks to advances in computer technology and performance, a large number of images can now be automatically oriented in an arbitrarily defined coordinate system by different algorithms, often available in open source software (VisualSFM, Bundler, PMVS2, CMVS, etc.) or in the form of web services (Microsoft Photosynth, Autodesk 123D Catch, My3DScanner, etc.). However, it is important to assess the accuracy and reliability of these automated procedures. This paper presents the results obtained from close-range photogrammetric surveys of a dome, processed with several open source packages using the Structure from Motion approach: VisualSFM, OpenDroneMap (ODM) and Regard3D. The photogrammetric surveys were also processed with the commercial software Agisoft PhotoScan.

For the photogrammetric survey we used a Canon EOS M3 digital camera (24.2 megapixels, 3.72 µm pixel size). We also surveyed the dome with a Faro Focus 3D terrestrial laser scanner (TLS). Only one scan was carried out, from ground level, at a resolution setting of ¼ with 3x quality, corresponding to a resolution of 7 mm at 10 m. Both the TLS point cloud and the PhotoScan point cloud were used as references to validate the point clouds produced by VisualSFM, OpenDroneMap and Regard3D. The validation was done using the open source software CloudCompare.
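The validation described above was performed in CloudCompare. As a hedged illustration of the same kind of cloud-to-cloud comparison, the sketch below computes nearest-neighbour distances between an SfM point cloud and the TLS reference using the Open3D library (a scriptable stand-in, not the tool used in the paper). File names, the ICP correspondence distance and the assumption of rough pre-registration are illustrative.

```python
# Minimal sketch of cloud-to-cloud validation of an SfM point cloud against a
# TLS reference, using Open3D instead of CloudCompare. File names are placeholders.
import numpy as np
import open3d as o3d

reference = o3d.io.read_point_cloud("tls_dome.ply")        # TLS reference scan
candidate = o3d.io.read_point_cloud("visualsfm_dome.ply")  # SfM cloud under test

# Refine alignment with point-to-point ICP (assumes the clouds are already
# roughly registered and expressed in the same units).
icp = o3d.pipelines.registration.registration_icp(
    candidate, reference, 0.05,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
candidate.transform(icp.transformation)

# Cloud-to-cloud distances: for every candidate point, the distance to the
# nearest reference point, summarised as mean and RMS.
d = np.asarray(candidate.compute_point_cloud_distance(reference))
print(f"mean C2C distance: {d.mean():.4f} m, RMS: {np.sqrt((d**2).mean()):.4f} m")
```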

