Modeling Hyperhemispherical Points and Calibrating a Dual-Fish-Eye System for Close-Range Applications

2021 ◽  
Vol 87 (5) ◽  
pp. 375-384
Author(s):  
Letícia Ferrari Castanheiro ◽  
Antonio Maria Garcia Tommaselli ◽  
Adilson Berveglieri ◽  
Mariana Batista Campos ◽  
José Marcato Junior

Omnidirectional systems composed of two hyperhemispherical lenses (dual-fish-eye systems) are gaining popularity, but only a few works have studied suitable models for hyperhemispherical lenses and dual-fish-eye calibration. In addition, the effects of using points in the hyperhemispherical field of view in photogrammetric procedures have not been addressed. This article presents a comparative analysis of the fish-eye models (equidistant, equisolid-angle, stereographic, and orthogonal) for hyperhemispherical-lens and dual-fish-eye calibration techniques. The effects of adding points beyond the 180° field of view in dual-fish-eye calibration using stability constraints on the relative orientation parameters are also assessed. The experiments were performed with the Ricoh Theta dual-fish-eye system, which is composed of two fish-eye lenses with a field of view of approximately 190° each. The equisolid-angle model presented the best results in the simultaneous calibration experiments. An accuracy of approximately one pixel, in object-space units, was achieved, showing the potential of the proposed approach for close-range applications.
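
For context, the four fish-eye mapping functions compared here relate the incidence angle θ to the radial distance r in the image; a minimal sketch (the focal length value below is a placeholder, not the Ricoh Theta calibration):

```python
import numpy as np

def fisheye_radius(theta, f, model):
    """Radial distance r on the sensor for incidence angle theta (rad).

    Classical fisheye mappings; theta may exceed pi/2 for
    hyperhemispherical lenses (FOV > 180 deg)."""
    if model == "equidistant":
        return f * theta
    if model == "equisolid":
        return 2.0 * f * np.sin(theta / 2.0)
    if model == "stereographic":
        return 2.0 * f * np.tan(theta / 2.0)
    if model == "orthogonal":
        return f * np.sin(theta)  # non-injective beyond 90 deg
    raise ValueError(model)

# Compare the models at a 95 deg incidence angle (beyond a hemisphere),
# with an assumed focal length of 1.4 mm.
theta = np.radians(95.0)
for m in ("equidistant", "equisolid", "stereographic", "orthogonal"):
    print(m, fisheye_radius(theta, 1.4, m))
```

Note that the orthogonal mapping r = f·sin θ folds back on itself past 90°, which is one reason hyperhemispherical points stress the choice of model.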

Author(s):  
B. Elnashef ◽  
S. Filin

Abstract. While accuracy, detail, and limited time on site make photogrammetry a valuable means for underwater mapping, the establishment of reference control networks in such settings is oftentimes difficult. In that respect, the coplanarity constraint becomes a valuable solution, as it requires neither knowledge of object-space coordinates nor setting up a reference control network. Nonetheless, imaging in such domains is subject to non-linear and depth-dependent distortions, caused by refractive media that alter the standard single-viewpoint geometry. Accordingly, the coplanarity relation as formulated for the in-air case does not hold in such environments, and the methods proposed thus far for geometrical modeling of its effect require knowledge of object-space quantities. In this paper, we propose a geometrically driven approach that fulfills the coplanarity condition and thereby requires no knowledge of object-space data. We also study a linear model for establishing this constraint. Clearly, a linear form requires neither first approximations nor an iterative convergence scheme. Such an approach may prove useful not only for object-space reconstruction but also as a preparatory step for bundle block adjustment and for outlier detection, all of which are key features of photogrammetric practice. Results show that no unique setup is needed for estimating the relative orientation parameters using the model and that high levels of accuracy can be achieved.
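
For reference, the in-air coplanarity condition that this work generalizes can be written as x2^T [t]x R x1 = 0; a minimal numeric sketch (all values illustrative):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def coplanarity_residual(x1, x2, R, t):
    """In-air coplanarity: x2^T [t]_x R x1 = 0 for ideal rays.

    x1, x2 are unit ray directions in each camera frame; R, t are the
    relative orientation. Refraction bends the rays, so underwater this
    residual is generally nonzero unless the interface is modelled."""
    return x2 @ (skew(t) @ (R @ x1))

# Illustrative check with an exact in-air configuration.
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])               # baseline along X
X = np.array([0.3, -0.2, 5.0])              # object point
x1 = X / np.linalg.norm(X)                  # ray in camera 1
x2 = (X - t) / np.linalg.norm(X - t)        # ray in camera 2
print(coplanarity_residual(x1, x2, R, t))   # ~0 (coplanar rays)
```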


2015 ◽  
Vol 21 (3) ◽  
pp. 637-651 ◽  
Author(s):  
José Marcato Junior ◽  
Marcus Vinícius Antunes de Moraes ◽  
Antonio Maria Garcia Tommaselli

Abstract: Fisheye lens cameras increase the field of view (FOV), and consequently they have been widely used in applications such as robotics. The use of this type of camera in close-range photogrammetry for high-accuracy applications requires rigorous calibration. The main aim of this work is to present the calibration results of a Fuji Finepix S3PRO camera with a Samyang 8 mm fisheye lens using rigorous mathematical models. Mathematical models based on the perspective, stereographic, equidistant, orthogonal, and equisolid-angle projections were implemented and used in the experiments. Fisheye lenses are generally designed following one of the last four models, and the Bower-Samyang 8 mm lens is based on the stereographic projection. These models were used in combination with symmetric radial, decentering, and affinity distortion models. Experiments were performed to verify which set of IOPs (interior orientation parameters) best describes the camera's inner geometry. The collinearity mathematical model, which is based on the perspective projection, presented the least accurate results, as expected, because fisheye lenses are not designed following the perspective projection. The stereographic, equidistant, orthogonal, and equisolid-angle projections presented similar results, even though the Bower-Samyang fisheye lens was built based on the stereographic projection. The experimental results also demonstrated a low correlation between IOPs and EOPs (exterior orientation parameters) for the Bower-Samyang lens.
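
The additional distortion terms mentioned (symmetric radial, decentering, and affinity) are commonly combined with these projections in a Brown-style form; a sketch with placeholder coefficients, not the calibrated values from the paper:

```python
import numpy as np

def apply_distortion(x, y, k1, k2, k3, p1, p2, b1, b2):
    """Symmetric radial + decentering + affinity corrections.

    (x, y) are image coordinates reduced to the principal point.
    k1..k3: radial; p1, p2: decentering; b1, b2: affinity terms."""
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    dx = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y + b1 * x + b2 * y
    dy = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return x + dx, y + dy

# Illustrative coefficients only.
print(apply_distortion(1.2, -0.8, 1e-4, -1e-7, 0.0, 1e-5, -2e-5, 1e-4, 5e-5))
```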


Author(s):  
O. Kahmen ◽  
R. Rofallski ◽  
N. Conen ◽  
T. Luhmann

Abstract. In multimedia photogrammetry, multi-camera systems often provide scale via a calibrated relative orientation. Camera calibration via bundle adjustment is a well-established standard procedure in single-medium photogrammetry. When using standard software and applying the collinearity equations in multimedia photogrammetry, the refractive interfaces are modelled in an implicit form. This contribution analyses different calibration strategies for bundle-invariant interfaces. To evaluate the effects of implicitly modelling the refractive effects within a bundle adjustment, synthetic datasets are simulated. Contrary to many publications, systematic effects in the exterior orientations can be verified with simulated data. The behaviour of interior, exterior, and relative orientation parameters is analysed using error-free synthetic datasets. The relative orientation of a stereo camera shows systematic effects when the angle of convergence varies and when the synthetic interface is set up at different distances from the camera. It becomes clear that in most cases implicit modelling is not suitable for multimedia photogrammetry. An explicit model of the refractive interfaces is implemented in a bundle adjustment. This strict model is analysed and compared with the implicit form regarding systematic effects in the orientation parameters as well as errors in object space. In a real experiment, the discrepancies between the implicit form using standard software and the explicit modelling using our own implementation are quantified. It is highly advisable to model the interfaces strictly, since implicit modelling might lead to relevant errors in object space.
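
Explicit modelling of a bundle-invariant interface amounts to refracting each image ray at the interface via Snell's law; a minimal vector-form sketch (flat port, assumed refractive indices for air and water, illustrative geometry):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n
    (pointing toward the incoming ray), from medium n1 into n2.
    Vector form of Snell's law; returns None on total reflection."""
    cos_i = -np.dot(n, d)
    eta = n1 / n2
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Illustrative: a ray leaving the camera through a flat port into water.
d = np.array([0.2, 0.0, 1.0])
d /= np.linalg.norm(d)
n = np.array([0.0, 0.0, -1.0])     # interface normal toward the camera
print(refract(d, n, 1.0, 1.33))    # ray bent toward the normal
```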


2021 ◽  
Vol 13 (9) ◽  
pp. 1852
Author(s):  
Yiren Wang ◽  
Dong Liu ◽  
Wanyi Xie ◽  
Ming Yang ◽  
Zhenyu Gao ◽  
...  

The formation and evolution of clouds are associated with thermodynamic and microphysical processes. Previous studies have collected images using ground-based cloud observation equipment to provide important information on cloud characteristics. However, most of this equipment cannot perform continuous observations during the day and night, and its field of view (FOV) is also limited. To address these issues, this work proposes a day-and-night cloud detection approach integrated into a self-made thermal-infrared (TIR) all-sky-view camera. The TIR camera consists of a high-resolution thermal microbolometer array and a fish-eye lens with a FOV larger than 160°. In addition, a detection scheme was designed to directly subtract the atmospheric TIR emission from the entire infrared image over this large FOV before cloud recognition. The performance of this scheme was validated by comparing the cloud fractions retrieved from the infrared channel with those from the visible channel and from manual observation. The results indicated that the instrument obtains accurate cloud fractions from the observed infrared images, and that the TIR all-sky-view camera developed in this work is well suited for long-term, continuous cloud observation.
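
A toy sketch of the cloud-recognition step described above, under assumed details (the clear-sky brightness-temperature model, the fixed kelvin threshold, and the footprint mask are illustrative choices, not the paper's scheme):

```python
import numpy as np

def cloud_fraction(tir_image, clear_sky_emission, sky_mask, threshold=2.0):
    """Cloud fraction from a TIR all-sky image.

    tir_image and clear_sky_emission are brightness-temperature arrays;
    sky_mask is True inside the fisheye footprint. A pixel is called
    cloudy when its residual over the clear-sky model exceeds the
    threshold (in kelvin)."""
    residual = tir_image - clear_sky_emission
    cloudy = (residual > threshold) & sky_mask
    return cloudy.sum() / sky_mask.sum()

# Tiny synthetic example: a warm (cloudy) patch over a cold clear sky.
sky = np.full((100, 100), 230.0)           # clear-sky model, K
obs = sky.copy()
obs[20:40, 30:60] += 15.0                  # cloud raises brightness temp
mask = np.ones_like(obs, dtype=bool)
print(cloud_fraction(obs, sky, mask))      # 0.06
```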


Author(s):  
S. Rhee ◽  
T. Kim

3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of the UAV images. In this paper, we apply image matching to generate local point clouds over a pair or group of images and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object space-based matching technique and an image space-based matching technique, and compared the performance of the two. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space; for each height, the corresponding image point is calculated and similarity is measured by grey-level correlation, as sketched below. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining local match regions in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree from the tiepoints. The experiments confirmed that, through image matching and global optimization, 3D point clouds were generated successfully. However, the results also revealed some limitations. In the case of image space-based matching, we observed some gaps in the 3D point clouds. In the case of object space-based matching, we observed more blunders than with image space-based matching, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing; we will further test our approach with more precise orientation parameters.
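
A sketch of the object space-based matching described above, with project_to_image standing in for the (unspecified) orientation and sensor model:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_height(xy, heights, img_ref, img_src, project_to_image, patch=5):
    """For a fixed horizontal position xy, test candidate heights Z:
    project (x, y, Z) into both images and keep the Z with the highest
    grey-level correlation. project_to_image(img_id, X) -> (row, col)
    is a stand-in for the sensor model, which is not specified here."""
    h = patch // 2
    scores = []
    for z in heights:
        r1, c1 = project_to_image(0, (*xy, z))
        r2, c2 = project_to_image(1, (*xy, z))
        p1 = img_ref[r1 - h:r1 + h + 1, c1 - h:c1 + h + 1]
        p2 = img_src[r2 - h:r2 + h + 1, c2 - h:c2 + h + 1]
        scores.append(ncc(p1, p2))
    return heights[int(np.argmax(scores))]
```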


2020 ◽  
Vol 12 (14) ◽  
pp. 2268
Author(s):  
Tian Zhou ◽  
Seyyed Meghdad Hasheminasab ◽  
Radhika Ravi ◽  
Ayman Habib

Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.
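
The second step (identifying LiDAR control points via iterative plane fitting) might look like the following sketch, with the search radius, planarity tolerance, and iteration count as assumed parameters:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points; returns (centroid, unit normal)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]

def lidar_control_point(sfm_point, lidar, radius=1.0, tol=0.05, iters=3):
    """Project an image-based sparse point onto a locally fitted LiDAR
    plane, trimming off-plane LiDAR returns at each iteration.
    Returns None when the neighborhood is not planar enough."""
    near = lidar[np.linalg.norm(lidar - sfm_point, axis=1) < radius]
    for _ in range(iters):
        if len(near) < 10:
            return None
        c, n = fit_plane(near)
        d = np.abs((near - c) @ n)
        keep = d < tol
        if keep.all():
            break
        near = near[keep]
    # Orthogonal projection of the SfM point onto the final plane.
    return sfm_point - ((sfm_point - c) @ n) * n
```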


2003 ◽  
Author(s):  
Joseph L. Moorhouse ◽  
John J. Barnett ◽  
Karim Djotni ◽  
Christopher L. Hepplewhite ◽  
Christopher W. P. Palmer ◽  
...  

2017 ◽  
Vol 43 (2) ◽  
pp. 66-72 ◽  
Author(s):  
Khalid L. A. EL-ASHMAWY

The present work tests the suitability of the digital cameras of smartphones for close-range photogrammetry applications. For this purpose, the cameras of two smartphones, the Lumia 535 and the Lumia 950 XL, were used; their resolutions are 5 and 20 Mpixels, respectively. The tests consist of (a) self-calibration of the two cameras, (b) the measurement of vertical deflections by close-range photogrammetry with the two smartphone cameras, by theodolite intersection with the least squares technique (LST), and by linear variable displacement transducers (LVDTs), and (c) assessment of the accuracy of the photogrammetric determination of object-space coordinates. The results obtained with the Lumia 950 XL are much better than those with the Lumia 535, and are better than or comparable to the results of theodolite intersection with LST. Finally, it can be stated that the digital cameras of smartphones are suitable for close-range photogrammetry applications in terms of accuracy, cost, and flexibility.
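
The abstract does not detail the self-calibration procedure; as a stand-in, a minimal smartphone-camera calibration sketch using OpenCV's planar chessboard routine (the filenames and the 9x6 board are assumptions):

```python
import cv2
import numpy as np

pattern = (9, 6)                       # inner corners of the chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for name in ["img01.jpg", "img02.jpg", "img03.jpg"]:   # placeholder names
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)

# Recover the camera matrix and distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                         gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Camera matrix:\n", K)
```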


Author(s):  
G. Blott ◽  
C. Heipke

This work presents an approach to person re-identification that exploits bifocal stereo cameras. Existing monocular person re-identification approaches show a decreasing working distance when the image resolution is increased to obtain higher re-identification performance. We propose a novel 3D multipath bifocal approach, combining a rectilinear lens with a longer focal length for long ranges and a fisheye lens with a shorter focal length for the near range. The person re-identification performance is at least on par with 2D re-identification approaches, but the working distance is increased, and on average 10% higher re-identification performance is achieved in the overlapping field of view compared to a single camera. In addition, 3D information from the overlapping field of view is exploited to resolve potential 2D ambiguities.


Author(s):  
C. Stamatopoulos ◽  
C. S. Fraser

Automated close-range photogrammetric network orientation and camera calibration have traditionally been associated with the use of coded targets in the object space to allow for an initial relative orientation (RO) and subsequent spatial resection of the images. Over the last decade, however, advances coming mainly from the computer vision (CV) community have allowed for fully automated orientation via feature-based matching techniques. There are a number of advantages in such methodologies for various types of applications, as well as for cases where the use of artificial targets might not be possible or preferable, for example when attempting calibration from low-level aerial imagery, as with UAVs, or when calibrating long-focal-length lenses, where small image scales call for inconveniently large coded targets. While there are now a number of CV-based algorithms for multi-image orientation within narrow-baseline networks, with accompanying open-source software, from a photogrammetric standpoint the results are typically disappointing, as the metric integrity of the resulting models is generally poor, or even unknown. The objective addressed in this paper is target-free automatic multi-image orientation, maintaining metric integrity, within networks that incorporate wide-baseline imagery. The focus is on the development of a methodology that overcomes the shortcomings present in current CV algorithms, and on the photogrammetric priorities and requirements of current processing pipelines. The paper also reports on the application of the proposed methodology to automated target-free camera self-calibration and discusses the process via practical examples.
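
A minimal sketch of the CV-style feature-based relative orientation pipeline that such methodologies build on, using OpenCV's ORB features and essential-matrix routines (a generic stand-in, not the authors' algorithm; the intrinsics and filenames are placeholders):

```python
import cv2
import numpy as np

K = np.array([[1200.0, 0, 640], [0, 1200.0, 480], [0, 0, 1]])  # assumed

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe features in both images.
orb = cv2.ORB_create(4000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-check to reject ambiguities.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(d1, d2)

p1 = np.float32([k1[m.queryIdx].pt for m in matches])
p2 = np.float32([k2[m.trainIdx].pt for m in matches])

# Robust relative orientation: essential matrix via RANSAC, then
# decomposition into rotation and baseline direction.
E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
print("Relative rotation:\n", R, "\nBaseline direction:", t.ravel())
```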

