ON SCALE DEFINITION WITHIN CALIBRATION OF MULTI-CAMERA SYSTEMS IN MULTIMEDIA PHOTOGRAMMETRY

Author(s):  
O. Kahmen, R. Rofallski, N. Conen, T. Luhmann

Abstract. In multimedia photogrammetry, multi-camera systems often provide scale by a calibrated relative orientation. Camera calibration via bundle adjustment is a well-established standard procedure in single-medium photogrammetry. When standard software and the collinearity equations are used in multimedia photogrammetry, the refractive interfaces are modelled in an implicit form. This contribution analyses different calibration strategies for bundle-invariant interfaces. To evaluate the effects of implicitly modelling the refractive effects within a bundle adjustment, synthetic datasets are simulated. Contrary to many publications, systematic effects on the exterior orientations can be verified with simulated data. The behaviour of interior, exterior and relative orientation parameters is analysed using error-free synthetic datasets. The relative orientation of a stereo camera shows systematic effects when the angle of convergence varies and when the synthetic interface is set up at different distances to the camera. It becomes clear that in most cases the implicit modelling is not suitable for multimedia photogrammetry. An explicit modelling of the refractive interfaces is implemented in a bundle adjustment. This strict model is analysed and compared with the implicit form regarding systematic effects in orientation parameters as well as errors in object space. In a real experiment, the discrepancies between the implicit form using standard software and the explicit modelling using our own implementation are quantified. It is highly advisable to model the interfaces strictly, since the implicit modelling might lead to relevant errors in object space.
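The explicit (strict) modelling referred to above traces each image ray through the refractive interface instead of absorbing refraction into the interior orientation. As a minimal sketch of the underlying geometry, the following Python snippet refracts a ray at a single planar air/water interface using the vector form of Snell's law; the interface normal, refractive indices and the example angle are assumed values, not taken from the paper.

```python
import numpy as np

def refract_ray(d, n, n1=1.0, n2=1.335):
    """Refract a unit ray direction d at a planar interface with unit normal n
    (pointing back towards the incoming medium), using vector Snell's law."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = n1 / n2
    cos_i = -np.dot(n, d)                  # cosine of the incidence angle
    sin_t_sq = eta**2 * (1.0 - cos_i**2)
    if sin_t_sq > 1.0:
        return None                        # total internal reflection
    cos_t = np.sqrt(1.0 - sin_t_sq)
    return eta * d + (eta * cos_i - cos_t) * n

# Example: a ray leaving the projection centre at 30 deg to the interface normal
d_air = np.array([np.sin(np.radians(30.0)), 0.0, np.cos(np.radians(30.0))])
n_if = np.array([0.0, 0.0, -1.0])          # interface normal, pointing towards the camera
print(refract_ray(d_air, n_if))            # direction bent towards the normal in water
```

In a strict multimedia bundle adjustment, this refracted direction (possibly through several interfaces, e.g. air-glass-water) replaces the straight ray of the collinearity model before the intersection with the object point.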

2020, Vol. 12 (12), pp. 2057
Author(s):
Oliver Kahmen, Robin Rofallski, Thomas Luhmann

Camera calibration via bundle adjustment is a well-established standard procedure in single-medium photogrammetry. When standard software and the collinearity equations are used in multimedia photogrammetry, the effects of refractive interfaces are compensated in an implicit form, i.e. by the usual parameters of interior orientation. This contribution analyses different calibration strategies for planar bundle-invariant interfaces. To evaluate the effects of implicitly modelling the refractive effects within bundle adjustment, synthetic error-free datasets are simulated. The behaviour of interior, exterior, and relative orientation parameters is analysed using synthetic datasets free of underwater imaging effects. A shift of the camera positions of 0.2% of the acquisition distance along the optical axis can be observed. The relative orientation of a stereo camera shows systematic effects when the angle of convergence varies; the stereo baseline increases by 1% at 25° convergence. Furthermore, the interface is set up at different distances to the camera. When the interface is at 50% distance, assuming a parallel camera setup, the stereo baseline also increases by 1%. It becomes clear that in most cases the implicit modelling is not suitable for multimedia photogrammetry due to geometrical errors (scaling) and absolute positioning errors. Explicit modelling of the refractive interfaces is implemented in a bundle adjustment and is also used to analyse calibration parameters and deviations in object space. Real experiments show that it is difficult to separate the effects of implicit modelling, since other effects, such as poor image measurements, affect the final result. However, trends can be seen, and deviations are quantified.
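The scaling errors reported above arise because a flat refractive interface makes the effective image scale depend on both the viewing angle and the object distance, which a purely implicit (interior-orientation only) calibration cannot reproduce. The short sketch below illustrates this with an assumed camera/port geometry; all numeric values are illustrative and not taken from the paper.

```python
import numpy as np

c, d0, n_w = 0.008, 0.05, 1.335   # principal distance [m], port distance [m], n(water) -- assumed

def scale_factor(i_deg, Z):
    """Ratio between the refracted image radius and the radius an in-air
    pinhole model would predict, for a flat port perpendicular to the axis."""
    i = np.radians(i_deg)
    t = np.arcsin(np.sin(i) / n_w)            # Snell's law at the interface
    R = d0 * np.tan(i) + Z * np.tan(t)        # lateral offset of the object point
    return (c * np.tan(i)) / (c * R / (d0 + Z))

for Z in (0.5, 1.0, 2.0):
    print(f"Z = {Z:.1f} m: scale at 10 deg = {scale_factor(10, Z):.3f}, "
          f"at 30 deg = {scale_factor(30, Z):.3f}")
```

The printed scale factors differ between depths and viewing angles, so any single set of interior orientation parameters can only compensate the refraction on average, leaving distance-dependent residuals in object space.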


2020, Vol. 12 (14), pp. 2268
Author(s):
Tian Zhou, Seyyed Meghdad Hasheminasab, Radhika Ravi, Ayman Habib

Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.
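The second step of the strategy, identifying LiDAR points that correspond to the image-based sparse point cloud through iterative plane fitting, could look roughly like the sketch below. Function names, the neighbourhood radius and the thresholds are assumptions chosen for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def lidar_control_points(sfm_pts, lidar_pts, radius=0.5, n_iter=3, tol=0.05):
    """For each image-based sparse point, iteratively fit a plane to the
    neighbouring LiDAR points and return its foot point on that plane as a
    LiDAR control point (LCP). Radius and thresholds are illustrative values."""
    tree = cKDTree(lidar_pts)
    lcps = []
    for p in sfm_pts:
        nbrs = lidar_pts[tree.query_ball_point(p, radius)]
        if len(nbrs) < 10:
            continue                          # not enough LiDAR support
        for _ in range(n_iter):               # iterative plane fitting
            centroid = nbrs.mean(axis=0)
            normal = np.linalg.svd(nbrs - centroid)[2][-1]   # smallest-variance direction
            resid = np.abs((nbrs - centroid) @ normal)
            nbrs = nbrs[resid < tol]          # reject off-plane LiDAR points
            if len(nbrs) < 10:
                break
        else:
            # project the SfM point onto the final fitted plane
            lcps.append(p - ((p - centroid) @ normal) * normal)
    return np.array(lcps)
```

The resulting LCPs then act as the control information in the GNSS/INS-assisted bundle adjustment that refines the camera IOPs.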


Author(s):  
E. Maset, L. Magri, I. Toschi, A. Fusiello

Abstract. This paper deals with bundle adjustment with constrained cameras, i.e. where the orientation of certain cameras is expressed relative to others, and these relative orientations are part of the unknowns. Despite the remarkable interest in oblique multi-camera systems, an empirical study on the effect of enforcing relative orientation constraints in bundle adjustment is still missing. We provide experimental evidence that these constraints indeed improve the accuracy of the results, while also reducing the computational load. Moreover, we report for the first time in the literature the complete derivation of the Jacobian matrix for bundle adjustment with constrained cameras, to foster other implementations.
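The core idea of constrained cameras is that a secondary camera's exterior orientation is not an independent unknown but is composed from a reference camera's pose and a shared relative orientation. A minimal sketch of that composition (assuming a world-to-camera convention x_cam = R·X + t; this is not the paper's notation) is:

```python
import numpy as np

def compose_pose(R_ref, t_ref, R_rel, t_rel):
    """Exterior orientation of a constrained camera, composed from the
    reference camera's pose and a shared relative orientation
    (world-to-camera convention: x_cam = R @ X_world + t)."""
    R_cam = R_rel @ R_ref
    t_cam = R_rel @ t_ref + t_rel
    return R_cam, t_cam
```

In the constrained bundle adjustment, the Jacobian of an observation in the secondary image with respect to the unknowns follows from this composition by the chain rule, which is the derivation the paper reports.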


2020, Vol. 12 (18), pp. 3002
Author(s):
Petra Helmholz, Derek D. Lichti

The number of researchers utilising imagery for the 3D reconstruction of underwater natural features (e.g., reefs) and man-made structures (e.g., shipwrecks) is increasing. Often, the same procedures and software solutions are used for processing the images as in air, without considering additional aberrations that can be caused by the change of the medium from air to water. For instance, several publications mention the presence of chromatic aberration (CA). The aim of this paper is to investigate CA effects in low-cost camera systems (several GoPro cameras) operated in an underwater environment. We found that underwater and in-air distortion profiles differed by a factor of more than 1000 in terms of maximum displacement and in terms of curvature. Moreover, significant CA effects were found in the underwater profiles that did not exist in air. Furthermore, the paper investigates the effect of adjustment constraints imposed on the underwater self-calibration and the reliability of the interior orientation parameters. The analysis of the precision shows that the in-air RMS values are due to random errors alone. In contrast, the underwater calibration RMS values are 3–6 times higher than the exterior orientation parameter (EOP) precision, so these values contain both random errors and the systematic effects from the CA. The accuracy assessment shows significant differences.
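Lateral chromatic aberration can be made visible by calibrating (or evaluating) the radial distortion separately per colour channel and comparing the displacement curves. The sketch below does this with a Brown-type radial polynomial and purely hypothetical per-channel coefficients; it only illustrates the kind of comparison, not the paper's actual GoPro results.

```python
import numpy as np

def radial_displacement(r, k1, k2):
    """Brown-type radial displacement dr = k1*r^3 + k2*r^5 (r in mm)."""
    return k1 * r**3 + k2 * r**5

# Hypothetical per-channel coefficients from three single-channel calibrations
channels = {
    "red":   (-5.0e-3, 2.0e-5),
    "green": (-5.1e-3, 2.1e-5),
    "blue":  (-5.3e-3, 2.2e-5),
}
r = np.linspace(0.0, 3.0, 61)                         # radial distance [mm]
dr = {name: radial_displacement(r, *k) for name, k in channels.items()}
# Lateral CA appears as the spread between the channel curves at the same radius
print(f"blue-red spread at r = 3 mm: {abs(dr['blue'][-1] - dr['red'][-1]) * 1000:.1f} um")
```

With these assumed coefficients the spread between channels at the image border amounts to a few micrometres, which for small-pixel sensors can correspond to a colour-dependent shift of several pixels.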


Author(s):  
H. Hastedt, T. Luhmann, H.-J. Przybilla, R. Rofallski

Abstract. For optical 3D measurements in close-range and UAV applications, the modelling of interior orientation is of particular importance in order to subsequently allow for high precision and accuracy in geometric 3D reconstruction. Nowadays, modern camera systems are often used for optical 3D measurements due to UAV payload constraints and economic reasons. They are constructed of aspheric and spherical lens combinations and include image pre-processing, such as low-pass filtering or internal distortion corrections, that may lead to effects in image space that are not covered by the standard interior orientation models. With a variety of structure-from-motion (SfM) data sets, four typical systematic patterns of residuals could be observed. These investigations focus on the evaluation of interior orientation modelling with respect to minimising the systematics remaining in image space after bundle adjustment. The influences are evaluated with respect to changes in interior and exterior orientation parameters and their correlations, as well as the impact in object space. With the variety of data sets, camera/lens/platform configurations and pre-processing influences, these investigations indicate a number of different behaviours. Specific advice on the use of extended interior orientation models, such as Fourier series, could be derived for a selection of the data sets. Significant reductions of image space systematics are achieved. Even though increased standard deviations and correlations of the interior orientation parameters are a consequence, improvements in object space precision and image space reliability could be achieved.
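Extended interior orientation models of the kind mentioned above add periodic correction terms to the classical Brown polynomial so that residual patterns which are not strictly polynomial in the radial distance can still be absorbed. The following is only a generic sketch of such a combination; the exact parameterisation evaluated in the paper may differ.

```python
import numpy as np

def radial_correction(r, poly, fourier, r_max):
    """Radial image correction composed of a Brown polynomial part and a
    low-order Fourier series part (generic sketch, assumed parameterisation).
    poly    : (a1, a2, a3) polynomial coefficients
    fourier : sequence of (c_k, s_k) Fourier coefficients, k = 1, 2, ...
    r_max   : maximum radial distance, used to normalise the period"""
    a1, a2, a3 = poly
    dr = a1 * r**3 + a2 * r**5 + a3 * r**7
    for k, (c_k, s_k) in enumerate(fourier, start=1):
        dr += (c_k * np.cos(2.0 * np.pi * k * r / r_max)
               + s_k * np.sin(2.0 * np.pi * k * r / r_max))
    return dr
```

The additional coefficients increase the correlations with the standard interior orientation parameters, which is the trade-off the abstract refers to.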


Author(s):  
F. He, A. Habib

In this paper, we present a novel linear approach for the initial recovery of the exterior orientation parameters (EOPs) of images. Similar to the conventional Structure from Motion (SfM) algorithm, the proposed approach is based on a two-step strategy. In the first step, the relative orientation of all possible image stereo-pairs is estimated. In the second step, a local coordinate frame is established, and an incremental image augmentation process is implemented to reference all the remaining images into that local coordinate frame. Since our approach is based on a linear solution for both the relative orientation estimation and the initial recovery of the image EOPs, it does not require any initial approximation for the optimization process. Another advantage of our approach is that it does not require any prior knowledge of the sequence of the image collection procedure; therefore, it can handle a set of randomly collected images in the absence of GNSS/INS information. In order to illustrate the feasibility of our approach, several experimental tests are conducted on real datasets captured in either a block or linear trajectory configuration. The results demonstrate that the initial image EOPs obtained are accurate and can serve as a good initialization for a subsequent bundle adjustment process.
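The first step, pairwise relative orientation from point correspondences, is commonly solved with an essential-matrix estimate. The sketch below uses OpenCV's generic tools for that step; it is only an illustration of the idea, since the paper develops its own linear formulation rather than this standard pipeline.

```python
import cv2

def pairwise_relative_orientation(pts1, pts2, K):
    """Relative orientation of one image stereo-pair from point correspondences
    (pixel coordinates, Nx2 float arrays) and a shared calibration matrix K.
    Generic essential-matrix sketch; the paper's own linear solution differs."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t          # rotation and translation direction (scale-free)
```

The incremental augmentation then transforms each new pair into the local frame defined by the first pair, which yields the initial EOPs handed over to the bundle adjustment.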


Author(s):  
B. Elnashef, S. Filin

Abstract. While accuracy, detail, and limited time on site make photogrammetry a valuable means for underwater mapping, the establishment of reference control networks in such settings is oftentimes difficult. In that respect, the use of the coplanarity constraint becomes a valuable solution, as it requires neither knowledge of object space coordinates nor setting up a reference control network. Nonetheless, imaging in such domains is subject to non-linear and depth-dependent distortions, which are caused by refractive media that alter the standard single-viewpoint geometry. Accordingly, the coplanarity relation as formulated for the in-air case does not hold in such environments, and methods that have been proposed thus far for geometrical modeling of its effect require knowledge of object-space quantities. In this paper we propose a geometrically driven approach which fulfills the coplanarity condition and thereby requires no knowledge of object space data. We also study a linear model for the establishment of this constraint. Clearly, a linear form requires neither first approximations nor an iterative convergence scheme. Such an approach may prove useful not only for object space reconstruction but also as a preparatory step for the application of bundle block adjustment and for outlier detection, all of which are key features of photogrammetric practice. Results show that no unique setup is needed for estimating the relative orientation parameters using the model and that high levels of accuracy can be achieved.
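For reference, the standard in-air coplanarity (epipolar) condition that the paper generalises to the refractive case states that the baseline and the two conjugate image rays must lie in one plane. A minimal sketch of that in-air residual (not the refractive formulation developed in the paper) is:

```python
import numpy as np

def coplanarity_residual(x1, x2, R, b):
    """In-air coplanarity condition for a stereo pair: the baseline b and the
    conjugate image rays x1 and R @ x2 must be coplanar, i.e.
    x1 . (b x (R @ x2)) = 0. The refractive formulation in the paper replaces
    the straight rays with their refracted counterparts."""
    return float(np.dot(x1, np.cross(b, R @ x2)))
```

Because refraction bends the rays as a function of depth, this straight-ray residual no longer vanishes underwater, which is exactly why the constraint has to be reformulated.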


Author(s):  
A. Hanel, L. Hoegner, U. Stilla

Stereo camera systems in cars are often used to estimate the distance of other road users from the car. This information is important to improve road safety. Such camera systems are typically mounted behind the windshield of the car. In this contribution, the influence of the windshield on the estimated distance values is analyzed. An offline stereo camera calibration is performed with a moving planar calibration target. In a standard bundle adjustment procedure, the relative orientation of the cameras is estimated. The calibration is performed for the identical stereo camera system with and without a windshield in between. The base lengths are derived from the relative orientation in both cases and are compared. Distance values are calculated and analyzed. It can be shown that the difference between the base length values in the two cases is highly significant. Resulting effects on the distance calculation of up to half a meter occur.
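The sensitivity of the estimated distances to the base length follows directly from the stereo normal case, Z = c·B/p: a relative base-length change propagates one-to-one into a relative distance change. The numbers in the sketch below (principal distance, base length, a 1% base-length change) are assumed for illustration only and are not values from the paper.

```python
c = 1200.0      # principal distance [px] (assumed)
B = 0.30        # calibrated base length without windshield [m] (assumed)
dB = 0.003      # assumed 1 % base-length change caused by the windshield

# Normal-case stereo: Z = c * B / p, with p the x-parallax in pixels.
for Z in (10.0, 30.0, 50.0):
    p = c * B / Z                    # parallax of a point at distance Z
    Z_biased = c * (B + dB) / p      # distance computed with the wrong base length
    print(f"Z = {Z:4.0f} m -> distance error = {Z_biased - Z:.2f} m")
```

With these assumed values, a 1% base-length change already produces a 0.5 m error at a 50 m distance, which matches the order of magnitude reported above.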


Author(s):  
L. F. Castanheiro, A. M. G. Tommaselli, M. B. Campos, A. Berveglieri

Abstract. Fisheye cameras have been widely used in photogrammetric applications, but conventional techniques must be adapted to consider specific features of fisheye images, such as their nonuniform resolution. This work presents experimental results of adaptive weighting of the observations in a self-calibrating bundle adjustment to cope with the nonuniform resolution of fisheye images. GoPro Fusion and Ricoh Theta dual-fisheye systems were calibrated with a bundle adjustment based on the equisolid-angle projection model combined with the Conrady-Brown distortion model. The image observations were weighted as a function of radial distance, based on the combined loss of resolution and blurring in fisheye images. The results were compared with a similar trial in which the same standard deviation was assumed for all image observations. The use of adaptive weighting of image observations reduced the estimated standard deviation of unit weight by 30% and 50% with GoPro Fusion and Ricoh Theta images, respectively. The estimation of relative orientation parameters (ROPs) was also improved (∼50%) when using adaptive weighting for image observations.
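In a bundle adjustment, weighting an observation down is equivalent to assigning it a larger a priori standard deviation. A minimal sketch of a radius-dependent weight is given below; the quadratic growth of sigma with the radial distance is an assumed stand-in, not necessarily the resolution/blur function used in the paper.

```python
def observation_weight(r, r_max, sigma0=0.5):
    """Weight of a fisheye image observation as a function of its radial
    distance r, down-weighting measurements towards the image border where
    resolution drops and blur increases. The quadratic growth of sigma is an
    assumed error model, not necessarily the function used in the paper."""
    sigma = sigma0 * (1.0 + (r / r_max) ** 2)   # image measurement sigma [px]
    return 1.0 / sigma**2                        # weight = 1 / variance
```

Observations near the principal point then dominate the estimation, while peripheral points still contribute geometric strength without degrading the estimated variance factor.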


2021, Vol. 87 (5), pp. 375-384
Author(s):
Letícia Ferrari Castanheiro, Antonio Maria Garcia Tommaselli, Adilson Berveglieri, Mariana Batista Campos, José Marcato Junior

Omnidirectional systems composed of two hyperhemispherical lenses (dual-fish-eye systems) are gaining popularity, but only a few works have studied suitable models for hyperhemispherical lenses and dual-fish-eye calibration. In addition, the effects of using points in the hyperhemispherical field of view in photogrammetric procedures have not been addressed. This article presents a comparative analysis of the fish-eye models (equidistant, equisolid-angle, stereographic, and orthogonal) for hyperhemispherical-lens and dual-fish-eye calibration techniques. The effects of adding points beyond a 180° field of view in dual-fish-eye calibration using stability constraints on the relative orientation parameters are also assessed. The experiments were performed with the Ricoh Theta dual-fish-eye system, which is composed of fish-eye lenses with a field of view of approximately 190° each. The equisolid-angle model presented the best results in the simultaneous calibration experiments. An accuracy of approximately one pixel in object space units was achieved, showing the potential of the proposed approach for close-range applications.
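The four fish-eye models compared in the article differ only in how the incidence angle θ is mapped to a radial image distance. A compact summary of the classical mapping functions (standard textbook forms, independent of the article's specific calibration parameterisation) is:

```python
import numpy as np

def fisheye_radius(theta, c, model):
    """Radial image distance r as a function of the incidence angle theta [rad]
    for the four classical fish-eye projection models (c = principal distance)."""
    if model == "equidistant":
        return c * theta
    if model == "equisolid-angle":
        return 2.0 * c * np.sin(theta / 2.0)
    if model == "stereographic":
        return 2.0 * c * np.tan(theta / 2.0)
    if model == "orthogonal":
        return c * np.sin(theta)
    raise ValueError(f"unknown model: {model}")
```

Because r = 2c·sin(θ/2) remains monotonic up to θ = 180°, the equisolid-angle mapping can still represent rays beyond a 90° incidence angle, which matters when points beyond the 180° field of view are included, whereas the orthogonal mapping r = c·sin(θ) becomes ambiguous there.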

