DIRECT ESTIMATION OF THE RELATIVE ORIENTATION IN UNDERWATER ENVIRONMENT

Author(s):  
B. Elnashef ◽  
S. Filin

Abstract. While accuracy, detail, and limited time on site make photogrammetry a valuable means for underwater mapping, the establishment of reference control networks in such settings is oftentimes difficult. In that respect, the use of the coplanarity constraint becomes a valuable solution, as it requires neither knowledge of object-space coordinates nor setting up a reference control network. Nonetheless, imaging in such domains is subject to non-linear and depth-dependent distortions, which are caused by refractive media that alter the standard single-viewpoint geometry. Accordingly, the coplanarity relation, as formulated for the in-air case, does not hold in such environments, and the methods proposed thus far for geometrical modeling of its effect require knowledge of object-space quantities. In this paper we propose a geometrically driven approach that fulfills the coplanarity condition and thereby requires no knowledge of object-space data. We also study a linear model for establishing this constraint. Clearly, a linear form requires neither first approximations nor an iterative convergence scheme. Such an approach may prove useful not only for object-space reconstruction but also as a preparatory step for bundle block adjustment and for outlier detection, all of which are key features in photogrammetric practice. Results show that no unique setup is needed to estimate the relative orientation parameters with the model and that high levels of accuracy can be achieved.
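For context, the in-air coplanarity condition that this work generalizes already has a classical linear solution, the eight-point estimate of the essential matrix. The sketch below illustrates that baseline; the function name and the ray-array interface are illustrative, not the authors' code.

```python
import numpy as np

def essential_eight_point(x1, x2):
    """Linear estimate of the essential matrix E from >= 8 ray pairs.

    x1, x2: (n, 3) arrays of image rays in the two cameras. The
    coplanarity condition x2^T E x1 = 0 is stacked into A e = 0 and
    solved by SVD; E is then projected onto the essential manifold.
    """
    A = np.column_stack([x2[:, i] * x1[:, j]
                         for i in range(3) for j in range(3)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Enforce the two-equal-singular-values structure of an essential matrix
    U, s, Vt = np.linalg.svd(E)
    sigma = (s[0] + s[1]) / 2.0
    return U @ np.diag([sigma, sigma, 0.0]) @ Vt
```

Being linear, this baseline needs neither first approximations nor iterations, which is exactly the property the abstract seeks to retain underwater.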

2021 ◽  
Vol 87 (5) ◽  
pp. 375-384
Author(s):  
Letícia Ferrari Castanheiro ◽  
Antonio Maria Garcia Tommaselli ◽  
Adilson Berveglieri ◽  
Mariana Batista Campos ◽  
José Marcato Junior

Omnidirectional systems composed of two hyperhemispherical lenses (dual-fish-eye systems) are gaining popularity, but only a few works have studied suitable models for hyperhemispherical lenses and dual-fish-eye calibration. In addition, the effects of using points in the hyperhemispherical field of view in photogrammetric procedures have not been addressed. This article presents a comparative analysis of the fish-eye models (equidistant, equisolid-angle, stereographic, and orthogonal) for hyperhemispherical-lens and dual-fish-eye calibration techniques. The effects of adding points beyond a 180° field of view in dual-fish-eye calibration using stability constraints on the relative orientation parameters are also assessed. The experiments were performed with the Ricoh Theta dual-fish-eye system, which is composed of fish-eye lenses with a field of view of approximately 190° each. The equisolid-angle model presented the best results in the simultaneous calibration experiments. An accuracy of approximately one pixel in object-space units was achieved, showing the potential of the proposed approach for close-range applications.
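The four fish-eye models compared in the article are radially symmetric mappings from the incidence angle θ to an image radius r. A minimal sketch of their ideal (distortion-free) forms, with f the principal distance; real calibration adds distortion terms on top of these:

```python
import numpy as np

def fisheye_radius(theta, f, model):
    """Radial image distance r for incidence angle theta (radians)."""
    if model == "equidistant":
        return f * theta
    if model == "equisolid":            # equisolid-angle
        return 2.0 * f * np.sin(theta / 2.0)
    if model == "stereographic":
        return 2.0 * f * np.tan(theta / 2.0)
    if model == "orthogonal":           # orthographic; valid only up to 90 deg
        return f * np.sin(theta)
    raise ValueError(model)
```

Note that the orthogonal model cannot represent angles beyond 90° and the stereographic model diverges at 180°, which is relevant for the roughly 190° lenses used here.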


Author(s):  
O. Kahmen ◽  
R. Rofallski ◽  
N. Conen ◽  
T. Luhmann

Abstract. In multimedia photogrammetry, multi-camera systems often provide scale by a calibrated relative orientation. Camera calibration via bundle adjustment is a well-established standard procedure in single-medium photogrammetry. When using standard software and applying the collinearity equations in multimedia photogrammetry, the refractive interfaces are modelled in an implicit form. This contribution analyses different calibration strategies for bundle-invariant interfaces. To evaluate the effects of implicitly modelling the refractive effects within a bundle adjustment, synthetic datasets are simulated. Contrary to many publications, systematic effects on the exterior orientations can be verified with simulated data. The behaviour of interior, exterior and relative orientation parameters is analysed using error-free synthetic datasets. The relative orientation of a stereo camera shows systematic effects when the angle of convergence varies and when the synthetic interface is set up at different distances from the camera. It becomes clear that in most cases the implicit modelling is not suitable for multimedia photogrammetry. An explicit modelling of the refractive interfaces is implemented in a bundle adjustment. This strict model is analysed and compared with the implicit form regarding systematic effects in orientation parameters as well as errors in object space. In a real experiment, the discrepancies between the implicit form using standard software and the explicit modelling using our own implementation are quantified. It is highly advisable to model the interfaces strictly, since the implicit modelling might lead to relevant errors in object space.
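The explicit interface modelling recommended above ultimately amounts to bending each image ray at the refractive surface with Snell's law. A minimal vector-form sketch; the interface geometry and names are illustrative:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a planar interface with unit normal n.

    n points against the incoming ray (toward the first medium);
    n1, n2 are the refractive indices before/after the interface.
    Returns the refracted unit direction, or None at total reflection.
    """
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n
```

A strict multimedia model traces each bundle ray through every such interface (e.g. housing port and water), instead of absorbing the bending into the interior orientation parameters.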


Author(s):  
E. Maset ◽  
E. Rupnik ◽  
M. Pierrot-Deseilligny ◽  
F. Remondino ◽  
A. Fusiello

Abstract. The growing deployment of multi-head camera systems has encouraged the emergence of specific processing algorithms able to face the challenges posed by slanted view geometry. The cameras of such systems are rigidly mounted by their manufacturers, and this internal constraint should be exploited. Several approaches have been proposed to deal with orientation constraints, with the aim of reducing the number of unknowns and the computational time, and possibly improving accuracy. In this paper we compare the results provided by publicly available implementations in order to further investigate the advantages of enforcing relative orientation constraints for aerial and terrestrial triangulation of multi-head camera systems. Data from a Leica CityMapper and a Stereopolis-Ladybug are considered, reporting how the constrained solution can improve accuracy with respect to traditional (unconstrained) bundle block adjustment solutions.


Author(s):  
A. Al-Rawabdeh ◽  
H. Al-Gurrani ◽  
K. Al-Durgham ◽  
I. Detchev ◽  
F. He ◽  
...  

Landslides are among the major threats to urban landscapes and man-made infrastructure. They often cause economic losses, property damage, and loss of lives. Temporal monitoring data of landslides from different epochs enables the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. Traditional methods for point-cloud-based landslide monitoring rely on a variation of the Iterative Closest Point (ICP) registration procedure to align the reconstructed surfaces from different epochs to a common reference frame. However, ICP-based registration can sometimes fail or may not provide sufficient accuracy. For example, point clouds from different epochs might converge to a local minimum due to a lack of geometrical variability within the data. Also, manual interaction is required to exclude non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch, using only the images captured in that epoch. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada.
The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera, which facilitated capturing high-resolution geo-tagged images in two epochs over the period of one year (May 2014 and May 2015). Note that due to the coarse accuracy of the on-board GPS receiver (approximately ±5–10 m), the geo-tagged positions of the images were only used as initial values in the bundle block adjustment. Normal distances signifying detected changes, varying from 20 cm to 4 m, were identified between the two epochs. The accuracy of the co-registered surfaces was estimated by comparing non-active patches within the monitored area of interest. Since these non-active sub-areas are stationary, the computed normal distances should theoretically be close to zero. The quality control of the registration results showed that the average normal distance was approximately 4 cm, which is within the noise level of the reconstructed surfaces.
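The change metric used above, the normal distance from a point of one epoch to the locally fitted surface of the other, can be sketched as follows; the neighbour search is assumed to be done elsewhere, and the names are illustrative:

```python
import numpy as np

def normal_distance(p, neighbors):
    """Distance from point p (one epoch) to the plane least-squares
    fitted to its nearest neighbors in the other epoch's point cloud."""
    c = neighbors.mean(axis=0)
    # The plane normal is the right singular vector of the smallest
    # singular value of the centered neighborhood.
    _, _, Vt = np.linalg.svd(neighbors - c)
    n = Vt[-1]
    return abs(np.dot(p - c, n))
```

Averaging this distance over known non-active patches gives the roughly 4 cm quality-control figure reported above.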


Author(s):  
S. Rhee ◽  
T. Kim

3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of the UAV images. In this paper, we apply image matching to generate local point clouds over a pair or group of images, and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object-space-based matching technique and an image-space-based matching technique, and compared the performance of the two. The object-space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image-space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) to which image matching is applied, defining local match regions in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree from the tiepoints. From the experiments, we confirmed that 3D point clouds were generated successfully through image matching and global optimization. However, the results also revealed some limitations. In the image-space-based matching results, we observed some blanks in the 3D point clouds. In the object-space-based matching results, we observed more blunders than in the image-space-based ones, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing; we will further test our approach with more precise orientation parameters.
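The object-space-based matching step described above, scoring a list of candidate heights by the grey-level correlation of the corresponding image patches, can be sketched as follows; the projection into the images and the patch resampling are left abstract, and all names are illustrative:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized grey-level patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_height(heights, patch_pairs):
    """Pick the candidate height whose reprojected patches correlate best.

    patch_pairs: for each candidate height, the pair of image patches
    obtained by projecting the (X, Y, Z) candidate into the two images.
    """
    scores = [ncc(a, b) for a, b in patch_pairs]
    return heights[int(np.argmax(scores))]
```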


2020 ◽  
Vol 12 (14) ◽  
pp. 2268
Author(s):  
Tian Zhou ◽  
Seyyed Meghdad Hasheminasab ◽  
Radhika Ravi ◽  
Ayman Habib

Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). 
Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.
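The iterative plane-fitting step used to derive LiDAR control points can be sketched as a fit-and-trim loop; the threshold and iteration cap below are illustrative assumptions, not the authors' values:

```python
import numpy as np

def fit_plane_iterative(points, threshold=0.05, max_iter=10):
    """Fit a plane to a LiDAR point neighbourhood, iteratively discarding
    points whose orthogonal distance exceeds the threshold.

    Returns (centroid, unit normal, inlier mask).
    """
    mask = np.ones(len(points), dtype=bool)
    for _ in range(max_iter):
        c = points[mask].mean(axis=0)
        # Plane normal = right singular vector of the smallest singular value
        _, _, Vt = np.linalg.svd(points[mask] - c)
        n = Vt[-1]
        new_mask = np.abs((points - c) @ n) < threshold
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return c, n, mask
```

Sparse image-based points whose neighbourhood survives such a fit can then serve as LiDAR control points in the bundle adjustment.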


Author(s):  
J. F. C. Silva ◽  
M. C. Lemes Neto ◽  
V. Blasechi

The photogrammetric bridging or traverse is a special bundle block adjustment (BBA) for connecting a sequence of stereo-pairs and determining the exterior orientation parameters (EOPs). An object point must be imaged in more than one stereo-pair. In each stereo-pair, the distance ratio between an object point and its corresponding image point varies significantly. We propose to automate photogrammetric bridging based on a fully automatic extraction of homologous points in stereo-pairs and on an arbitrary Cartesian datum to which the EOPs and tie points are referred. The technique uses the SIFT algorithm, and keypoints are matched by the smallest distance between their similarity descriptors. All matched points are used as tie points. The technique was applied initially to two pairs; the block formed by four images was treated by BBA. The process continues to the end of the sequence and is semiautomatic, because each block is processed independently and the transition from one block to the next depends on the operator. Besides four-image blocks (two pairs), we experimented with other arrangements of six, eight, ten, and up to twenty images (respectively three, four, five, and up to ten bases). After the whole sequence of image pairs had been adjusted in each experiment, a simultaneous BBA was run so as to estimate the EOP set of each image. The results for classical ("normal case") pairs were analyzed with the standard statistics regularly applied to phototriangulation, and the figures validate the process.
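The keypoint-matching rule described above, pairing descriptors by smallest distance, can be sketched as follows; the ratio test against the second-nearest neighbour is a common safeguard added here as an assumption, not part of the stated method:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match each descriptor in desc1 to its nearest neighbour in desc2.

    A match is kept only when the nearest distance is clearly smaller
    than the second-nearest (Lowe-style ratio test). Returns (i, j) pairs.
    """
    matches = []
    for i, d in enumerate(desc1):
        dist = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:
            matches.append((i, int(j)))
    return matches
```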


Author(s):  
E. Maset ◽  
L. Magri ◽  
I. Toschi ◽  
A. Fusiello

Abstract. This paper deals with bundle adjustment with constrained cameras, i.e. where the orientation of certain cameras is expressed relative to others, and these relative orientations are part of the unknowns. Despite the remarkable interest in oblique multi-camera systems, an empirical study on the effect of enforcing relative orientation constraints in bundle adjustment is still missing. We provide experimental evidence that these constraints indeed improve the accuracy of the results while also reducing the computational load. Moreover, we report for the first time in the literature the complete derivation of the Jacobian matrix for bundle adjustment with constrained cameras, to foster other implementations.
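The constraint analysed here can be sketched as composing each slave camera's pose from the platform (master) pose and a shared, unknown relative pose, so the per-station slave poses drop out of the unknowns. A minimal sketch with world-to-camera conventions; names and conventions are illustrative:

```python
import numpy as np

def compose_pose(R_master, t_master, R_rel, t_rel):
    """World-to-camera pose of a slave camera rigidly mounted on a platform.

    Since x_cam = R_rel @ (R_master @ x_world + t_master) + t_rel,
    the composed pose is (R_rel @ R_master, R_rel @ t_master + t_rel).
    Only the master poses and one relative pose per slave camera remain
    as unknowns in the constrained bundle adjustment.
    """
    return R_rel @ R_master, R_rel @ t_master + t_rel
```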



