ARCHIVE AND WARTIME AERIAL PHOTOGRAPHS AND PROCEDURES OF THEIR TREATMENT

Author(s):  
V. Šafář ◽  
H. Staňková ◽  
J. Pospíšil ◽  
D. Kaňa

Abstract. The article deals with the use of archive aerial photographs. The first task was to search for and identify drainage details in archive aerial photographs. The second task was to create procedures for processing aerial reconnaissance images (from WWII) to identify sites with a potential pyrotechnic load. Both tasks are connected by the effort to determine the interior orientation parameters of the cameras used, thereby easing the calculation of the exterior orientation parameters by image correlation. A fully automated process of fiducial mark (FM) identification was implemented: the coordinates of all FMs are calculated automatically from the archive aerial photographs, the edges of the photographs are found automatically, and a program was created to minimize cropping of the archive aerial photographs. The next part of the paper describes the procedures for averaging the relative positions of the FMs and for transforming archive aerial photographs taken with the same camera to a uniform dimension. The second part of the paper describes the process of creating a historical orthophoto, with the standard bundle adjustment calculation performed by an external process in the background of the OrthoEngine module using the Celery library installed as a Python service. The exterior orientation parameters found through the bundle adjustment are first defined in a local system and then transformed into the national geodetic system of the Czech Republic. This entire output is available free of charge on the internet. The third part of the article describes the practical procedure for interpreting archive and wartime photographs with the aim of identifying drainage details, and the procedures leading to the interpretation, identification, location and calculation of the position of unexploded air ammunition.
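The averaging of FM positions and the transformation of each photograph to a uniform dimension can be sketched as follows; this is a minimal illustration using a Procrustes-style 2D similarity fit, not the authors' implementation, and the coordinate values are invented:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate a 2D similarity (scale, rotation, translation) mapping
    src fiducial-mark coordinates onto dst via a Procrustes fit."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s, d = src - mu_s, dst - mu_d
    # SVD of the cross-covariance gives the optimal rotation
    U, S, Vt = np.linalg.svd(d.T @ s)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    scale = S.sum() / (s ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Mean FM positions over a camera's image set act as the uniform reference frame
fm_per_image = np.array([
    [[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]],
    [[0.2, -0.1], [100.1, 0.1], [99.9, 100.2], [-0.1, 99.8]],
])
reference = fm_per_image.mean(axis=0)
scale, R, t = similarity_transform(fm_per_image[1], reference)
mapped = (scale * (R @ fm_per_image[1].T)).T + t
```

Applying the fitted transform to every photograph of the set brings all FMs onto the averaged reference positions, i.e. to a uniform dimension.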

Author(s):  
M. Reich ◽  
C. Heipke

In this paper we propose a novel workflow for the estimation of global image orientations given relative orientations between pairs of overlapping images. Our approach is convex and independent of initial values. First, global rotations are estimated in a relaxed semidefinite program (SDP) and refined in an iterative least squares adjustment in the tangent space of SO(3). A critical aspect is the handling of outliers in the relative orientations. We present a novel heuristic graph-based approach for filtering the relative rotations that outperforms state-of-the-art robust rotation averaging algorithms. In a second part we make use of point observations tracked over a set of overlapping images and formulate a linear homogeneous system of equations to transfer the scale information between triplets of images, using estimated global rotations and relative translation directions. The final step consists of refining the orientation parameters in a robust bundle adjustment. The proposed approach handles outliers in the homologous points and relative orientations in every step of the processing chain. We demonstrate the robustness of the procedure on synthetic data. Moreover, the performance of our approach is illustrated on real-world benchmark data.
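The idea of least-squares refinement in the tangent space of SO(3) can be illustrated with a minimal single-rotation averaging sketch (a Karcher mean); this is only the tangent-space averaging building block, not the authors' SDP pipeline:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def exp_so3(w):
    """Rodrigues' formula: map a tangent vector to a rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def log_so3(R):
    """Inverse of exp_so3: map a rotation matrix to its tangent vector."""
    cos_t = np.clip((np.trace(R) - 1) / 2, -1, 1)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2 * np.sin(theta)) * w

def rotation_mean(rotations, iters=20):
    """Karcher mean via repeated least-squares averaging in the tangent space."""
    R = rotations[0]
    for _ in range(iters):
        delta = np.mean([log_so3(R.T @ Ri) for Ri in rotations], axis=0)
        R = R @ exp_so3(delta)
        if np.linalg.norm(delta) < 1e-10:
            break
    return R
```

In the paper's multi-image setting the same linearisation is applied jointly to all global rotations, with the relative rotations acting as the observations.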



Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1091
Author(s):  
Izaak Van Crombrugge ◽  
Rudi Penne ◽  
Steve Vanlanduit

Knowledge of precise camera poses is vital for multi-camera setups. Camera intrinsics can be obtained for each camera separately in lab conditions. For fixed multi-camera setups, the extrinsic calibration can only be done in situ. Usually, some markers are used, like checkerboards, requiring some level of overlap between cameras. In this work, we propose a method for cases with little or no overlap. Laser lines are projected on a plane (e.g., floor or wall) using a laser line projector. The pose of the plane and cameras is then optimized using bundle adjustment to match the lines seen by the cameras. To find the extrinsic calibration, only a partial overlap between the laser lines and the field of view of the cameras is needed. Real-world experiments were conducted both with and without overlapping fields of view, resulting in rotation errors below 0.5°. We show that the accuracy is comparable to other state-of-the-art methods while offering a more practical procedure. The method can also be used in large-scale applications and can be fully automated.
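The core residual in such a line-based adjustment is the perpendicular distance between observed image points and the fitted laser line. A minimal sketch (an illustration under invented data, not the authors' code):

```python
import numpy as np

def fit_line(points):
    """Total-least-squares 2D line fit: returns a unit normal n and offset d
    so that the line satisfies n . x = d."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]               # direction of least variance = line normal
    return n, n @ c

def line_residuals(points, n, d):
    """Signed perpendicular distances from the points to the line; this is
    the kind of residual a line-based bundle adjustment would minimize."""
    return points @ n - d

# Points on the line y = 2x + 1 with one small perturbation
x = np.linspace(0.0, 4.0, 9)
pts = np.column_stack([x, 2.0 * x + 1.0])
pts[4] += np.array([0.0, 0.05])
n, d = fit_line(pts)
res = line_residuals(pts, n, d)
```

In the full method these residuals are evaluated per camera and minimized over the plane pose and camera extrinsics simultaneously.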


Author(s):  
J. Unger ◽  
F. Rottensteiner ◽  
C. Heipke

A hybrid bundle adjustment is presented that allows for the integration of a generalised building model into the pose estimation of image sequences. These images are captured by an Unmanned Aerial System (UAS) equipped with a camera flying in between the buildings. The relation between the building model and the images is described by distances between the object coordinates of the tie points and building model planes. Relations are found by a simple 3D distance criterion and are modelled as fictitious observations in a Gauss-Markov adjustment. The coordinates of the model vertices are part of the adjustment as directly observed unknowns, which allows for changes in the model. Results of first experiments using a synthetic and a real image sequence demonstrate improvements in image orientation compared to an adjustment without the building model, but also reveal limitations of the current state of the method.
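The 3D distance criterion relating tie points to model planes can be sketched as follows; this is a minimal illustration, and the threshold value, plane geometry and point coordinates are invented, not taken from the paper:

```python
import numpy as np

def point_plane_distance(X, n, d):
    """Signed distance of 3D point X to the plane n . x = d (n unit length)."""
    return n @ X - d

def assign_tie_points(points, planes, threshold=0.5):
    """Simple 3D distance criterion: link each tie point to its nearest
    model plane if it lies within `threshold`; each link would become a
    fictitious observation in the adjustment."""
    links = []
    for i, X in enumerate(points):
        dists = [abs(point_plane_distance(X, n, d)) for n, d in planes]
        j = int(np.argmin(dists))
        if dists[j] < threshold:
            links.append((i, j, dists[j]))
    return links

# Hypothetical building model: a ground plane and one facade plane
planes = [(np.array([0.0, 0.0, 1.0]), 0.0),   # z = 0
          (np.array([1.0, 0.0, 0.0]), 5.0)]   # x = 5
points = np.array([[1.0, 1.0, 0.1],      # near the ground plane
                   [5.2, 0.0, 3.0],      # near the facade plane
                   [10.0, 10.0, 10.0]])  # far from both, left unlinked
links = assign_tie_points(points, planes)
```

Each resulting link contributes a fictitious observation that pulls the tie point toward its plane during the Gauss-Markov adjustment.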


Author(s):  
A. Berveglieri ◽  
A. M. G. Tommaselli ◽  
E. Honkavaara

A hyperspectral camera operating in sequential acquisition mode produces spectral bands that are not recorded at the same instant and thus have different exterior orientation parameters (EOPs) for each band. The study presents experiments on bundle adjustment with time-dependent polynomial models for band orientation of sequentially collected hyperspectral cubes. The technique was applied to a Rikola camera model. The purpose was to investigate the behaviour of the estimated polynomial parameters and the feasibility of using a minimum number of bands to estimate the EOPs. Simulated and real data were produced for the analysis of the parameters and of the accuracy in ground points. The tests considered both conventional bundle adjustment and the polynomial models. The results showed that the two techniques were comparable, indicating that the time-dependent polynomial model can be used to estimate the EOPs of all spectral bands without requiring a bundle adjustment for each band. The accuracy of the block adjustment was analysed based on the discrepancies obtained at checkpoints. The root mean square error (RMSE) indicated an accuracy of 1 GSD in planimetry and 1.5 GSD in altimetry when using a minimum of four bands per cube.
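A time-dependent polynomial model for band orientation can be sketched as below; the linear (first-order) form and all coefficient values are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

def polynomial_eop(coeffs, t):
    """Evaluate time-dependent exterior orientation parameters: each of the
    six EOPs (X0, Y0, Z0, omega, phi, kappa) is a polynomial in time t
    (np.polyval expects highest-order coefficients first)."""
    return np.array([np.polyval(c, t) for c in coeffs])

# Hypothetical linear-drift model for the bands of one hyperspectral cube
coeffs = [
    [0.5, 100.0],   # X0: 100 m plus 0.5 m/s forward drift
    [0.0, 200.0],   # Y0: constant
    [-0.1, 50.0],   # Z0: slow descent
    [0.0, 0.01],    # omega (rad): constant
    [0.001, 0.0],   # phi (rad): slow drift
    [0.0, 1.5],     # kappa (rad): constant
]
eops_band0 = polynomial_eop(coeffs, 0.0)   # first band of the cube
eops_band3 = polynomial_eop(coeffs, 0.3)   # a band recorded 0.3 s later
```

Estimating the few polynomial coefficients from a handful of bands then yields EOPs for every band of the cube by evaluation at its recording time.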


Author(s):  
Z. Kurczynski ◽  
K. Bakuła ◽  
M. Karabin ◽  
M. Kowalczyk ◽  
J. S. Markiewicz ◽  
...  

Updating the cadastre requires much work carried out by surveying companies in countries that have still not solved the problem of updating the cadastral data. In terms of the required precision, these works are among the most accurate. This raises the question: to what extent may modern digital photogrammetric methods be useful in this process? The capabilities of photogrammetry have increased significantly after the introduction of digital aerial cameras and digital technologies. For the registration of cadastral objects, i.e., land parcels’ boundaries and the outlines of buildings, very high-resolution aerial photographs can be used. The paper describes an attempt to use an alternative source of data for this task: images acquired from UAS platforms. Multivariate mapping of cadastral parcels was implemented to determine the scope of the suitability of low-altitude photos for the cadastre. In this study, images obtained from a UAS with a GSD of 3 cm were collected for an area of a few square kilometres. Bundle adjustment of these data was processed with sub-pixel accuracy. This allowed photogrammetric measurements to be carried out and an orthophotomap to be provided (orthogonalized with a digital surface model from dense image matching of the UAS images). Geometric data related to buildings were collected with two methods: stereoscopic and multi-photo measurements. Data related to parcels’ boundaries were measured with monoplotting on an orthophotomap from low-altitude images. Field surveying data were used as a reference. The paper shows the potential and limits of the use of UAS in a process of updating cadastral data. It also gives recommendations for performing photogrammetric missions and presents the possible accuracy of this type of work.


Author(s):  
M. Gerke ◽  
F. Nex ◽  
F. Remondino ◽  
K. Jacobsen ◽  
J. Kremer ◽  
...  

During the last decade the use of airborne multi-camera systems increased significantly. The development in digital camera technology allows mounting several mid- or small-format cameras efficiently onto one platform and thus enables image capture under different angles. Those oblique images turn out to be interesting for a number of applications since lateral parts of elevated objects, like buildings or trees, are visible. However, occlusion or illumination differences might challenge image processing. From an image orientation point of view those multi-camera systems bring the advantage of a better ray intersection geometry compared to nadir-only image blocks. On the other hand, varying scale, occlusion and atmospheric influences which are difficult to model impose problems on the image matching and bundle adjustment tasks. In order to understand current limitations of image orientation approaches and the influence of different parameters such as image overlap or GCP distribution, a commonly available dataset was released. The originally captured data comprises a state-of-the-art image block with very high overlap, but in the first stage of the so-called ISPRS/EuroSDR benchmark on multi-platform photogrammetry only a reduced set of images was released. In this paper some first results obtained with this dataset are presented. They refer to different aspects like tie point matching across the viewing directions, the influence of the oblique images on the bundle adjustment, and the role of image overlap and GCP distribution. As far as the tie point matching is concerned, we observed that matching of overlapping images pointing to the same cardinal direction, or between nadir and oblique views in general, is quite successful. Due to the very different perspectives between images of different viewing directions, however, standard tie point matching, for instance based on interest points, does not work well.
How to address occlusion and ambiguities due to different views onto objects clearly remains an unsolved research problem. In our experiments we also confirm that the obtainable height accuracy is better when all images are used in the bundle block adjustment, in line with earlier research. Not surprisingly, the large overlap of 80/80% provides much better object space accuracy: random errors seem to be about two to three times smaller compared to the 60/60% overlap. A comparison of different software approaches shows that newly emerged commercial packages, initially intended to work with small-frame image blocks, perform very well.


2020 ◽  
Vol 12 (14) ◽  
pp. 2268
Author(s):  
Tian Zhou ◽  
Seyyed Meghdad Hasheminasab ◽  
Radhika Ravi ◽  
Ayman Habib

Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). 
Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.
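The iterative plane-fitting identification of LiDAR control points (LCPs) can be sketched as follows; this is a minimal illustration with invented tolerances and synthetic data, not the authors' implementation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: unit normal n and offset d
    with the plane n . x = d."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]               # direction of least variance = plane normal
    return n, n @ c

def lidar_control_points(lidar, seed, radius=1.0, dist_tol=0.1, iters=5):
    """Sketch of iterative plane fitting around an image-based sparse
    point `seed`: fit a plane to nearby LiDAR points, then keep only the
    points consistent with it, and repeat until the selection is stable."""
    near = np.linalg.norm(lidar - seed, axis=1) < radius
    pts = lidar[near]
    for _ in range(iters):
        n, d = fit_plane(pts)
        keep = near & (np.abs(lidar @ n - d) < dist_tol)
        new_pts = lidar[keep]
        if len(new_pts) == len(pts):
            break
        pts = new_pts
    return pts, (n, d)

# Synthetic patch: a 3x3 ground grid at z = 0 plus one off-plane outlier
xy = np.array([(x, y) for x in (-0.5, 0.0, 0.5) for y in (-0.5, 0.0, 0.5)])
lidar = np.column_stack([xy, np.zeros(len(xy))])
lidar = np.vstack([lidar, [0.2, 0.2, 0.5]])   # outlier above the plane
lcps, (n, d) = lidar_control_points(lidar, seed=np.zeros(3))
```

The surviving points act as the LCPs that replace GCPs in the subsequent GNSS/INS-assisted bundle adjustment.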


Author(s):  
D. D. Lichti ◽  
D. Jarron ◽  
M. Shahbazi ◽  
P. Helmholz ◽  
R. Radovanovic

Abstract. Chromatic aberration in colour digital camera imagery can affect the accuracy of photogrammetric reconstruction. Both longitudinal and transverse chromatic aberrations can be effectively modelled by making separate measurements in each of the blue, green and red colour bands and performing a specialized self-calibrating bundle adjustment. This paper presents the results of an investigation with two aims. The first aim is to quantify the presence of chromatic aberration in two sets of cameras: the six individual cameras comprising a Ladybug5 system, calibrated simultaneously in air; and four GoPro Hero 5 cameras calibrated independently under water. The second aim is to investigate the impacts of imposing different constraints in the self-calibration adjustment. To this end, four different adjustment cases were performed for all ten cameras: independent adjustment of the observations from each colour band; combined adjustment of all colour bands’ observations with common object points; combined adjustment of all colour bands with common object points and common exterior orientation parameters for each colour band triplet; and combined adjustment with common object points and certain common interior orientation parameters. The results show that the Ladybug5 cameras exhibit a small (1-2 pixel) amount of transverse chromatic aberration but no longitudinal chromatic aberration. The GoPro Hero 5 cameras exhibit significant (25 pixel) transverse chromatic aberration as well as longitudinal chromatic aberration. The principal distance was essentially independent of the adjustment case for the Ladybug5, but it was not for the GoPro Hero 5. The principal point position and precision were both affected considerably by adjustment case. Radial lens distortion was invariant to the adjustment case. The impact of adjustment case on decentring distortion was minimal in both cases.
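Transverse chromatic aberration of the kind quantified here can be visualised as the difference between per-band radial distortion profiles. The odd-order polynomial form is standard in self-calibration, but the coefficient values below are invented for illustration and are not the paper's estimates:

```python
import numpy as np

def radial_distortion(r, k1, k2):
    """Odd-order radial distortion profile dr(r) = k1 r^3 + k2 r^5."""
    return k1 * r**3 + k2 * r**5

# Hypothetical per-band coefficients from three single-band self-calibrations
bands = {"blue": (-2.0e-4, 1.0e-7),
         "green": (-2.1e-4, 1.1e-7),
         "red": (-2.2e-4, 1.2e-7)}
r = np.linspace(0.0, 10.0, 50)   # radial distance in the image plane, mm
# Transverse CA at each radius is the red-minus-blue profile difference
ca = radial_distortion(r, *bands["red"]) - radial_distortion(r, *bands["blue"])
```

A growing |ca(r)| toward the image corners is the signature of transverse chromatic aberration, while a band-dependent principal distance would indicate the longitudinal component.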


Author(s):  
Z. Xiong ◽  
D. Stanley ◽  
Y. Xin

Approximate values of the exterior orientation parameters are needed for air photo bundle adjustment. Usually an airborne GPS/IMU can provide initial values for the camera position and attitude angles. However, in some cases the camera's attitude angles are not available, due to the lack of an IMU or for other reasons. In this case, the kappa angle needs to be estimated for each photo before bundle adjustment. The kappa angle can be obtained from Ground Control Points (GCPs) in the photo; unfortunately, enough GCPs are not always available. To overcome this problem, an algorithm was developed to automatically estimate the kappa angle for air photos based on the phase-only correlation technique. This function has been embedded in the PCI software. Extensive experiments show that the algorithm is fast, reliable, and stable.
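The translational building block of phase-only correlation can be sketched as below; the paper's kappa estimation additionally recovers rotation, which is commonly done by applying the same correlation to log-polar resampled spectra, but the data and parameters here are illustrative only:

```python
import numpy as np

def phase_correlation(a, b):
    """Phase-only correlation: returns the integer shift s such that
    b is approximately np.roll(a, s, axis=(0, 1)), found as the peak of
    the inverse FFT of the normalized cross-power spectrum."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12          # discard magnitude, keep phase only
    corr = np.real(np.fft.ifft2(R))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks beyond half the image size into negative offsets
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -7), axis=(0, 1))
shift = phase_correlation(img, shifted)
```

Because only the phase is kept, the correlation peak stays sharp under illumination differences, which is what makes the technique attractive for matching overlapping air photos.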

