GLOBAL BUNDLE ADJUSTMENT WITH VARIABLE ORIENTATION POINT DISTANCE FOR PRECISE MARS EXPRESS ORBIT RECONSTRUCTION

Author(s):  
J. Bostelmann ◽  
C. Heipke

The photogrammetric bundle adjustment of line scanner image data requires a precise description of the time-dependent image orientation. For this task, exterior orientation parameters at discrete points are used to model the position and viewing direction of the camera trajectory via polynomials. This paper investigates the influence of the distance between these orientation points on the quality of the trajectory modeling. A new method adapts the distance along the trajectory to the available image information. Compared to the constant distance used previously, a better reconstruction of the exterior orientation is possible, especially when image quality changes within a strip.

In our research we use image strips of the High Resolution Stereo Camera (HRSC), taken to map the Martian surface. Several experiments on the global image data set have been carried out to investigate how the bundle adjustment improves the image orientation if the new method is employed. For evaluation, the forward intersection errors of 3D points derived from HRSC images, as well as their remaining height differences to the MOLA DTM, are used.

In 13.5 % (515 of 3,828) of the image strips, taken during this ongoing mission over the last 12 years, high-frequency image distortions were found. Bundle adjustment with a constant orientation point distance was able to reconstruct the orbit in 239 (46.4 %) cases. A variable orientation point distance increased this number to 507 (98.6 %).
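
The abstract describes adapting the spacing of orientation points along the trajectory to the available image information. The following minimal sketch assumes that tie-point observation epochs serve as the measure of available information and places orientation points so that each interval contains roughly the same number of observations; the function name and the equal-count criterion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): place orientation points so that each
# interval along the strip contains roughly the same number of tie-point
# observations, instead of using a constant spacing in time.
import numpy as np

def variable_orientation_points(obs_times, n_points):
    """obs_times : 1-D array of tie-point observation epochs along the strip [s]
       n_points  : desired number of orientation points
       Returns the epochs of the orientation points. With a constant spacing one
       would simply use np.linspace(obs_times.min(), obs_times.max(), n_points)."""
    quantiles = np.linspace(0.0, 1.0, n_points)
    return np.quantile(obs_times, quantiles)

# Example: observations cluster in the well-textured first half of the strip,
# so the orientation points become denser there as well.
rng = np.random.default_rng(0)
obs = np.concatenate([rng.uniform(0, 50, 800), rng.uniform(50, 100, 200)])
print(variable_orientation_points(obs, 6))
```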



Author(s):  
A. Cefalu ◽  
N. Haala ◽  
D. Fritsch

Global image orientation techniques aim at estimating camera rotations and positions for a whole set of images simultaneously. One of the main arguments for these procedures is an improved robustness against drifting of camera stations in comparison to more classical sequential approaches. Usually, the process consists of the computation of absolute rotations and, in a second step, absolute positions for the cameras. Either the first or both steps rely on the network of transformations arising from relative orientations between cameras. Therefore, the quality of the obtained absolute results is influenced by tensions in the network. These may be induced, e.g., by insufficient knowledge of the intrinsic camera parameters. Another reason can be found in local weaknesses of image connectivity. We apply a hierarchical approach with intermediate bundle adjustment to reduce these effects. We adopt efficient global techniques which register image triplets based on fixed absolute camera rotations and scaled relative camera translations but do not involve scene structure elements in the fusion step. Our variant employs submodels of arbitrary size, orientation and scale, by computing relative rotations and scales between submodels and, subsequently, absolute rotations and scales for them, and is applied hierarchically. Furthermore, we substitute classical bundle adjustment by a structureless approach based on epipolar geometry, augmented with a scale consistency constraint.
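
As a point of reference for the rotation-registration step mentioned above, the sketch below chains pairwise relative rotations into absolute rotations along a spanning tree of the view graph. It is a generic baseline, not the authors' hierarchical submodel fusion, and the composition convention R_j = R_ij · R_i is an assumption.

```python
# Generic baseline, not the authors' hierarchical method: chain pairwise
# relative rotations R_ij (assumed convention R_j = R_ij * R_i) into absolute
# rotations along a BFS spanning tree of the view graph.
from collections import defaultdict, deque
from scipy.spatial.transform import Rotation as R

def chain_absolute_rotations(rel_rot, root=0):
    """rel_rot: dict mapping view pairs (i, j) to scipy Rotation objects R_ij."""
    neighbours = defaultdict(list)
    for i, j in rel_rot:
        neighbours[i].append(j)
        neighbours[j].append(i)
    abs_rot, queue = {root: R.identity()}, deque([root])
    while queue:
        i = queue.popleft()
        for j in neighbours[i]:
            if j in abs_rot:
                continue                      # already oriented via another edge
            R_ij = rel_rot[(i, j)] if (i, j) in rel_rot else rel_rot[(j, i)].inv()
            abs_rot[j] = R_ij * abs_rot[i]
            queue.append(j)
    return abs_rot
```

Chaining along a single tree ignores loop closures, which is exactly where the network tensions discussed in the abstract show up; averaging over all edges, or the hierarchical fusion with intermediate bundle adjustment described above, distributes those tensions instead.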


Author(s):  
J. Bostelmann ◽  
U. Breitkopf ◽  
C. Heipke

For a systematic mapping of the Martian surface, the Mars Express orbiter is equipped with a multi-line scanner: since the beginning of 2004, the High Resolution Stereo Camera (HRSC) has regularly acquired long image strips. By now more than 4,000 strips covering nearly the whole planet are available. Due to the nine channels, each with a different viewing direction and partly with different optical filters, each strip provides 3D and color information and allows the generation of digital terrain models (DTMs) and orthophotos.

To map larger regions, neighboring HRSC strips can be combined into DTM and orthophoto mosaics. The global mapping scheme Mars Chart 30 (MC-30) is used to define the extent of these mosaics. In order to avoid unreasonably large data volumes, each MC-30 tile is divided into two parts, each combining about 90 strips. To ensure a seamless fit of these strips, several radiometric and geometric corrections are applied in the photogrammetric process. A simultaneous bundle adjustment of all strips as a block is carried out to estimate their precise exterior orientation. Because size, position, resolution and image quality of the strips in these blocks are heterogeneous, the quality and distribution of the tie points also vary. In the absence of ground control points, heights of a global terrain model are used as reference information, and for this task a regular distribution of the tie points is preferable. In addition, their total number should be limited for computational reasons.

In this paper, we present an algorithm which optimizes the distribution of tie points under these constraints. A large number of tie points used as input is reduced without affecting the geometric stability of the block, by preserving connections between strips. This stability is achieved by using a regular grid in object space and discarding, for each grid cell, points which are redundant for the block adjustment. The set of tie points filtered by the algorithm shows a more homogeneous distribution and is considerably smaller. Used for the block adjustment, it yields results of equal quality with significantly shorter computation time. We present experiments with MC-30 half-tile blocks, which confirm that the approach leads to a stable and faster bundle adjustment. The described method is used for the systematic processing of HRSC data.
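
A minimal sketch of grid-based tie-point thinning along the lines described above is given below; the preference for high-multiplicity points and the "new strip pair per cell" criterion are simplifying assumptions, not the paper's exact rule.

```python
# Minimal sketch of grid-based tie-point thinning: per object-space grid cell,
# a tie point is kept only if it adds a strip-to-strip connection that is not
# yet covered in that cell. Simplified assumption, not the paper's exact rule.
import itertools as it
import numpy as np

def thin_tie_points(xy, strips, cell_size):
    """xy        : (N, 2) object-space coordinates of the tie points
       strips    : list of N sets, the HRSC strips observing each tie point
       cell_size : edge length of the regular object-space grid
       Returns the indices of the tie points to keep."""
    cells = np.floor(np.asarray(xy) / cell_size).astype(int)
    covered, keep = {}, []
    # visit high-multiplicity points first, so they are preferred within a cell
    for idx in sorted(range(len(strips)), key=lambda i: -len(strips[i])):
        cell = tuple(cells[idx])
        pairs = set(it.combinations(sorted(strips[idx]), 2))
        if not pairs <= covered.setdefault(cell, set()):
            covered[cell] |= pairs
            keep.append(idx)
    return sorted(keep)
```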


2020 ◽  
Author(s):  
Camillo Ressl ◽  
Wilfried Karel ◽  
Livia Piermattei ◽  
Gerhard Puercher ◽  
Markus Hollaus ◽  
...  

After World War II, aerial photography, i.e. vertical or oblique high-resolution aerial images, spread rapidly into civil research sectors such as landscape studies, geologic mapping, natural sciences, archaeology, and more. Applying photogrammetric techniques, two or more overlapping historical aerial images can be used to generate an orthophoto and a 3D point cloud, from which a digital elevation model can be derived for the respective epoch. Combining results from different epochs, morphological processes and elevation changes of the surface caused by anthropogenic and natural factors can be assessed. Despite the unequalled potential of such data, their use is not yet fully exploited. Indeed, there is a lack of clear processing workflows applying either traditional photogrammetric techniques or structure from motion (SfM) with camera self-calibration. On the one hand, many SfM and multi-view stereo software packages do not deal with scanned images. On the other hand, traditional photogrammetric approaches require information such as a camera calibration protocol with fiducial mark positions. Furthermore, the quality of the generated products is strongly affected by the quality of the scanned images, in terms of the conservation of the original film, scanner resolution, and acquisition parameters like image overlap and flying height.

To process a large dataset of historical images, an approach based on multi-epoch bundle adjustment has been suggested recently. The idea is to jointly orient the images of all epochs of a historical image dataset. This recent approach relies on the robustness of the scale-invariant feature transform (SIFT) algorithm to automatically detect common features between images of the time series located in stable areas. However, this approach cannot be applied to process digital images of alpine environments, which are characterized by continuous changes, including ones of small magnitude, that might be challenging to identify automatically in image space. In this respect, our method, implemented in OrientAL, a software developed by TU Wien, identifies stable areas in object space across the entire time series. After the joint orientation of the multi-epoch aerial images, dense image matching is performed independently for each epoch. We tested our method on an image block over the alpine catchment Kaunertal (Austria), captured at nine different epochs with a time span of fifty years. Our method considerably speeds up the orientation of the entire data set, since stable areas do not need to be masked manually in each image. Furthermore, we could improve the orientation of images from epochs with poor overlap. To estimate the improvements obtained with our method in terms of time and accuracy of the image orientation, we compare our results with photogrammetric and commercial SfM software and analyse the accuracy of tie points with respect to a reference Lidar point cloud. The work is part of the SEHAG project (project number I 4062) funded by the DFG and FWF.
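
Since stable areas are identified in object space across the whole time series, a minimal sketch of that idea could look as follows, assuming co-registered DEMs per epoch and a simple elevation-range threshold; neither the criterion nor the threshold is taken from the OrientAL implementation.

```python
# Minimal sketch (assumption, not the OrientAL implementation): mark object-space
# cells as "stable" when their elevation varies little across all epochs.
import numpy as np

def stable_area_mask(dem_stack, tolerance=0.5):
    """dem_stack : (n_epochs, rows, cols) array of co-registered DEMs [m]
       tolerance : maximum allowed elevation range over the time series [m]
       Returns a boolean mask of stable cells."""
    elev_range = np.nanmax(dem_stack, axis=0) - np.nanmin(dem_stack, axis=0)
    return elev_range < tolerance
```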


Author(s):  
F. Kurz ◽  
T. Krauß ◽  
H. Runge ◽  
D. Rosenbaum ◽  
P. d’Angelo

Abstract. Highly precise ground control points, which are globally available, can be derived from the SAR satellite TerraSAR-X. This opens up many new applications, for example precise aerial image orientation. In this paper, we propose a method for precise aerial image orientation using spaceborne geodetic Synthetic Aperture Radar Ground Control Points (SAR-GCPs). The precisely oriented aerial imagery can then be used, e.g., for mapping urban landmarks, which support the ego-positioning of autonomous cars. The method for precise image orientation was validated on two aerial image data sets. SAR-GCPs were measured in the images, and the image orientation was then improved by a bundle adjustment. Results based on check points show that the accuracy of the image orientation is better than 5 cm in the X and Y coordinates.
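
The reported accuracy figures come from an evaluation at check points; a generic sketch of that step (not the authors' toolchain) is shown below, with coordinates assumed to be given in a common metric mapping frame.

```python
# Generic check-point evaluation after bundle adjustment (not the authors'
# toolchain): per-axis RMSE of adjusted coordinates against reference values.
import numpy as np

def checkpoint_rmse(adjusted_xyz, reference_xyz):
    """Both arrays have shape (n_points, 3), in metres.
       Returns (RMSE_X, RMSE_Y, RMSE_Z)."""
    d = np.asarray(adjusted_xyz) - np.asarray(reference_xyz)
    return np.sqrt(np.mean(d ** 2, axis=0))
```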


Author(s):  
E. Mitishita ◽  
F. Costa ◽  
J. Centeno

Abstract. Imagery and Lidar datasets have frequently been used to extract geoinformation. Having the datasets in the same mapping or geodetic frame is a fundamental condition for this application. Nowadays, Direct Sensor Orientation (DSO) can be considered a mandatory technology in aerial photogrammetric surveys. Although DSO provides a high degree of automation due to GNSS/INS technologies, the accuracies of the results obtained from the imagery and Lidar surveys depend on the quality of a group of parameters that accurately models the conditions of the system at the moment the job is performed. This paper presents a study performed to improve the three-dimensional accuracy of the integration of aerial imagery and Lidar datasets, using the 3D photogrammetric intersection of single models (pairs of images) with Exterior Orientation Parameters (EOP) estimated from DSO. A Bundle Adjustment with additional parameters (BBA) of a small sub-block of images is used to refine the Interior Orientation Parameters (IOP) and EOP in the job condition. In the 3D photogrammetric intersection experiments using the proposed approach, the horizontal and vertical accuracies, estimated by the Root Mean Square Error (RMSE) of the 3D discrepancies from the Lidar checkpoints, improved by around 25% and 75%, respectively.
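
Once the EOP are fixed, the 3D photogrammetric intersection of a single model reduces to a least-squares intersection of two image rays; the sketch below shows this generic formulation (not the authors' implementation), assuming projection centres and unit ray directions expressed in the mapping frame.

```python
# Generic least-squares forward intersection of two image rays with fixed EOP
# (not the authors' implementation).
import numpy as np

def forward_intersection(centers, directions):
    """centers, directions : (2, 3) arrays with the projection centres and unit
       ray directions of the two images of a single model, in the mapping frame.
       Returns the 3D point minimizing the summed squared distances to both rays."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for c, d in zip(np.asarray(centers, float), np.asarray(directions, float)):
        P = np.eye(3) - np.outer(d, d) / d.dot(d)   # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)
```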


Author(s):  
V. Mousavi ◽  
M. Varshosaz ◽  
F. Remondino

Abstract. Image orientation is a fundamental task in photogrammetric applications; it is performed by extracting keypoints with hand-crafted or learning-based methods, generating tie points among the images and running a bundle adjustment procedure. Nowadays, due to the large number of extracted keypoints, tie point filtering approaches attempt to eliminate redundant tie points in order to increase accuracy and reduce processing time. This paper presents the results of an investigation into the impact of tie points on bundle adjustment results. Simulations and real data are processed in Australis and DBAT to evaluate different influencing factors, including tie point number, location accuracy, distribution and multiplicity. The results show that increasing the number of tie points improves the quality of the bundle adjustment results, provided that the tie points are well distributed in the images. Furthermore, bundle adjustment quality improves as the multiplicity of the tie points increases and their location uncertainty decreases. Based on the simulation results, some suggestions for accurate tie point filtering in typical UAV photogrammetric blocks are derived.
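
The factors examined above (number, multiplicity and image-space distribution of the tie points) can be summarised with simple statistics; the helper below is a hypothetical illustration, not part of Australis or DBAT.

```python
# Hypothetical helper (not part of Australis or DBAT): summary statistics of a
# tie-point set, i.e. the factors examined in the investigation.
import numpy as np

def tie_point_stats(images_per_point, image_xy, image_size, grid=(4, 4)):
    """images_per_point : list of image-id lists per tie point (multiplicity)
       image_xy         : (M, 2) image coordinates of all observations [px]
       image_size       : (width, height) of the images [px]
       Returns (number of points, mean multiplicity, share of image grid cells covered)."""
    multiplicity = np.array([len(ids) for ids in images_per_point])
    h, _, _ = np.histogram2d(image_xy[:, 0], image_xy[:, 1], bins=grid,
                             range=[(0, image_size[0]), (0, image_size[1])])
    coverage = np.count_nonzero(h) / h.size
    return len(images_per_point), multiplicity.mean(), coverage
```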


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Various whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, different researchers have addressed the problem through different image enhancement techniques. Most of the state-of-the-art approaches applied common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To overcome these problems, the authors proposed a deep learning based solution. They contributed a new whiteboard image data set and adopted two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrated superior performance over the conventional methods.


2020 ◽  
Vol 2020 (4) ◽  
pp. 25-32
Author(s):  
Viktor Zheltov ◽  
Viktor Chembaev

The article considers the calculation of the unified glare rating (UGR) based on the luminance spatial-angular distribution (LSAD). The local-estimation variant of the Monte Carlo method is proposed for modeling the LSAD. On the basis of the LSAD, it becomes possible to evaluate the quality of lighting by many criteria, including the generally accepted UGR. The UGR allows a preliminary assessment of the level of comfort for performing a visual task in a lighting system. A new method for the "pixel-by-pixel" calculation of the UGR based on the LSAD is proposed.
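
For reference, the CIE definition of the UGR can be evaluated pixel by pixel over a luminance map as in the sketch below; the per-pixel solid angles and Guth position indices are assumed to come from the LSAD model, and the function itself is only an illustration, not the article's method.

```python
# Illustrative pixel-by-pixel evaluation of the CIE UGR formula,
#   UGR = 8 * log10( (0.25 / L_b) * sum( L^2 * omega / p^2 ) ),
# over a luminance map; not the article's implementation.
import numpy as np

def ugr_from_luminance_map(L_background, L, omega, p, glare_mask):
    """L_background : background luminance [cd/m^2]
       L            : per-pixel luminance of the map [cd/m^2]
       omega        : per-pixel solid angle [sr]
       p            : per-pixel Guth position index
       glare_mask   : boolean mask of pixels belonging to glare sources"""
    m = glare_mask
    glare_sum = np.sum(L[m] ** 2 * omega[m] / p[m] ** 2)
    return 8.0 * np.log10(0.25 / L_background * glare_sum)
```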


2020 ◽  
Vol 33 (6) ◽  
pp. 838-844
Author(s):  
Jan-Helge Klingler ◽  
Ulrich Hubbe ◽  
Christoph Scholz ◽  
Florian Volz ◽  
Marc Hohenhaus ◽  
...  

OBJECTIVE
Intraoperative 3D imaging and navigation is increasingly used for minimally invasive spine surgery. A novel, noninvasive patient tracker that is adhered as a mask on the skin for 3D navigation necessitates a larger intraoperative 3D image set for appropriate referencing. This enlarged 3D image data set can be acquired by a state-of-the-art 3D C-arm device that is equipped with a large flat-panel detector. However, the presumably associated higher radiation exposure to the patient has essentially not yet been investigated and is therefore the objective of this study.

METHODS
Patients were retrospectively included if a thoracolumbar 3D scan was performed intraoperatively between 2016 and 2019 using a 3D C-arm with a large 30 × 30 cm flat-panel detector (3D scan volume 4096 cm³) or a 3D C-arm with a smaller 20 × 20 cm flat-panel detector (3D scan volume 2097 cm³), and the dose area product was available for the 3D scan. Additionally, the fluoroscopy time and the number of fluoroscopic images per 3D scan, as well as the BMI of the patients, were recorded.

RESULTS
The authors compared 62 intraoperative thoracolumbar 3D scans using the 3D C-arm with a large flat-panel detector and 12 3D scans using the 3D C-arm with a small flat-panel detector. Overall, the 3D C-arm with a large flat-panel detector required more fluoroscopic images per scan (mean 389.0 ± 8.4 vs 117.0 ± 4.6, p < 0.0001), leading to a significantly higher dose area product (mean 1028.6 ± 767.9 vs 457.1 ± 118.9 cGy × cm², p = 0.0044).

CONCLUSIONS
The novel, noninvasive patient tracker mask facilitates intraoperative 3D navigation while eliminating the need for an additional skin incision with detachment of the autochthonous muscles. However, the use of this patient tracker mask requires a larger intraoperative 3D image data set for accurate registration, resulting in a 2.25 times higher radiation exposure to the patient. The use of the patient tracker mask should thus be based on an individual decision, especially taking into consideration the radiation exposure and extent of instrumentation.

