Mutual noncollinear mapping of object space and visible-image space

2008 ◽  
Vol 75 (3) ◽  
pp. 166
Author(s):  
G. K. Potapova ◽  
M. A. Moskalenko

Author(s):  
S. Rhee ◽  
T. Kim

3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For many UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of the imagery. In this paper, we apply image matching to generate local point clouds over a pair or group of images, and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object-space-based technique and an image-space-based technique, and compared their performance. The object-space-based matching used here sets a list of candidate height values for a fixed horizontal position in object space. For each height, the corresponding image point is calculated and similarity is measured by grey-level correlation. The image-space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining local match regions in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree over the tiepoints. Experiments confirmed that 3D point clouds were generated successfully through image matching and global optimization. However, the results also revealed some limitations. The image-space-based matching results contained some blank regions in the 3D point clouds. The object-space-based matching results contained more blunders than the image-space-based ones, along with noisy local height variations. We suspect these may be due to inaccurate orientation parameters. The work in this paper is ongoing; we will further test our approach with more precise orientation parameters.
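The object-space-based matching described in this abstract can be sketched in a few lines: for a fixed horizontal position, each candidate height is projected into both images and scored by grey-level correlation, and the best-correlating height is kept. Everything below (the vertical-camera model, the procedural ground texture, and all orientation values) is a hypothetical stand-in for illustration, not the authors' implementation.

```python
import numpy as np

# Toy interior/exterior orientation for two vertical cameras
# (all values hypothetical stand-ins for real orientation parameters).
F, CX, CY, ZC = 100.0, 32.0, 32.0, 500.0
CAMS = [{"Xc": 0.0, "Yc": 0.0}, {"Xc": 150.0, "Yc": 0.0}]
Z_TRUE = 120.0  # flat ground surface at this height

def tex(X, Y):
    """Procedural ground texture standing in for real image grey levels."""
    return np.sin(0.37 * X) + np.cos(0.23 * Y) + 0.3 * np.sin(0.11 * X * Y)

def render(cam, size=64):
    """Render a toy vertical image of the flat ground surface."""
    v, u = np.mgrid[0:size, 0:size].astype(float)
    scale = (ZC - Z_TRUE) / F
    return tex(cam["Xc"] + (u - CX) * scale, cam["Yc"] + (v - CY) * scale)

def project(X, Y, Z, cam):
    """Project an object-space point into image space."""
    u = CX + F * (X - cam["Xc"]) / (ZC - Z)
    v = CY + F * (Y - cam["Yc"]) / (ZC - Z)
    return u, v

def sample(img, u, v):
    """Bilinear grey-level sampling at a sub-pixel image position."""
    u0, v0 = int(u), int(v)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * img[v0, u0] + du * (1 - dv) * img[v0, u0 + 1]
            + (1 - du) * dv * img[v0 + 1, u0] + du * dv * img[v0 + 1, u0 + 1])

def patch(img, u, v, h=2):
    """Extract a (2h+1)x(2h+1) patch centred at a sub-pixel position."""
    return np.array([[sample(img, u + j, v + i) for j in range(-h, h + 1)]
                     for i in range(-h, h + 1)])

def ncc(a, b):
    """Normalised grey-level cross-correlation of two patches."""
    a, b = a - a.mean(), b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else 0.0

def match_height(X, Y, imgs, z_candidates, h=2):
    """Object-space matching: test candidate heights at a fixed (X, Y) and
    keep the height whose projected image patches correlate best."""
    best_z, best_s = None, -np.inf
    for Z in z_candidates:
        pts = [project(X, Y, Z, cam) for cam in CAMS]
        if not all(h + 1 <= u < imgs[0].shape[1] - h - 1
                   and h + 1 <= v < imgs[0].shape[0] - h - 1 for u, v in pts):
            continue  # projection falls outside an image
        s = ncc(*[patch(img, u, v, h) for img, (u, v) in zip(imgs, pts)])
        if s > best_s:
            best_z, best_s = Z, s
    return best_z

imgs = [render(cam) for cam in CAMS]
z = match_height(75.0, 5.0, imgs, range(40, 201, 20))  # recovers Z_TRUE here
```

Repeating this over a grid of horizontal positions yields the local point cloud the abstract describes; real implementations additionally interpolate between candidate heights for sub-step accuracy.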



Author(s):  
H. Hastedt ◽  
T. Luhmann ◽  
H.-J. Przybilla ◽  
R. Rofallski

Abstract. For optical 3D measurements in close-range and UAV applications, the modelling of interior orientation is of major importance, as it subsequently allows for high precision and accuracy in geometric 3D reconstruction. Nowadays, modern camera systems are often used for optical 3D measurements due to UAV payload limits and economic considerations. They are constructed of aspheric and spherical lens combinations and include image pre-processing, such as low-pass filtering or internal distortion corrections, that may lead to effects in image space that are not covered by the standard interior orientation models. With a variety of structure-from-motion (SfM) data sets, four typical systematic patterns of residuals could be observed. These investigations focus on the evaluation of interior orientation modelling with respect to minimising the systematics present in image space after bundle adjustment. The influences are evaluated with respect to changes in the interior and exterior orientation parameters and their correlations, as well as the impact in object space. Across the variety of data sets, camera/lens/platform configurations, and pre-processing influences, these investigations reveal a number of different behaviours. Specific advice on the use of extended interior orientation models, such as Fourier series, could be derived for a selection of the data sets. Significant reductions of the image space systematics are achieved. Even though increased standard deviations and correlations for the interior orientation parameters are a consequence, improvements in object space precision and image space reliability could be achieved.
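As a minimal illustration of what an extended interior orientation model can look like, the sketch below adds a truncated Fourier-series radial term on top of the standard Brown distortion model. The normalisation and all coefficients are illustrative assumptions, not the parameterisation used in the paper.

```python
import numpy as np

def brown_correction(x, y, k1, k2, p1, p2):
    """Standard interior orientation distortion terms (Brown model):
    radial (k1, k2) plus decentring (p1, p2), in image coordinates."""
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 * r2
    dx = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    dy = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return dx, dy

def fourier_extension(x, y, a, b, r_max):
    """Extended-model term: a periodic radial correction expressed as a
    truncated Fourier series in the normalised radius.  The coefficient
    lists a, b are hypothetical additional interior orientation
    parameters (scalar image coordinates assumed)."""
    r = np.hypot(x, y)
    t = 2.0 * np.pi * r / r_max
    dr = sum(an * np.cos((n + 1) * t) + bn * np.sin((n + 1) * t)
             for n, (an, bn) in enumerate(zip(a, b)))
    scale = dr / r if r > 0 else 0.0
    return x * scale, y * scale
```

In a bundle adjustment, the Fourier coefficients would be estimated alongside the Brown parameters; as the abstract notes, the extra parameters can raise standard deviations and correlations even while reducing image-space systematics.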



2021 ◽  
Vol 13 (22) ◽  
pp. 4663
Author(s):  
Longhui Wang ◽  
Yan Zhang ◽  
Tao Wang ◽  
Yongsheng Zhang ◽  
Zhenchao Zhang ◽  
...  

Time delay and integration (TDI) charge-coupled device (CCD) is an image sensor for capturing images of moving objects at low light levels. This study examines the model construction of stitched TDI CCD original multi-slice images. Traditional approaches include the image-space-oriented algorithm and the object-space-oriented algorithm. The former offers concise principles and high efficiency, but the panoramic stitching images it generates lack a clear geometric relationship. Conversely, although the object-space-oriented algorithm generates an image with a clear geometric relationship, it is time-consuming due to its complicated and intensive computation. In this study, we developed a multi-slice satellite image stitching and geometric model construction method consisting of three major steps. First, block adjustment of the multi-slice images is performed with the aid of high-precision reference data to obtain a bias-corrected rational function model (RFM) for each original slice image. Second, the panoramic stitching image is generated by establishing the image coordinate conversion relationship from the panoramic stitching image to the original multi-slice images. Finally, the panoramic stitching image is divided uniformly into image grids, and the established coordinate conversion relationship together with the bias-corrected slice-image RFMs is used to generate a virtual control grid from which the RFM of the panoramic stitching image is constructed. To evaluate the performance, we conducted experiments using Tianhui-1 (TH-1) high-resolution images and Ziyuan-3 (ZY-3) triple linear-array images. The experimental results show that, compared with the object-space-oriented algorithm, the stitching accuracy loss of the generated panoramic stitching image was only 0.2 pixels, with a mean value of 0.799798 pixels, meeting the sub-pixel stitching requirement. The RFM positioning difference of the panoramic stitching image relative to the object-space-oriented algorithm was within 0.3 m, achieving equal positioning accuracy.
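The virtual-control-grid idea in the final step can be illustrated with a drastically reduced stand-in: a first-order rational function model (real RPCs use third-order polynomials with 78 coefficients per coordinate) fitted by linear least squares to grid points whose image coordinates are produced here by a known toy model in place of the bias-corrected slice-image RFMs. All coefficients below are hypothetical.

```python
import numpy as np

def fit_rfm(ground, p):
    """Fit a first-order rational function model
        p = (a0 + a1 X + a2 Y + a3 Z) / (1 + b1 X + b2 Y + b3 Z)
    to virtual control points by linearising  Num - p * Den = 0,
    which is linear in the seven unknown coefficients."""
    X, Y, Z = ground.T
    A = np.column_stack([np.ones_like(X), X, Y, Z, -p * X, -p * Y, -p * Z])
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)
    return coef

def apply_rfm(coef, ground):
    """Evaluate the fitted rational model at ground points."""
    X, Y, Z = ground.T
    num = coef[0] + coef[1] * X + coef[2] * Y + coef[3] * Z
    den = 1.0 + coef[4] * X + coef[5] * Y + coef[6] * Z
    return num / den

# Virtual control grid: regular samples in normalised ground coordinates,
# at several height layers, as in the abstract's final step.
grid = np.array([[x, y, z]
                 for x in np.linspace(-1, 1, 5)
                 for y in np.linspace(-1, 1, 5)
                 for z in np.linspace(-1, 1, 3)])
# "Observed" panoramic line coordinate from a known toy rational model.
line = (0.3 + 0.9 * grid[:, 0] - 0.1 * grid[:, 1] + 0.05 * grid[:, 2]) \
       / (1.0 + 0.02 * grid[:, 0])
coef = fit_rfm(grid, line)
max_resid = np.abs(apply_rfm(coef, grid) - line).max()  # near machine zero
```

Because the toy generating model lies within the fitted family, the residual is essentially zero; with real slice-image RFMs the grid residual is what the paper's sub-pixel accuracy figures measure.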



Author(s):  
G. Ye ◽  
J. Pan ◽  
Y. Zhu ◽  
S. Jin

Abstract. Satellite jitter is a random error source that leads to image degradation. This paper proposes a method to detect time-variant jitter using multispectral images, which are adopted for their large overlap, from which a parallax map is obtained. The imaging process is analyzed in detail, and an integration imaging model is constructed that fully accounts for the time-variant nature of the jitter and builds the relationship between object space and image space. Multispectral images of the ZY-3 satellite were used for experiments, and the results show that the presented method obtains the jitter curve with errors in amplitude, frequency, and phase of no more than 0.0591 px, 0.0006 Hz, and 0.007 rad, respectively. These results demonstrate the performance of the presented method in jitter detection.
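A much-simplified stand-in for the jitter estimation, assuming a single sinusoidal jitter component A·sin(2πft + φ) in the parallax sequence: scan candidate frequencies and solve a linear least-squares problem for the sin/cos amplitudes at each one. This illustrates how amplitude, frequency, and phase can be recovered from parallax observations; it is not the paper's integration-imaging-model method.

```python
import numpy as np

def detect_jitter(t, parallax, freqs):
    """Estimate amplitude, frequency, and phase of A*sin(2*pi*f*t + phi)
    from a parallax sequence: for each candidate frequency, solve a linear
    least-squares fit of sin/cos components and keep the best residual."""
    best = None
    for f in freqs:
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t)])
        sol, *_ = np.linalg.lstsq(A, parallax, rcond=None)
        r = np.sum((A @ sol - parallax) ** 2)
        if best is None or r < best[0]:
            best = (r, f, sol)
    _, f, (cs, cc) = best
    # A*sin(x + phi) = A*cos(phi)*sin(x) + A*sin(phi)*cos(x)
    return np.hypot(cs, cc), f, np.arctan2(cc, cs)

t = np.linspace(0.0, 10.0, 500)
parallax = 0.05 * np.sin(2 * np.pi * 0.7 * t + 0.4)  # synthetic jitter curve
amp, freq, phase = detect_jitter(t, parallax, np.arange(0.1, 1.51, 0.1))
```

With noisy real parallax maps, the frequency scan would be refined around the residual minimum rather than read off a coarse grid.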




2008 ◽  
Vol 74 (12) ◽  
pp. 1521-1528 ◽  
Author(s):  
Jen-Jer Jaw ◽  
Nei-Hao Perny


Author(s):  
T. Yanaka ◽  
K. Shirota

It is significant to note the field aberrations of the objective lens (chromatic field aberration, coma, astigmatism, and blurring due to curvature of field, as defined by Glaser's aberration theory relative to the Blenden Freien System) in connection with the following points: field aberrations increase as the resolution of the axial point is improved by increasing the lens excitation (k2) and decreasing the half-width (d) of the axial lens field distribution; and when one or all of the imaging lenses have axial imperfections, such as beam deflection in image space caused by asymmetrical magnetic leakage flux, the apparent axial point exhibits field aberrations that prevent the theoretical resolution limit from being attained.



Author(s):  
W.J.T. Mitchell

A translation of the text "Image, Space, and Revolution. The Arts of Occupation", published in Critical Inquiry (2012, no. 39).


