object space
Recently Published Documents


TOTAL DOCUMENTS: 209 (five years: 36)

H-INDEX: 18 (five years: 3)

2021 ◽  
Vol 13 (22) ◽  
pp. 4663
Author(s):  
Longhui Wang ◽  
Yan Zhang ◽  
Tao Wang ◽  
Yongsheng Zhang ◽  
Zhenchao Zhang ◽  
...  

Time delay and integration (TDI) charge-coupled devices (CCDs) are image sensors for capturing images of moving objects at low light levels. This study examines model construction for stitched TDI CCD original multi-slice images. Traditional approaches include the image-space-oriented algorithm and the object-space-oriented algorithm. The former offers concise principles and high efficiency, but the panoramic stitched images it generates lack a clear geometric relationship. Conversely, the object-space-oriented algorithm generates an image with a clear geometric relationship but is time-consuming because of its complicated and intensive computations. In this study, we developed a method for multi-slice satellite image stitching and geometric model construction. The method consists of three major steps. First, high-precision reference data assist block adjustment: the bias-corrected rational function model (RFM) of each original slice image is obtained to perform multi-slice image block adjustment. Second, the panoramic stitched image is generated by establishing the image coordinate conversion from the panoramic stitched image to the original multi-slice images. Finally, the panoramic stitched image is divided uniformly into image grids, and the established coordinate conversion together with the bias-corrected RFMs of the original multi-slice images is used to generate a virtual control grid from which the RFM of the panoramic stitched image is constructed. To evaluate the performance, we conducted experiments using Tianhui-1 (TH-1) high-resolution images and Ziyuan-3 (ZY-3) triple linear-array images. The experimental results show that, compared with the object-space-oriented algorithm, the stitching accuracy loss of the generated panoramic stitched image was only 0.2 pixels, with a mean value of 0.799798 pixels, meeting the sub-pixel stitching requirement. Compared with the object-space-oriented algorithm, the RFM positioning difference of the panoramic stitched image was within 0.3 m, achieving equal positioning accuracy.
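The virtual-control-grid step described above can be sketched in Python. This is a minimal illustration only: `pano_to_slice` and `slice_to_ground` are hypothetical placeholders standing in for the paper's panoramic-to-slice coordinate conversion and the bias-corrected RFM ground projection, not the authors' implementation.

```python
import numpy as np

def virtual_control_grid(width, height, n=10,
                         pano_to_slice=lambda s, l: (s, l),
                         slice_to_ground=lambda s, l: (s * 1e-5, l * 1e-5)):
    """Divide a panoramic stitched image into an n x n grid of nodes and
    project each node to the ground through the original slice image's
    (bias-corrected) RFM.  Both mappings are placeholder assumptions here;
    the returned (sample, line, lon, lat) tuples would serve as virtual
    control points for fitting the panoramic image's own RFM."""
    samples = np.linspace(0, width - 1, n)
    lines = np.linspace(0, height - 1, n)
    grid = []
    for l in lines:
        for s in samples:
            ss, ls = pano_to_slice(s, l)        # panoramic -> slice coordinates
            lon, lat = slice_to_ground(ss, ls)  # slice coordinates -> ground
            grid.append((s, l, lon, lat))
    return np.asarray(grid)
```

Fitting the panoramic RFM to such a grid is then an ordinary least-squares problem over the rational-function coefficients.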


2021 ◽  
Author(s):  
Taicheng Huang ◽  
Yiying Song ◽  
Jia Liu

Abstract Our mind metaphorically represents various objects from the physical world in an abstract and complex high-dimensional object space, with a finite number of orthogonal axes encoding critical object features. However, little is known about which features serve as axes of the object space and critically affect object recognition. Here we asked whether the feature of objects' real-world size constitutes an axis of object space, using deep convolutional neural networks (DCNNs) and three criteria of sensitivity, independence, and necessity that are impractical to examine together with traditional approaches. A principal component analysis of features extracted by the DCNNs showed that objects' real-world size was encoded by an independent axis, and the removal of this axis significantly impaired the DCNNs' performance in recognizing objects. With a mutually inspired paradigm of computational modeling and biological observation, we found that the shape of objects, rather than retinal size, co-occurrence, task demands, or texture features, was necessary for DCNNs and humans to represent the real-world size of objects. In short, our study provides the first evidence supporting the feature of objects' real-world size as an axis of object space, and devises a novel paradigm for future exploration of the structure of object space.
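The "necessity" test, removing one principal axis from a feature matrix and then re-evaluating recognition, can be sketched as follows. This is a generic illustration with synthetic features; the actual DCNN activations and recognition benchmark from the study are not reproduced here.

```python
import numpy as np

def remove_component(features, k):
    """Project the k-th principal component out of a (samples x dims)
    feature matrix -- the kind of axis removal used to test whether an
    axis is *necessary* for a downstream readout."""
    X = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    axis = Vt[k]                          # unit vector of the k-th principal axis
    return X - np.outer(X @ axis, axis)   # remove all variance along that axis
```

After removal, one would retrain or re-run the recognition readout on the reduced features and compare accuracy against the intact features.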


2021 ◽  
pp. 102367
Author(s):  
Charles M. Rackson ◽  
Kyle M. Champley ◽  
Joseph T. Toombs ◽  
Erika J. Fong ◽  
Vishal Bansal ◽  
...  

2021 ◽  
Author(s):  
Taicheng Huang ◽  
Yiying Song ◽  
Jia Liu

Our mind metaphorically represents various objects from the physical world in an abstract and complex high-dimensional object space, with a finite number of orthogonal axes encoding critical object features. Previous fMRI studies have shown that the middle fusiform sulcus in the ventral temporal cortex separates the real-world small-size map from the large-size map. Here we asked whether the feature of objects' real-world size constitutes an axis of object space, using deep convolutional neural networks (DCNNs) and three criteria of sensitivity, independence, and necessity that are impractical to examine together with traditional approaches. A principal component analysis of features extracted by the DCNNs showed that objects' real-world size was encoded by an independent component, and the removal of this component significantly impaired the DCNNs' performance in recognizing objects. By manipulating stimuli, we found that the shape and texture of objects, rather than retinal size, co-occurrence, or task demands, accounted for the representation of real-world size in the DCNNs. A follow-up fMRI experiment further demonstrated that the shape, but not the texture, was used to infer the real-world size of objects in humans. In short, with both computational modeling and empirical human experiments, our study provides the first evidence supporting the feature of objects' real-world size as an axis of object space, and devises a novel paradigm for future exploration of the structure of object space.


2021 ◽  
Author(s):  
Yiyuan Zhang ◽  
Ke Zhou ◽  
Pinglei Bao ◽  
Jia Liu

To achieve the computational goal of rapidly recognizing miscellaneous objects in the environment despite large variations in their appearance, our mind represents objects in a high-dimensional object space that provides separable category information and enables the extraction of the different kinds of information needed at various levels of visual processing. To implement this abstract and complex object space, the ventral temporal cortex (VTC) develops different object-selective regions with a certain topological organization as the physical substrate. However, the principle that governs the topological organization of object selectivities in the VTC remains unclear. Here, equipped with the wiring-cost minimization principle constrained by the wiring length of neurons in the human temporal lobe, we constructed a hybrid self-organizing map (SOM) model as an artificial VTC (VTC-SOM) to explain how the abstract and complex object space is faithfully implemented in the brain. In two in silico experiments with empirical brain imaging and single-unit data, our VTC-SOM predicted the topological structure of fine-scale functional regions (face-, object-, body-, and place-selective regions) and the boundary (i.e., the middle fusiform sulcus) in large-scale abstract functional maps (animate vs. inanimate, real-world large size vs. small size, central vs. peripheral), with no significant loss in functionality (e.g., categorical selectivity, a hierarchy of view-invariant representations). These findings illustrate that the simple principle utilized in our model, rather than multiple hypotheses such as temporal associations, conceptual knowledge, and computational demands together, is apparently sufficient to determine the topological organization of object selectivities in the VTC. In this way, the high-dimensional object space is faithfully implemented on the two-dimensional cortical surface of the brain.
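The core mechanism of a self-organizing map, mapping high-dimensional inputs onto a 2-D sheet so that neighbouring units acquire similar tuning, can be sketched as below. This is a textbook toy SOM, not the authors' hybrid VTC-SOM; in particular, the wiring-length constraint from the paper is omitted, and all parameters are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr=0.5, sigma=2.0, seed=0):
    """Minimal 2-D self-organizing map.  Each input pulls its best-matching
    unit (BMU) and, with Gaussian falloff, that unit's grid neighbours,
    so nearby units end up representing nearby points in feature space."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    ys, xs = np.mgrid[0:h, 0:w]
    for epoch in range(epochs):
        decay = np.exp(-epoch / epochs)   # shrink learning rate and neighbourhood
        for x in data[rng.permutation(len(data))]:
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(d.argmin(), d.shape)   # BMU coordinates
            nb = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2)
                        / (2 * (sigma * decay) ** 2))
            weights += (lr * decay) * nb[..., None] * (x - weights)
    return weights
```

After training, plotting which category best drives each unit yields the kind of topological selectivity map the abstract describes.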


Author(s):  
M. Maboudi ◽  
A. Elbillehy ◽  
Y. Ghassoun ◽  
M. Gerke

Abstract. Accurate image-based measurement from UAV data is attracting attention in various applications. While the external accuracy of a UAV image block is mainly affected by object-space information such as the number and distribution of ground control points and the RTK-GNSS accuracy, its internal accuracy depends largely on the camera specifications, flight height, data-capturing setup, and the accuracy of scale estimation. Many small-scale projects demand accurate local measurements, which requires a high internal accuracy of the image block; this accuracy can be transferred from model space to object space by accurate estimation of the scale parameter. This research aims at improving the internal accuracy of UAV image blocks using low-altitude flights over small parts of the project area without using any ground control points. A possible further improvement from calibrated scale-bars serving as scale constraints is also investigated. To this end, different scenarios of flight configuration and distance measurement in the two photogrammetric blocks are considered and the results are analyzed. Our investigations show a 50% accuracy improvement achieved by performing local flights over small parts of the scene, given that RTK information is available. Moreover, adding accurate scale-bars increases the improvement to 67%. Furthermore, when RTK information is not available, adding local low-altitude flights and scale-bars decreases the error of local distance measurement from 1–3 meters to less than 4 centimeters.
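The scale-transfer idea, estimating one scale factor from calibrated scale-bars and applying it to model-space distances, reduces to a one-parameter least-squares fit. The sketch below is a generic illustration of that step, not the authors' processing chain; the example lengths are made up.

```python
def scale_from_scalebars(pairs):
    """Least-squares scale factor s minimising sum (c - s*m)^2 over
    (model_length m, calibrated_length c) scale-bar pairs."""
    num = sum(m * c for m, c in pairs)
    den = sum(m * m for m, _ in pairs)
    return num / den

def to_object_space(model_distance, s):
    """Transfer a model-space distance to object-space units."""
    return model_distance * s

# Hypothetical example: two bars of 1 m and 2 m measured in model units.
s = scale_from_scalebars([(0.512, 1.0), (1.024, 2.0)])
```

With several bars, the residuals of this fit also give a quick check on the internal consistency of the block's scale.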


Author(s):  
H. Hastedt ◽  
T. Luhmann ◽  
H.-J. Przybilla ◽  
R. Rofallski

Abstract. For optical 3D measurements in close-range and UAV applications, the modelling of interior orientation is of paramount importance for subsequently achieving high precision and accuracy in geometric 3D reconstruction. Nowadays, modern camera systems are often used for optical 3D measurements because of UAV payload limits and economic considerations. They are constructed of aspheric and spherical lens combinations and include image pre-processing, such as low-pass filtering or internal distortion corrections, that may lead to effects in image space not covered by the standard interior orientation models. In a variety of structure-from-motion (SfM) data sets, four typical systematic patterns of residuals could be observed. These investigations focus on the evaluation of interior orientation modelling with respect to minimising the systematics present in image space after bundle adjustment. The influences are evaluated with respect to changes in the interior and exterior orientation parameters and their correlations, as well as the impact in object space. Across the data sets, camera/lens/platform configurations and pre-processing influences, these investigations reveal a number of different behaviours. For a selection of the data sets, specific advice on the use of extended interior orientation models, such as Fourier series, could be derived. Significant reductions of image-space systematics are achieved. Even though increased standard deviations and correlations of the interior orientation parameters are a consequence, improvements in object-space precision and image-space reliability could be reached.
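One way to read "extended interior orientation models, such as Fourier series" is a standard polynomial radial distortion curve augmented with periodic terms to soak up residual systematics. The sketch below shows that idea in a generic form; the exact parameterisation used in the paper is not specified here, so the Fourier term form, its normalisation radius `r0`, and the coefficients `a`, `b` are all assumptions.

```python
import numpy as np

def radial_distortion(r, k1, k2, a=(), b=(), r0=1.0):
    """Brown-style radial distortion dr = k1*r^3 + k2*r^5, optionally
    extended with an (assumed) Fourier series in the radial distance r,
    normalised by a reference radius r0, to model residual systematics
    that the polynomial terms cannot absorb."""
    dr = k1 * r ** 3 + k2 * r ** 5
    for n, (an, bn) in enumerate(zip(a, b), start=1):
        dr += an * np.sin(n * np.pi * r / r0) + bn * np.cos(n * np.pi * r / r0)
    return dr
```

The trade-off the abstract notes appears here directly: every extra `(a, b)` pair adds a parameter that correlates with the polynomial terms in the bundle adjustment.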


2021 ◽  
Vol 7 (6) ◽  
pp. 96
Author(s):  
Alessandro Rossi ◽  
Marco Barbiero ◽  
Paolo Scremin ◽  
Ruggero Carli

Industrial 3D models are usually characterized by a large number of hidden faces, and simplifying them is very important. Visible-surface determination methods provide one of the most common solutions to the visibility problem. This study presents a robust technique to address the global visibility problem in object space that guarantees theoretical convergence to the optimal result. More specifically, we propose a strategy that, in a finite number of steps, determines whether each face of the mesh is globally visible. The proposed method is based on Plücker coordinates, which provide an efficient way to determine the intersection between a ray and a triangle. The algorithm requires no pre-calculations such as estimating the normal of each face, which makes it resilient to the orientation of normals. We compared the performance of the proposed algorithm against a state-of-the-art technique. Results showed that our approach is more robust in terms of convergence to the maximum lossless compression.
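The Plücker-coordinate intersection test the abstract relies on can be sketched as follows: a line hits a triangle when it passes the three edges with a consistent orientation, as measured by the permuted inner product of the Plücker coordinates. This is the classical side-operator test, kept as a line-vs-triangle predicate for brevity (a full ray test would additionally check the parameter along the ray); it is not the authors' complete algorithm.

```python
import numpy as np

def plucker(p, q):
    """Plücker coordinates (direction, moment) of the line through p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return q - p, np.cross(p, q)

def side(l1, l2):
    """Permuted inner product; its sign tells on which side of l2 line l1 passes."""
    d1, m1 = l1
    d2, m2 = l2
    return np.dot(d1, m2) + np.dot(d2, m1)

def line_hits_triangle(orig, dest, tri):
    """True iff the line through orig and dest passes the triangle's three
    edges with the same orientation (all sides >= 0 or all <= 0)."""
    ray = plucker(orig, dest)
    a, b, c = tri
    s = [side(ray, plucker(a, b)),
         side(ray, plucker(b, c)),
         side(ray, plucker(c, a))]
    return all(v >= 0 for v in s) or all(v <= 0 for v in s)
```

Because the test only compares orientation signs along the edge loop, it gives the same answer if all face normals are flipped, which is the resilience to normal orientation the abstract mentions.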


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ka Zhang ◽  
Wen Xiao ◽  
Yehua Sheng ◽  
Junshu Wang ◽  
Shan Zhang ◽  
...  

Abstract. In aerial multi-view photogrammetry, is there a special positional distribution pattern among the candidate homologous pixels of a matching pixel in multi-view images? If so, can this positional pattern be used to precisely confirm the real homologous pixels? These questions have not been studied to date. Therefore, the positional distribution pattern among candidate homologous pixels is investigated in this paper on the basis of adjustment theory in surveying. Firstly, the definition and computation of a pixel's pseudo object-space coordinates are given, which transform the multi-view matching problem of confirming real homologous pixels into the surveying adjustment problem of computing the pseudo object-space coordinates of the matching pixel. Secondly, according to surveying adjustment theory, the standardized residual of each candidate homologous pixel of the matching pixel is computed, and the positional distribution pattern among these candidate pixels is theoretically inferred using the standardized residual as a quantitative index. Lastly, actual aerial images acquired by different sensors are used to verify the theoretical inference experimentally. The experimental results prove not only that there is a specific positional distribution pattern among candidate homologous pixels, but also that this pattern can be used to develop a new object-side multi-view image matching method. The proposed study provides an important reference for resolving, at the mechanism level, the defects of existing image-side multi-view matching methods.
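Standardized residuals in a least-squares adjustment, the quantitative index the abstract uses, can be computed as sketched below. This is the textbook Gauss-Markov form with unit weights (Baarda-style data snooping), not the paper's specific observation equations; in the paper's setting, the observations would come from candidate homologous pixels and the unknowns would be the matching pixel's pseudo object-space coordinates.

```python
import numpy as np

def standardized_residuals(A, l):
    """Least-squares adjustment l = A x + v (unit weights); returns each
    observation's standardized residual v_i / (s0 * sqrt(Qvv_ii)).
    Large values flag observations inconsistent with the adjustment,
    e.g. false candidate homologous pixels."""
    n, u = A.shape
    N_inv = np.linalg.inv(A.T @ A)       # cofactor matrix of the unknowns
    x = N_inv @ A.T @ l                  # adjusted unknowns
    v = l - A @ x                        # residuals
    Qvv = np.eye(n) - A @ N_inv @ A.T    # cofactor matrix of the residuals
    s0 = np.sqrt(v @ v / (n - u))        # a posteriori unit-weight sigma
    return v / (s0 * np.sqrt(np.diag(Qvv)))
```

In a toy line fit with one gross error, the outlying observation gets the largest standardized residual, which is the separation mechanism the abstract exploits.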


2021 ◽  
Vol 87 (5) ◽  
pp. 375-384
Author(s):  
Letícia Ferrari Castanheiro ◽  
Antonio Maria Garcia Tommaselli ◽  
Adilson Berveglieri ◽  
Mariana Batista Campos ◽  
José Marcato Junior

Omnidirectional systems composed of two hyperhemispherical lenses (dual-fish-eye systems) are gaining popularity, but only a few works have studied suitable models for hyperhemispherical lenses and dual-fish-eye calibration. In addition, the effects of using points in the hyperhemispherical field of view in photogrammetric procedures have not been addressed. This article presents a comparative analysis of the fish-eye models (equidistant, equisolid-angle, stereographic, and orthogonal) for hyperhemispherical-lens and dual-fish-eye calibration techniques. The effects of adding points beyond a 180° field of view in dual-fish-eye calibration using stability constraints on the relative orientation parameters are also assessed. The experiments were performed with the Ricoh Theta dual-fish-eye system, which is composed of two fish-eye lenses with a field of view of approximately 190° each. The equisolid-angle model presented the best results in the simultaneous calibration experiments. An accuracy of approximately one pixel in object-space units was achieved, showing the potential of the proposed approach for close-range applications.
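The four fish-eye models compared in the article are standard closed-form mappings from the incidence angle θ to the radial image distance r. A minimal sketch of all four (focal length f and angles are illustrative; no lens-specific distortion terms are included):

```python
import numpy as np

def fisheye_radius(theta, f, model):
    """Radial image distance r for incidence angle theta (radians) and
    focal length f under the four classical fish-eye projection models."""
    return {
        "equidistant":   f * theta,                  # r = f * theta
        "equisolid":     2 * f * np.sin(theta / 2),  # r = 2f * sin(theta/2)
        "stereographic": 2 * f * np.tan(theta / 2),  # r = 2f * tan(theta/2)
        "orthogonal":    f * np.sin(theta),          # r = f * sin(theta)
    }[model]
```

Note that the orthogonal model cannot distinguish angles beyond 90° (sin θ folds back), whereas the equidistant and equisolid-angle models remain monotonic past 90°, which matters for hyperhemispherical lenses with a roughly 190° field of view.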

