camera orientation
Recently Published Documents

TOTAL DOCUMENTS: 102 (five years: 24)
H-INDEX: 10 (five years: 2)

2021 ◽  
Vol 47 (3) ◽  
pp. 111-117
Author(s):  
Szymon Sobura

The paper deals with the calibration of the non-metric Nikon EOS 6D digital camera with a 50 mm lens, which could be adapted as a potential UAV sensor for aerial inspections. The interior orientation parameters and image errors of the camera were determined by self-calibration in Agisoft Metashape software, solving the image networks obtained from three different test fields: a chessboard field, a professional laboratory field and a spatially diverse research area. The results of the control measurement for the examined object distance of 6 meters do not differ significantly. For the second analyzed object distance of 15 meters, the RMSE of the control measurement was calculated on the basis of the interior orientation elements derived from the laboratory field, the spatial test area and the chessboard field; the obtained results amounted to 7.9, 9.9 and 11.5 mm, respectively. The conducted studies showed that for very precise photogrammetric measurements with the Nikon EOS 6D camera and a 50 mm lens, it is optimal to conduct the calibration in a laboratory test field. The greatest RMSE was recorded for the control images whose interior camera orientation elements were calculated on the basis of the chessboard field. The experiments clearly show a relation between the accuracy of the Nikon EOS 6D camera calibration and the percentage of the frame area filled by the test field, which explains why the weakest calibration results were obtained from the chessboard test field.
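The per-test-field accuracy figures above (7.9, 9.9 and 11.5 mm) are root-mean-square errors of check-point coordinates. A minimal sketch of that computation, with hypothetical residuals rather than the paper's data:

```python
import numpy as np

def rmse(measured, reference):
    """Root-mean-square error between measured and reference 3D points."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))

# Hypothetical check points (metres): reference survey vs photogrammetric result.
reference = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
measured = reference + np.array([[0.003, 0.004, 0.0],
                                 [-0.004, 0.003, 0.0],
                                 [0.0, -0.005, 0.0]])
print(round(rmse(measured, reference) * 1000, 1))  # RMSE in mm → 5.0
```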


2021 ◽  
Vol 10 (8) ◽  
pp. 537
Author(s):  
Zhibin Pan ◽  
Jin Tang ◽  
Tardi Tjahjadi ◽  
Fan Guo

Skyline-based localization for visual geo-location is an important auxiliary localization method that does not rely on a satellite positioning system. Due to their computational complexity, existing panoramic skyline localization methods first narrow the search to a small area using prior knowledge or auxiliary sensors. After correcting the camera orientation using inertial navigation sensors, a fine position is obtained via the skyline. In this paper, a new panoramic skyline localization method is proposed. By clustering the sampling points in the localization area and improving the existing retrieval method, the computational efficiency of panoramic skyline localization is increased fourfold. Furthermore, the camera orientation is estimated accurately from the terrain features in the image. Experimental results show that the proposed method achieves higher localization accuracy and requires less computation over a large area without the aid of external sensors.
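The retrieval step can be pictured as matching an observed skyline elevation profile against profiles pre-rendered from a terrain model at candidate positions. A toy sketch with synthetic profiles (the paper's clustering would additionally compare the query to cluster centroids first, then only search the best cluster):

```python
import numpy as np

def best_candidate(query, candidates):
    """Index of the candidate skyline profile closest to the query (SSD)."""
    q = np.asarray(query, float)
    scores = [np.sum((np.asarray(c, float) - q) ** 2) for c in candidates]
    return int(np.argmin(scores))

rng = np.random.default_rng(0)
# Five candidate positions, each with a 360-sample skyline elevation profile.
profiles = [rng.uniform(0, 30, 360) for _ in range(5)]
# The observed skyline: candidate 3's profile plus measurement noise.
query = profiles[3] + rng.normal(0, 0.5, 360)
print(best_candidate(query, profiles))  # → 3
```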


Author(s):  
D. J. Regner ◽  
J. D. Salazar ◽  
P. V. Buschinelli ◽  
M. Machado ◽  
D. Oliveira ◽  
...  

Abstract. This work describes a control solution for real-time object tracking in images acquired by an RPAS in an object inspection environment. A 3-axis gimbal mechanism controls the orientation of a camera mounted on the RPAS, using the processed image as feedback. The control objective is to keep the target of interest at the center of the image plane. The proposed solution uses a YOLOv3 object detection model to detect the target object and determine, through rotation matrices, the new desired angles that drive the object’s position to the center of the image. For comparison, a linear controller was tuned using a PI algorithm. Both simulation and practical experiments successfully tracked the desired object in real time using YOLOv3 with both control approaches.
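A minimal sketch of the idea, with hypothetical numbers (not the authors' code): the detection's offset from the image centre is converted to pan/tilt angle corrections via the pinhole model, and a simple PI law, as in the linear baseline, drives the gimbal:

```python
import math

def center_offset_angles(cx_px, cy_px, width, height, focal_px):
    """Pan/tilt corrections (rad) that move a detection to the image centre."""
    pan = math.atan2(cx_px - width / 2, focal_px)
    tilt = math.atan2(cy_px - height / 2, focal_px)
    return pan, tilt

class PI:
    """Discrete PI controller: u = kp*e + ki * integral(e)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.acc = kp, ki, dt, 0.0

    def step(self, error):
        self.acc += error * self.dt
        return self.kp * error + self.ki * self.acc

# Detection centred at (960, 400) in a 1280x720 frame, focal length 1000 px.
pan_err, tilt_err = center_offset_angles(960, 400, 1280, 720, 1000.0)
ctrl = PI(kp=0.8, ki=0.1, dt=0.05)
print(round(ctrl.step(pan_err), 4))  # → 0.2493
```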


Author(s):  
T. Meyer ◽  
A. Brunn ◽  
U. Stilla

Abstract. Construction progress documentation is currently of great interest for the AEC (Architecture, Engineering and Construction) branch and BIM (Building Information Modeling). The subject of this work is the geometric accuracy assessment of image-based change detection in indoor environments based on a BIM. Line features usually serve well as geodetic references for solving camera orientation in indoor scenes. However, building edges are never built exactly as planned and are often geometrically generalized for BIM-compliant representation. In this approach, therefore, line correspondences for image-to-model co-registration are treated as statistically uncertain entities, which is essential for dealing with metric confidences in the field of civil engineering and BIM. We present an estimation model for camera pose refinement based on the incidence condition between model edges and corresponding image lines. Geometric accuracies are assigned to the model edges according to the Level of Accuracy (LOA) specification for BIM. The approach is demonstrated in a series of tests using a synthetic image of an indoor BIM. The effects of varying edge detection accuracies on the estimation are investigated, as well as the effects of using model edges of different geometric quality, by adding Gaussian noise to the synthetic observations, each within 100 simulation runs. The results show that the camera orientation can be improved with the presented estimation model as long as the BIM-compliant references meet the conditions of LOA 30 or higher (σ < 7.5 mm).
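The incidence condition at the core of this estimation model says that a model edge, projected into the image, should coincide with its corresponding detected image line. A small sketch (synthetic numbers, not the authors' implementation) of the residual that such an adjustment would minimize: the signed distance of the projected edge endpoints from the image line:

```python
import numpy as np

def line_incidence_residuals(line_hom, endpoints_px):
    """Signed pixel distances of 2D points from a homogeneous image line."""
    l = np.asarray(line_hom, float)
    l = l / np.hypot(l[0], l[1])  # normalise so the residuals are in pixels
    pts = np.asarray(endpoints_px, float)
    return pts[:, 0] * l[0] + pts[:, 1] * l[1] + l[2]

# Detected image line y = 100 in homogeneous form (0, 1, -100); two projected
# model-edge endpoints that miss it by one pixel on either side.
res = line_incidence_residuals([0.0, 1.0, -100.0], [[10.0, 101.0], [400.0, 99.0]])
print(res)  # signed distances: +1 px and -1 px
```

In the paper's setting, each such residual would be weighted according to the LOA sigma of its BIM edge (e.g. σ = 7.5 mm for LOA 30), so uncertain edges contribute less to the pose refinement.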


2021 ◽  
Vol 310 ◽  
pp. 04004
Author(s):  
Vladimir Bezmenov

The angular elements of exterior orientation characterize the position of a camera relative to the coordinate system in which the spatial coordinates of the points of the object under study are determined from the processing of its images. In many cases of aerial photography, e.g. shooting from an unmanned aerial vehicle, as well as in space surveys, the values of the orientation angles can be very large. This paper presents the results of numerical experiments with five different systems of exterior orientation angles (Euler angles). The studies were performed using the coplanarity condition, which is the basis of space forward intersection. For space forward intersection, a model of the errors in determining spatial coordinates was developed for the five systems of camera orientation angles. The numerical experiments simulated the general case of aerial photography from an unmanned aerial vehicle and of a space survey of the Earth. By comparing the root-mean-square errors (RMSE) of the spatial coordinates obtained with the studied systems of exterior orientation angles, the particular features of these systems were revealed. The results of the research will make it possible to determine the spatial coordinates of the points of the studied objects more reliably by photogrammetric methods.
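Different systems of orientation angles are just different Euler-sequence parameterizations of the same rotation matrix. A sketch with illustrative angle values, comparing an omega-phi-kappa (X-Y-Z) sequence against a yaw-pitch-roll (Z-Y-X) sequence recovered from the same matrix:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Camera rotation from an omega-phi-kappa (X-Y-Z) sequence.
omega, phi, kappa = 0.02, -0.05, 1.2
R = rot_x(omega) @ rot_y(phi) @ rot_z(kappa)

# Recover the Z-Y-X (yaw-pitch-roll) angles of the same rotation.
yaw = np.arctan2(R[1, 0], R[0, 0])
pitch = -np.arcsin(R[2, 0])
roll = np.arctan2(R[2, 1], R[2, 2])
R2 = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
print(np.allclose(R, R2))  # → True: same rotation, different angle system
```

Although the rotation is the same, the error propagation from uncertain angles to object coordinates differs between parameterizations, which is what the paper's RMSE comparison examines.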


Author(s):  
L. F. Castanheiro ◽  
A. M. G. Tommaselli ◽  
M. B. Campos ◽  
A. Berveglieri ◽  
G. Santos

Abstract. This paper presents a feasibility study on the use of omnidirectional systems for 3D modelling of agricultural crops, aiming at systematic monitoring. Omnidirectional systems with multiple sensors have been widely used in close-range photogrammetry (CRP) and can be a good alternative for providing data for digital agriculture management. The omnidirectional system used in this work is the GoPro Fusion dual-camera, composed of two fisheye-lens cameras mounted back-to-back, each covering more than 180°. System calibration, camera orientation and 3D reconstruction of a cultivated agricultural area were performed in Agisoft Metashape software. A 360° calibration field based on coded targets (CTs) from Agisoft Metashape was used to calibrate the omnidirectional system. The 3D reconstruction of an orange orchard was performed using fisheye images taken with the GoPro Fusion. The results show the potential of using an omnidirectional system for 3D modelling of agricultural crops, in particular citrus trees. The interior orientation parameters (IOPs) were estimated in Agisoft Metashape with a precision of 9 mm. A 3D reconstruction model of the orange orchard area was obtained with an accuracy of 3.8 cm, which can be considered acceptable for agricultural purposes.
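Fisheye lenses covering more than 180° cannot be described by the pinhole model. A common choice for such lenses is the equisolid-angle projection, r = 2f·sin(θ/2); whether the GoPro Fusion lens follows exactly this model is an assumption made here for illustration:

```python
import numpy as np

def equisolid_radius(theta, f):
    """Image radius (same units as f) of a ray at angle theta off the optical
    axis, under the equisolid-angle fisheye projection r = 2 f sin(theta/2)."""
    return 2.0 * f * np.sin(theta / 2.0)

# A ray at 90° off-axis (the edge of a 180° field of view) lands at
# r = 2 f sin(pi/4) = f * sqrt(2), i.e. still at a finite image radius,
# which is why >180° coverage is possible.
print(np.isclose(equisolid_radius(np.pi / 2, 1.5), 1.5 * np.sqrt(2)))  # → True
```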


2020 ◽  
Vol 1682 ◽  
pp. 012035
Author(s):  
X Zou ◽  
H Y Xu ◽  
K Shi ◽  
X B Fang

2020 ◽  
Vol 12 (16) ◽  
pp. 2600
Author(s):  
Jyun-Ping Jhan ◽  
Jiann-Yeou Rau ◽  
Chih-Ming Chou

The Zengwen desilting tunnel project installed an Elephant Trunk Steel Pipe (ETSP) at the bottom of the reservoir, designed to connect the new bypass tunnel and reach down to the sediment surface. Since the ETSP is huge and its underwater installation is an unprecedented construction method, there are several uncertainties in its dynamic motion during installation. To assure construction safety, a 1:20 ETSP scale model was built to simulate the underwater installation procedure, and its six-degrees-of-freedom (6-DOF) motion parameters were monitored by offline underwater 3D rigid object tracking and photogrammetry. Three cameras were used to form a multicamera system, and several auxiliary devices, such as waterproof housings, tripods, and a waterproof LED, were adopted to protect the cameras and obtain clear images in the underwater environment. However, since it was difficult for the divers to position the cameras and ensure that their fields of view overlapped, the three cameras could only observe the head, middle, and tail parts of the ETSP, respectively, leaving only a small overlap area among the images. It is therefore not possible to apply the traditional approach of forward intersection from multiple images, in which the cameras' positions and orientations have to be calibrated and fixed in advance. Instead, by tracking the 3D coordinates of the ETSP and obtaining the camera orientation information via space resection, we propose a multicamera coordinate transformation and adopt a single-camera relative orientation transformation to calculate the 6-DOF motion parameters. The offline procedure first acquires the 3D coordinates of the ETSP by taking multiposition images with a precalibrated camera in the air, and then uses these 3D coordinates as control points for the space resection of the calibrated underwater cameras. Finally, the 6-DOF of the ETSP is calculated from the camera orientation information through both multi- and single-camera approaches. In this study, we show the results of camera calibration in the air and in the underwater environment, present the 6-DOF motion parameters of the ETSP underwater installation and the reconstructed 4D animation, and compare the differences between the multi- and single-camera approaches.
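Recovering a rigid body's 6-DOF motion from tracked 3D points can be sketched with the generic Kabsch/Horn least-squares fit between two point sets; note this is an illustration with synthetic data, whereas the authors derive camera orientation via space resection with calibrated cameras:

```python
import numpy as np

def rigid_fit(P, Q):
    """Rotation R and translation t such that Q ≈ R @ P + t (least squares)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic targets on the scale model before and after a known motion.
rng = np.random.default_rng(1)
P = rng.uniform(-1, 1, (6, 3))
angle = 0.3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle), np.cos(angle), 0],
               [0, 0, 1]])
Q = P @ Rz.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_fit(P, Q)
print(np.allclose(R, Rz), np.allclose(t, [0.5, -0.2, 1.0]))  # → True True
```

The rotation gives three of the six degrees of freedom (as Euler angles if desired) and the translation the other three.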

