Estimation of Camera Extrinsic Parameters of Indoor Omni-Directional Images Acquired by a Rotating Line Camera

Author(s):  
Sojung Oh ◽  
Impyeong Lee
2018 ◽  
Vol 10 (8) ◽  
pp. 1298 ◽  
Author(s):  
Lei Yin ◽  
Xiangjun Wang ◽  
Yubo Ni ◽  
Kai Zhou ◽  
Jilong Zhang

Multi-camera systems are widely used in airborne remote sensing and unmanned aerial vehicle imaging. The measurement precision of these systems depends on the accuracy of the extrinsic parameters, so it is important to accurately calibrate the extrinsic parameters between the onboard cameras. Unlike conventional multi-camera calibration with a common field of view (FOV), calibrating cameras without overlapping FOVs poses particular difficulties. In this paper, we propose a calibration method for a multi-camera system without common FOVs, intended for aerial photogrammetry. First, the extrinsic parameters of any two cameras in the system are calibrated, and the extrinsic matrix is refined by minimizing the re-projection error. Then, the extrinsic parameters of each camera are unified into the system reference coordinate system by a global optimization. A simulation experiment and a physical verification experiment were designed to validate the approach. The experimental results show that the method is practical: the rotation error of the camera extrinsic parameters is less than 0.001 rad and the translation error is less than 0.08 mm.
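As an illustration of the pairwise refinement step described above, the sketch below minimizes the reprojection error of known 3D points over one camera's extrinsic parameters. The intrinsic matrix, zero-distortion assumption, and function names are placeholders for illustration, not the paper's implementation.

```python
# Hypothetical sketch: refining a camera's extrinsics (rvec, tvec) by
# minimising reprojection error, as one step of a pairwise calibration.
import numpy as np
import cv2
from scipy.optimize import least_squares

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])          # assumed intrinsic matrix
dist = np.zeros(5)                        # assumed zero lens distortion

def reprojection_residuals(params, obj_pts, img_pts):
    """Residuals between observed and projected 2D points (obj_pts: Nx3 float, img_pts: Nx2 float)."""
    rvec, tvec = params[:3], params[3:6]
    proj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)
    return (proj.reshape(-1, 2) - img_pts).ravel()

def refine_extrinsics(obj_pts, img_pts, rvec0, tvec0):
    """Nonlinear (Levenberg-Marquardt) refinement of an initial extrinsic estimate."""
    x0 = np.hstack([rvec0.ravel(), tvec0.ravel()])
    sol = least_squares(reprojection_residuals, x0,
                        args=(obj_pts, img_pts), method="lm")
    return sol.x[:3], sol.x[3:6]
```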


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4643
Author(s):  
Sang Jun Lee ◽  
Jeawoo Lee ◽  
Wonju Lee ◽  
Cheolhun Jang

In intelligent vehicles, extrinsic camera calibration should preferably be conducted on a regular basis to deal with unpredictable mechanical changes or variations in weight-load distribution. Specifically, high-precision extrinsic parameters between the camera coordinate system and the world coordinate system are essential for high-level functions in intelligent vehicles such as distance estimation and lane-departure warning. However, conventional calibration methods, which solve a Perspective-n-Point problem, require laborious manual measurement of the positions of 3D points in the world coordinate system. To reduce this inconvenience, this paper proposes an automatic camera calibration method based on 3D reconstruction. The main contribution of this paper is a novel reconstruction method that recovers 3D points on planes perpendicular to the ground. The proposed method jointly optimizes the reprojection errors of image features projected from multiple planar surfaces and thereby significantly reduces errors in the camera extrinsic parameters. Experiments were conducted in synthetic simulation and real calibration environments to demonstrate the effectiveness of the proposed method.
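For context, the conventional baseline that this work seeks to automate is a Perspective-n-Point solve from hand-measured world points. The following is a minimal sketch of that baseline; the intrinsic values are assumptions, not from the paper.

```python
# Minimal sketch of PnP-based extrinsic calibration: world-frame 3D points and
# their image projections are assumed to have been measured beforehand.
import numpy as np
import cv2

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])            # assumed intrinsics
dist = np.zeros(5)                          # assumed zero distortion

def camera_to_world_pose(world_pts, image_pts):
    """Solve a Perspective-n-Point problem for the camera extrinsics."""
    ok, rvec, tvec = cv2.solvePnP(world_pts.astype(np.float64),
                                  image_pts.astype(np.float64), K, dist)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)              # world -> camera rotation
    cam_center = -R.T @ tvec                # camera position in the world frame
    return R, tvec, cam_center
```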


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2841
Author(s):  
Mohammad Ali Zaiter ◽  
Régis Lherbier ◽  
Ghaleb Faour ◽  
Oussama Bazzi ◽  
Jean-Charles Noyer

This paper details a new extrinsic calibration method for a scanning laser rangefinder that is based on estimation from the geometrical ground plane. The method remains efficient in the challenging experimental configuration of a high LiDAR inclination angle. In this configuration, calibration of the LiDAR sensor is a key problem in various domains, in particular to guarantee reliable detection of objects on the ground surface. The proposed extrinsic calibration method consists of the following steps: ground-plane fitting, extrinsic parameter estimation (3D orientation angles and altitude), and extrinsic parameter optimization. The results are presented in terms of precision and robustness against variations in the LiDAR's orientation and range accuracy, showing the stability and accuracy of the proposed method, which was validated with both numerical simulation and real data.
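The core idea of ground-plane-based estimation can be sketched as follows: fit a plane to LiDAR ground returns, then read roll, pitch, and altitude from the plane parameters. This is an illustrative simplification under one angle convention; a real pipeline would add RANSAC and the optimization stage described above.

```python
# Illustrative sketch of ground-plane-based extrinsic estimation for a LiDAR.
import numpy as np

def fit_plane(points):
    """Least-squares plane n.x + d = 0 through Nx3 points (SVD on centred data)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    if normal[2] < 0:                      # make the normal point upward
        normal = -normal
    d = -normal @ centroid
    return normal, d

def extrinsics_from_ground_plane(ground_points):
    """Roll and pitch (rad) of the sensor and its altitude above the ground."""
    n, d = fit_plane(ground_points)
    roll = np.arctan2(n[1], n[2])          # one common roll/pitch convention
    pitch = np.arctan2(-n[0], np.hypot(n[1], n[2]))
    altitude = abs(d)                      # distance from the sensor origin to the plane (unit normal)
    return roll, pitch, altitude
```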


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5082 ◽  
Author(s):  
Zhang ◽  
Huang ◽  
Zhao

With the extensive application of RGB-D cameras in robotics, computer vision, and many other fields, accurate calibration of these sensors becomes increasingly critical. However, most existing models for calibrating depth and the relative pose between a depth camera and an RGB camera are not universally applicable to the many different kinds of RGB-D cameras. In this paper, using the collinearity equation and space resection from photogrammetry, we present a new model to correct the depth and calibrate the relative pose between the depth and RGB cameras based on a 3D control field. We establish a rigorous relationship model between the two cameras and then optimize the relative parameters by least-squares iteration. For depth correction, based on the extrinsic parameters related to object space, the reference depths are calculated using the collinearity equation. We then calibrate the depth measurements, taking into account the distortion of pixels in the depth images. We use a Kinect-2 to verify the calibration parameters by registering the depth and color images, and we test the effect of depth correction through 3D reconstruction. Compared to the registration results from a state-of-the-art calibration model, the results obtained with our calibration parameters improve dramatically. Likewise, the 3D reconstruction results show clear improvements after depth correction.
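The verification step, registering depth to color with a calibrated relative pose, amounts to back-projecting each depth pixel, applying the relative transform, and re-projecting into the RGB image. A minimal sketch is below; the intrinsics and the relative pose are placeholder values, not calibration results from the paper.

```python
# Hedged sketch of depth-to-colour registration for one pixel.
import numpy as np

K_depth = np.array([[365.0, 0.0, 256.0], [0.0, 365.0, 212.0], [0.0, 0.0, 1.0]])
K_rgb   = np.array([[1060.0, 0.0, 960.0], [0.0, 1060.0, 540.0], [0.0, 0.0, 1.0]])
R = np.eye(3)                        # assumed depth -> RGB rotation
t = np.array([0.052, 0.0, 0.0])      # assumed depth -> RGB translation (m)

def register_depth_to_rgb(u, v, depth_m):
    """Map one depth pixel (u, v) with metric depth to RGB pixel coordinates."""
    ray = np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    p_depth = ray * depth_m              # 3D point in the depth camera frame
    p_rgb = R @ p_depth + t              # same point in the RGB camera frame
    uv = K_rgb @ p_rgb
    return uv[:2] / uv[2]
```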


2017 ◽  
Vol 29 (3) ◽  
pp. 314-329 ◽  
Author(s):  
Ge Wu ◽  
Duan Li ◽  
Yueqi Zhong ◽  
PengPeng Hu

Purpose
Calibration is a key but cumbersome process in 3D body scanning with multiple depth cameras. The purpose of this paper is to simplify the calibration process by introducing a new method that calibrates the extrinsic parameters of multiple depth cameras simultaneously.
Design/methodology/approach
An improved method based on virtual checkerboards is introduced to enhance accuracy. Laplace coordinates are employed for a point-to-point adjustment that increases the accuracy of the scanned data. A system with eight depth cameras is developed for full-body scanning, and its performance is verified on real scans.
Findings
The agreement of measurements between the scanned human bodies and the real subjects demonstrates the accuracy of the proposed method. The entire calibration process is automatic.
Originality/value
A complete algorithm for a full human body scanning system is introduced in this paper. This is the first publicly reported study on refinement and point-to-point adjustment based on virtual checkerboards for enhancing scanning accuracy.
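One generic way to obtain the relative extrinsics between two depth cameras from corresponding 3D checkerboard corners is the Kabsch/Procrustes rigid alignment shown below. This is a stand-in illustration of the pairwise step, not the paper's virtual-checkerboard or Laplace-coordinate refinement.

```python
# Illustrative rigid alignment of corresponding 3D points from two depth cameras.
import numpy as np

def rigid_transform(src, dst):
    """Best-fit R, t such that dst ~= R @ src + t (both arrays are Nx3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```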


2019 ◽  
Vol 9 (18) ◽  
pp. 3729 ◽  
Author(s):  
Bao ◽  
Tan ◽  
Liu ◽  
Miao

A computer vision method for reading multiple pointer meters is proposed based on inverse perspective mapping. First, the meter scales being measured are used as calibration objects to obtain the extrinsic parameters of the meter plane. Second, the normal vector of the meter plane is derived from the extrinsic parameters, which yields the rotation transformation matrix for the meter image. The acquired meter image is then transformed, using the rotation matrix and the extrinsic parameters, to a view that is parallel to the meter plane and close to the principal point, eliminating the perspective effect in the acquired image. Finally, the transformed image is processed by a visual detection method to obtain the pointer meter readings, improving measurement precision. The measurement results verify the effectiveness and accuracy of the method.
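The perspective-removal step can be illustrated with a pure-rotation homography H = K R K^-1 that warps the image toward a view parallel to the meter face. This is a rough sketch under assumed intrinsics and an assumed aligning rotation; it omits the translation toward the principal point described in the abstract.

```python
# Sketch of inverse-perspective warping via a rotation homography.
import numpy as np
import cv2

K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])              # assumed intrinsics

def fronto_parallel(image, R_align):
    """Warp the image by the rotation that aligns the camera with the meter plane."""
    H = K @ R_align @ np.linalg.inv(K)       # homography induced by pure rotation
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))
```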

