Investigation of Factors Influencing Calibration Accuracy of Camera

2013 ◽  
Vol 712-715 ◽  
pp. 2331-2335
Author(s):  
Jian Hua Wang ◽  
Yu Ping Wu ◽  
Zhao Yang

Camera calibration is the basis of vision-based 3D measurement. Although many calibration methods have been proposed, the practical problem of how to obtain accurate calibration parameters is seldom addressed in the literature. This paper investigates the main factors influencing calibration accuracy, including the manufacturing error of the calibration rig, the extraction error of the control points, and their combination. Based on a popular calibration method, simulation experiments are conducted at different error levels, and the results show that the control-point extraction error has a greater effect on calibration accuracy than the manufacturing error of the calibration rig. Manufacturing tolerances for the calibration rig and extraction tolerances for the control points are suggested to satisfy typical machine vision applications.
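
A minimal simulation sketch in the spirit of this abstract, assuming OpenCV and NumPy: synthetic control points are perturbed by a rig manufacturing error (3D) and a control-point extraction error (2D) before calibration, and the recovered focal length is compared against the ground truth. All noise levels, the camera model and the board geometry are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np
import cv2

K_true = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist_true = np.zeros(5)
grid = np.mgrid[0:7, 0:5].T.reshape(-1, 2).astype(np.float32)
obj_template = np.hstack([grid * 20.0, np.zeros((grid.shape[0], 1), np.float32)])  # 7x5 board, 20 mm pitch

def simulate(rig_sigma_mm, extract_sigma_px, n_views=12, seed=0):
    rng = np.random.default_rng(seed)
    obj_pts, img_pts = [], []
    for _ in range(n_views):
        rvec = rng.normal(0.0, 0.3, 3)                        # random board orientation
        tvec = np.array([-60.0, -40.0, 400.0]) + rng.normal(0.0, 30.0, 3)
        # manufacturing error: the real rig deviates from its nominal geometry
        obj_real = obj_template + rng.normal(0.0, rig_sigma_mm, obj_template.shape)
        proj, _ = cv2.projectPoints(obj_real.astype(np.float32), rvec, tvec, K_true, dist_true)
        # extraction error: noise on the detected 2D control points
        img = proj.reshape(-1, 2) + rng.normal(0.0, extract_sigma_px, (len(obj_template), 2))
        obj_pts.append(obj_template.astype(np.float32))       # calibration uses the nominal geometry
        img_pts.append(img.astype(np.float32))
    _, K_est, _, _, _ = cv2.calibrateCamera(obj_pts, img_pts, (640, 480), None, None)
    return abs(K_est[0, 0] - K_true[0, 0])                    # focal-length error as an accuracy proxy

print("rig error only  :", simulate(rig_sigma_mm=0.05, extract_sigma_px=0.0))
print("extraction only :", simulate(rig_sigma_mm=0.0,  extract_sigma_px=0.5))
print("both combined   :", simulate(rig_sigma_mm=0.05, extract_sigma_px=0.5))
```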

2011 ◽  
Vol 411 ◽  
pp. 602-608 ◽  
Author(s):  
Xiang Kui Jiang

In this paper, an improved genetic algorithm applicable to binocular camera calibration is proposed. On the one hand, the conventional encoding method is improved so that the variable search interval can be adjusted adaptively. On the other hand, the crossover and mutation probabilities are varied according to a superiority-inheritance principle to avoid premature convergence. Experimental results show that the proposed method achieves higher calibration accuracy and better robustness than non-linear calibration methods, and that it effectively improves global optimization performance.
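
A simplified sketch of the adaptive genetic-algorithm idea in Python/NumPy. The cost function, parameter bounds and adaptation rules below are hypothetical placeholders standing in for the paper's encoding and superiority-inheritance scheme, not a reproduction of it.

```python
import numpy as np

def reprojection_error(params):
    # placeholder cost standing in for the binocular reprojection error
    return float(np.sum((params - np.array([800.0, 800.0, 320.0, 240.0])) ** 2))

def adaptive_ga(lo, hi, pop_size=60, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pop = rng.uniform(lo, hi, (pop_size, lo.size))
    for _ in range(generations):
        fitness = np.array([reprojection_error(p) for p in pop])
        pop = pop[np.argsort(fitness)]                  # best individuals first (elitism)
        rank = np.arange(pop_size) / pop_size
        p_cross = 0.5 + 0.4 * rank                      # worse individuals recombine more often
        p_mut = 0.01 + 0.10 * rank                      # ... and mutate more often
        children = pop.copy()
        for i in range(1, pop_size):
            if rng.random() < p_cross[i]:
                mate = pop[rng.integers(0, i)]          # crossover with a better parent
                children[i] = 0.5 * (pop[i] + mate)
            mask = rng.random(lo.size) < p_mut[i]
            children[i, mask] += rng.normal(0.0, 0.05 * (hi - lo)[mask])
        # adaptively shrink the search interval around the current best solution
        span = 0.5 * (hi - lo) * 0.99
        lo, hi = pop[0] - span, pop[0] + span
        pop = np.clip(children, lo, hi)
    fitness = np.array([reprojection_error(p) for p in pop])
    return pop[np.argmin(fitness)]

best = adaptive_ga(lo=[500, 500, 200, 150], hi=[1200, 1200, 500, 400])
print("estimated parameters:", best)
```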


2018 ◽  
Vol 10 (8) ◽  
pp. 1298 ◽  
Author(s):  
Lei Yin ◽  
Xiangjun Wang ◽  
Yubo Ni ◽  
Kai Zhou ◽  
Jilong Zhang

Multi-camera systems are widely used in the fields of airborne remote sensing and unmanned aerial vehicle imaging. The measurement precision of these systems depends on the accuracy of the extrinsic parameters, so it is important to accurately calibrate the extrinsic parameters between the onboard cameras. Unlike conventional multi-camera calibration methods that rely on a common field of view (FOV), multi-camera calibration without overlapping FOVs presents certain difficulties. In this paper, we propose a calibration method for a multi-camera system without common FOVs, intended for aerial photogrammetry. First, the extrinsic parameters of any two cameras in the multi-camera system are calibrated, and the extrinsic matrix is optimized using the re-projection error. Then, the extrinsic parameters of each camera are unified to the system reference coordinate system by a global optimization method. A simulation experiment and a physical verification experiment are designed to validate the approach. The experimental results show that the method is practical: the rotation error angle of the camera extrinsic parameters is less than 0.001 rad and the translation error is less than 0.08 mm.
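
A minimal sketch (NumPy/SciPy) of the chaining and unification step: pairwise extrinsics are composed as homogeneous transforms and expressed in the reference camera's coordinate system. The transform values are illustrative assumptions, and the paper's global optimization of these estimates is not shown.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def to_homogeneous(rot, t):
    T = np.eye(4)
    T[:3, :3] = rot
    T[:3, 3] = t
    return T

# Hypothetical pairwise results: T_ij maps coordinates in camera j into camera i.
T_01 = to_homogeneous(R.from_euler('y',  2.0, degrees=True).as_matrix(), [0.15, 0.00, 0.00])
T_12 = to_homogeneous(R.from_euler('y', -3.0, degrees=True).as_matrix(), [0.12, 0.01, 0.00])

# Unify to the reference camera (camera 0): T_0k = T_01 @ T_12 @ ... @ T_(k-1)k
T_02 = T_01 @ T_12
extrinsics_in_reference = {0: np.eye(4), 1: T_01, 2: T_02}

# A point observed in camera 2's frame, expressed in the reference frame:
p_cam2 = np.array([0.0, 0.0, 5.0, 1.0])
p_ref = extrinsics_in_reference[2] @ p_cam2
print(p_ref[:3])
```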


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4643
Author(s):  
Sang Jun Lee ◽  
Jeawoo Lee ◽  
Wonju Lee ◽  
Cheolhun Jang

In intelligent vehicles, extrinsic camera calibration should preferably be conducted on a regular basis to deal with unpredictable mechanical changes or variations in weight-load distribution. Specifically, high-precision extrinsic parameters between the camera coordinate system and the world coordinate system are essential for implementing high-level functions in intelligent vehicles, such as distance estimation and lane departure warning. However, conventional calibration methods, which solve a Perspective-n-Point (PnP) problem, require laborious work to measure the positions of 3D points in the world coordinate system. To reduce this inconvenience, this paper proposes an automatic camera calibration method based on 3D reconstruction. The main contribution of this paper is a novel reconstruction method that recovers 3D points on planes perpendicular to the ground. The proposed method jointly optimizes the reprojection errors of image features projected from multiple planar surfaces and thereby significantly reduces errors in the camera extrinsic parameters. Experiments were conducted in synthetic simulation and real calibration environments to demonstrate the effectiveness of the proposed method.
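
For context, a minimal sketch of the conventional PnP-based extrinsic calibration that this paper aims to automate, assuming OpenCV; the intrinsics, the measured 3D world points and the corresponding detections are synthetic placeholders. The reprojection error printed at the end is the quantity the proposed reconstruction-based method optimizes jointly over multiple planar surfaces.

```python
import numpy as np
import cv2

K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Hypothetical 3D points measured in the world frame (metres): four on the ground,
# two on a structure perpendicular to it.
world_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                      [0, 0, 1], [1, 0, 1]], dtype=np.float32)

# Synthetic detections generated from a known reference pose (stand-ins for real detections).
rvec_true, tvec_true = np.array([0.10, -0.20, 0.05]), np.array([0.30, -0.10, 4.00])
image_pts, _ = cv2.projectPoints(world_pts, rvec_true, tvec_true, K, dist)
image_pts = image_pts.reshape(-1, 2).astype(np.float32)

# Conventional extrinsic calibration: solve the Perspective-n-Point problem.
ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
R_wc, _ = cv2.Rodrigues(rvec)                     # rotation from world to camera coordinates
print("R =\n", R_wc, "\nt =", tvec.ravel())

# Reprojection error of the recovered extrinsics.
proj, _ = cv2.projectPoints(world_pts, rvec, tvec, K, dist)
err = np.linalg.norm(proj.reshape(-1, 2) - image_pts, axis=1)
print("mean reprojection error [px]:", err.mean())
```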


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6717
Author(s):  
Yunfeng Ran ◽  
Qixin He ◽  
Qibo Feng ◽  
Jianying Cui

Line-structured light has been widely used in the field of railway measurement owing to its strong resistance to interference, fast scanning speed and high accuracy. Traditional calibration methods for line-structured light sensors have the disadvantages of long calibration time and a complicated calibration process, which makes them unsuitable for railway field applications. In this paper, a fast calibration method based on a self-developed calibration device is proposed. Compared with traditional methods, the calibration process is simplified and the calibration time is greatly shortened. The method does not need to extract light stripes; thus, the influence of ambient light on the measurement is reduced. In addition, the calibration error resulting from misalignment is corrected using the epipolar constraint, which improves the calibration accuracy. Calibration experiments in the laboratory and field tests were conducted to verify the effectiveness of the method, and the results showed that the proposed method achieves better calibration accuracy than a traditional approach based on Zhang's method.
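
A small sketch of the epipolar-constraint correction idea, assuming OpenCV: a rank-2 fundamental matrix is built from an assumed relative pose, and slightly misaligned point pairs are moved onto their epipolar lines. The intrinsics, relative pose and point pairs are hypothetical, not the sensor's calibration data.

```python
import numpy as np
import cv2

# Hypothetical intrinsics and relative pose between the two views.
K = np.array([[900.0, 0.0, 320.0], [0.0, 900.0, 240.0], [0.0, 0.0, 1.0]])
R_rel, _ = cv2.Rodrigues(np.array([0.0, 0.05, 0.0]))
t = np.array([0.20, 0.00, 0.01])
t_x = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])
F = np.linalg.inv(K).T @ (t_x @ R_rel) @ np.linalg.inv(K)     # rank-2 by construction

# Slightly misaligned correspondences (shape 1 x N x 2, as correctMatches expects).
pts1 = np.array([[[320.4, 241.7], [410.2, 255.3]]])
pts2 = np.array([[[338.9, 240.9], [429.1, 254.6]]])

# correctMatches moves each pair minimally so that x2^T F x1 = 0 holds.
new1, new2 = cv2.correctMatches(F, pts1, pts2)
x1 = np.append(new1[0, 0], 1.0)
x2 = np.append(new2[0, 0], 1.0)
print("epipolar residual before:", np.append(pts2[0, 0], 1.0) @ F @ np.append(pts1[0, 0], 1.0))
print("epipolar residual after :", x2 @ F @ x1)
```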


2022 ◽  
pp. 1-20
Author(s):  
Shiyu Bai ◽  
Jizhou Lai ◽  
Pin Lyu ◽  
Yiting Cen ◽  
Bingqing Wang ◽  
...  

Determination of the calibration parameters is essential for the fusion performance of an inertial measurement unit (IMU) and odometer integrated navigation system. Traditional calibration methods are commonly based on a filtering framework, which limits the improvement of calibration accuracy. This paper proposes a graph-optimisation-based self-calibration method for the IMU/odometer using preintegration theory. Unlike existing preintegration schemes, the complete IMU/odometer preintegration model is derived, taking into account the effects of the odometer scale factor and the attitude and position misalignments between the IMU and the odometer. The calibration is then implemented by graph optimisation. Tests on the KITTI dataset and field experiments are carried out to evaluate the effectiveness of the proposed method. The results illustrate that the proposed method outperforms the filter-based calibration method, and that the proposed IMU/odometer preintegration model performs best compared with traditional preintegration models.
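
A heavily simplified sketch of the IMU/odometer preintegration idea in Python/SciPy, accumulating a relative rotation and translation between two keyframes while applying an odometer scale factor and a lever-arm (misalignment) correction. This is an illustrative stand-in for the complete model derived in the paper, and the data are synthetic.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def preintegrate(gyro, odom_speed, dt, scale=1.0, lever_arm=np.zeros(3)):
    """Accumulate relative rotation and translation from gyro rates [rad/s]
    and odometer forward speed [m/s] sampled at interval dt [s]."""
    dR = np.eye(3)                  # accumulated rotation between the two keyframes
    dp = np.zeros(3)                # accumulated translation in the start frame
    for w, v in zip(gyro, odom_speed):
        v_body = np.array([scale * v, 0.0, 0.0])      # odometer measures forward speed
        v_body += np.cross(w, lever_arm)              # simple lever-arm (misalignment) correction
        dp += dR @ v_body * dt
        dR = dR @ R.from_rotvec(np.asarray(w) * dt).as_matrix()
    return dR, dp

# Synthetic data: constant yaw rate with constant speed (a circular arc).
n, dt = 100, 0.01
gyro = [np.array([0.0, 0.0, 0.3])] * n      # 0.3 rad/s yaw rate
speed = [2.0] * n                            # 2 m/s forward
dR, dp = preintegrate(gyro, speed, dt, scale=1.02, lever_arm=np.array([0.1, 0.0, 0.0]))
print("relative yaw [rad]:", R.from_matrix(dR).as_rotvec()[2])
print("relative translation [m]:", dp)
```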


Sensors ◽  
2020 ◽  
Vol 20 (13) ◽  
pp. 3695 ◽  
Author(s):  
Carlos Ricolfe-Viala ◽  
Alicia Esparza

Accurate correction of highly distorted images is a very complex problem. Several lens distortion models exist and are adjusted using different techniques. Usually, regardless of the chosen model, a single distortion model is fitted to undistort images and the camera-to-calibration-template distance is not considered. Several authors have shown the depth dependency of lens distortion, but none of them have treated it for highly distorted images. This paper presents an analysis of the depth dependency of distortion in strongly distorted images. The division model, which can represent high distortion with only one parameter, is modified to yield a depth-dependent high-distortion lens model. The proposed calibration method obtains more accurate results than existing calibration methods.
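
A small worked example of the one-parameter division model and a depth-dependent variant, in Python/NumPy. The depth dependence lambda(z) used below is a hypothetical linear function for illustration, not the paper's fitted model.

```python
import numpy as np

def undistort_division(pts, centre, lam):
    """Division model: x_u = c + (x_d - c) / (1 + lam * r_d^2)."""
    d = pts - centre
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return centre + d / (1.0 + lam * r2)

def lam_at_depth(z, lam0=-4.0e-7, k=1.5e-8):
    # hypothetical depth dependence: distortion weakens as the template moves away
    return lam0 + k * z

centre = np.array([640.0, 360.0])
pts_d = np.array([[100.0, 80.0], [1180.0, 640.0], [640.0, 100.0]])   # distorted pixel coordinates

for z in (0.5, 1.0, 2.0):                      # camera-to-template distance in metres
    pts_u = undistort_division(pts_d, centre, lam_at_depth(z))
    print(f"z = {z} m ->", np.round(pts_u, 1))
```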


Author(s):  
Hidehiko Shishido ◽  
Itaru Kitahara

In sports science research, many topics utilize athletes' body motion extracted by motion capture systems, since motion information is valuable data for improving an athlete's skills. However, one of the unsolved challenges in motion capture is extracting athletes' motion information during an actual game or match, as placing markers on athletes during game play is impractical. In this research, the authors propose a method for acquiring motion information without attaching markers, utilizing computer vision technology. In the proposed method, the three-dimensional world joint positions of an athlete's body can be acquired using just two cameras without any visual markers, and the athlete's three-dimensional joint positions during game play can be obtained without complicated preparation. Camera calibration, which estimates the projective relationship between the three-dimensional world and two-dimensional image space, is one of the principal processes in three-dimensional image processing tasks such as three-dimensional reconstruction and three-dimensional tracking. A strong-calibration method, which requires setting up landmarks with known three-dimensional positions, is a common technique; however, as the target space expands, landmark placement becomes increasingly complicated. Although a weak-calibration method does not need known landmarks, its estimation precision depends on the accuracy of the correspondences between captured images, and when multiple cameras are arranged sparsely, detecting a sufficient number of corresponding points is difficult. In this research, the authors propose a calibration method that bridges multiple sparsely distributed cameras using mobile-camera images. Appropriate spacing between the images was confirmed through comparative experiments that evaluated camera calibration accuracy while changing the number of bridging images. Furthermore, the proposed method was applied to multiple capturing experiments in a large-scale space to verify its robustness. As a relevant example, the proposed method was applied to the three-dimensional skeleton estimation of badminton players, and a quantitative evaluation of the camera calibration was conducted on the resulting three-dimensional skeletons. The reprojection error of each part of the skeletons and its standard deviation were approximately 2.72 mm and 0.81 mm, respectively, confirming that the proposed method achieves high accuracy when applied to camera calibration. Finally, the proposed calibration method was quantitatively compared with a calibration method using the coordinates of eight manually specified points. In conclusion, the proposed method stabilizes calibration accuracy in the vertical direction of the world coordinate system.
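
A sketch of the weak-calibration bridging step, assuming OpenCV: the relative pose between a fixed camera and a mobile bridging image, and between the bridging image and the next fixed camera, are estimated from feature matches and composed. The image file names, the intrinsic matrix and the single-bridge chain are hypothetical simplifications of the described pipeline.

```python
import numpy as np
import cv2

K = np.array([[1200.0, 0.0, 960.0], [0.0, 1200.0, 540.0], [0.0, 0.0, 1.0]])

def relative_pose(img_a, img_b):
    """Estimate the relative pose between two views from SIFT feature matches."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)
    return R, t                      # translation is recovered only up to scale

# Chain: fixed camera A -> bridging mobile image -> fixed camera B (hypothetical files).
cam_a = cv2.imread("camera_A.png", cv2.IMREAD_GRAYSCALE)
bridge = cv2.imread("mobile_bridge.png", cv2.IMREAD_GRAYSCALE)
cam_b = cv2.imread("camera_B.png", cv2.IMREAD_GRAYSCALE)
R_a_bridge, _ = relative_pose(cam_a, bridge)
R_bridge_b, _ = relative_pose(bridge, cam_b)
R_ab = R_bridge_b @ R_a_bridge       # compose rotations along the bridge
print("relative rotation A->B:\n", R_ab)
```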


2010 ◽  
Vol 29-32 ◽  
pp. 2692-2697
Author(s):  
Jiu Long Xiong ◽  
Jun Ying Xia ◽  
Xian Quan Xu ◽  
Zhen Tian

Camera calibration establishes the relationship between 2D coordinates in the image and 3D coordinates in the world. A BP (back-propagation) neural network can model non-linear relationships and was therefore used in this paper to calibrate the camera, avoiding explicit modelling of the camera's non-linear factors. The calibration results are compared with those of Tsai's two-stage method, and the comparison shows that the BP-neural-network-based calibration method improves calibration accuracy.
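
A minimal sketch of BP-neural-network calibration, assuming scikit-learn and NumPy: a back-propagation network learns the mapping from 2D image coordinates to 2D world coordinates on the calibration plane from synthetic correspondences. The projection model, network size and data are illustrative placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
world_xy = rng.uniform(-0.5, 0.5, size=(600, 2))                 # points on the Z = 0 plane [m]

# Synthetic projection (homography with mild radial distortion) standing in for real data.
H = np.array([[900.0, 30.0, 640.0], [-20.0, 880.0, 360.0], [0.02, 0.01, 1.0]])
hom = (H @ np.column_stack([world_xy, np.ones(len(world_xy))]).T).T
uv = hom[:, :2] / hom[:, 2:3]
r2 = np.sum((uv - [640, 360]) ** 2, axis=1, keepdims=True)
uv = [640, 360] + (uv - [640, 360]) * (1 + 1e-7 * r2)            # non-linear lens effect

# Back-propagation network: image (u, v) -> world (X, Y) on the calibration plane.
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), activation='tanh',
                                 max_iter=8000, random_state=0))
net.fit(uv[:500], world_xy[:500])

pred = net.predict(uv[500:])
print("mean plane error [m]:", np.mean(np.linalg.norm(pred - world_xy[500:], axis=1)))
```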

