Comparison and error analysis of the standard pin-hole and Scheimpflug camera calibration models

Author(s):  
Aritz Legarda ◽  
Alberto Izaguirre ◽  
Nestor Arana ◽  
Aitzol Iturrospe

2020 ◽  
Vol 2020 (16) ◽  
pp. 150-1-150-6
Author(s):  
Paul Romanczyk

Camera-based advanced driver-assistance systems (ADAS) require the mapping from image coordinates into world coordinates to be known. The process of computing that mapping is geometric calibration. This paper provides a series of tests that may be used to assess the goodness of the geometric calibration and compare model forms:
1. Image Coordinate System Test: validation that different teams are using the same image coordinates.
2. Reprojection Test: validation of a camera's calibration by forward projecting targets through the model onto the image plane.
3. Projection Test: validation of a camera's calibration by inverse projecting points through the model out into the world.
4. Triangulation Test: validation of a multi-camera system's ability to locate a point in 3D.
The potential configurations for these tests are driven by automotive use cases. These tests enable comparison and tuning of different calibration models for an as-built camera.
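The reprojection test described above can be sketched in a few lines: forward-project known 3D targets through a camera model and summarize the pixel residuals. This is a minimal illustration using an ideal pinhole model and synthetic values (the intrinsics and target coordinates below are hypothetical, not from the paper):

```python
import math

def project(point_w, fx, fy, cx, cy):
    """Forward-project a camera-frame 3D point through an ideal pinhole model."""
    X, Y, Z = point_w
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

def rms_reprojection_error(points_3d, points_2d, fx, fy, cx, cy):
    """RMS distance between projected targets and their measured image locations."""
    sq = 0.0
    for pw, (u_obs, v_obs) in zip(points_3d, points_2d):
        u, v = project(pw, fx, fy, cx, cy)
        sq += (u - u_obs) ** 2 + (v - v_obs) ** 2
    return math.sqrt(sq / len(points_3d))

# Synthetic example: observations generated by the same model, so the
# residual is zero; with a mismatched model it would be positive.
fx = fy = 1000.0
cx, cy = 640.0, 360.0
pts3d = [(0.1, 0.0, 2.0), (0.0, 0.1, 2.5), (-0.1, -0.1, 3.0)]
pts2d = [project(p, fx, fy, cx, cy) for p in pts3d]
print(rms_reprojection_error(pts3d, pts2d, fx, fy, cx, cy))  # → 0.0
```

In practice the 2D observations come from detected calibration targets, and a nonzero RMS quantifies how well the fitted model explains them.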


2018 ◽  
Vol 38 (8) ◽  
pp. 0815009
Author(s):  
孙聪 Sun Cong ◽  
刘海波 Liu Haibo ◽  
陈圣义 Chen Shengyi ◽  
尚洋 Shang Yang

2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Mingpei Liang ◽  
Xinyu Huang ◽  
Chung-Hao Chen ◽  
Gaolin Zheng ◽  
Alade Tokuta

Cameras with telephoto lenses are usually used to recover details of an object that is either small or located far from the camera. However, the calibration of such cameras is not as accurate as that of the short-focal-length cameras commonly used in many vision applications. This paper makes two contributions. First, we present a first-order error analysis that shows the relation between focal length and the estimation uncertainties of the camera parameters. To our knowledge, this error analysis with respect to focal length has not previously been studied in the area of camera calibration. Second, we propose a robust algorithm to calibrate a camera with a long focal length without using additional devices. By adding a regularization term, our algorithm makes the estimation of the image of the absolute conic well posed. As a consequence, the covariance of the camera parameters can be greatly reduced. We further use simulations and real data to verify the proposed algorithm and obtain very stable results.
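One intuition for why long-focal-length calibration is ill posed (a simplified illustration, not the paper's first-order analysis): in the pinhole projection u = f·X/Z, scaling focal length f and depth Z together leaves the image unchanged, and for a narrow field of view the data constrain this trade-off only weakly. The values below are hypothetical:

```python
def project_u(X, Z, f):
    """1-D pinhole projection: lateral offset X at depth Z, focal length f."""
    return f * X / Z

X = 0.05                                   # lateral offset of a target (m)
u_true = project_u(X, Z=10.0, f=5000.0)    # "true" camera
u_alt  = project_u(X, Z=11.0, f=5500.0)    # f and Z scaled by the same factor
# The two parameter sets produce the same observation, so they cannot
# be distinguished from image measurements alone.
print(round(abs(u_true - u_alt), 9))  # → 0.0
```

A regularization term breaks exactly this kind of near-degeneracy by penalizing solutions far from a prior, which is the role the paper's regularizer plays for the image of the absolute conic.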


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6319
Author(s):  
Zixuan Bai ◽  
Guang Jiang ◽  
Ailing Xu

In this paper, we introduce a novel approach to estimating the extrinsic parameters between a LiDAR and a camera. Our method is based on line correspondences between the LiDAR point clouds and the camera images. We solve for the rotation matrix using 3D–2D infinity-point pairs extracted from parallel lines; the translation vector can then be solved based on the point-on-line constraint. Unlike other target-based methods, this approach can be performed without preparing specific calibration objects, because parallel lines are commonly present in the environment. We validate our algorithm on both simulated and real data. Error analysis shows that our method performs well in terms of robustness and accuracy.
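The rotation step can be sketched as follows: the direction d of a set of parallel lines in the LiDAR frame and the back-projected ray r of their vanishing (infinity) point in the camera frame satisfy r ∝ R·d. Given two non-parallel direction pairs, a TRIAD-style construction recovers R. This is an illustrative sketch, not the paper's estimator (which would use more pairs and a least-squares solution):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def triad(u, v):
    """Orthonormal basis (as rows) built from two non-parallel directions."""
    t1 = normalize(u)
    t3 = normalize(cross(u, v))
    t2 = cross(t3, t1)
    return [t1, t2, t3]

def rotation_from_pairs(d1, d2, r1, r2):
    """Rotation mapping LiDAR-frame directions d1, d2 onto camera rays r1, r2."""
    L = triad(d1, d2)            # basis in the LiDAR frame
    C = triad(r1, r2)            # corresponding basis in the camera frame
    # R = sum_k c_k * l_k^T maps each LiDAR basis vector onto its camera twin
    return [[sum(C[k][i] * L[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Example: a 90° rotation about the z-axis, recovered from two pairs.
R = rotation_from_pairs([1, 0, 0], [0, 1, 0], [0, 1, 0], [-1, 0, 0])
print([round(sum(R[i][k] * [1, 0, 0][k] for k in range(3)), 6) for i in range(3)])
# → [0.0, 1.0, 0.0]
```

With the rotation fixed, each point known to lie on a 3D line projects onto the corresponding 2D image line, giving linear constraints on the translation.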


Author(s):  
J. L. Wang

Abstract. Besides reliable and accurate image matching, obtaining accurate interior and exterior image orientations is the key to improving 3D measurement accuracy. A majority of the cameras used for these tasks are non-metric cameras, which commonly suffer from various distortions. Generally, there are two ways to remove these distortions: 1) conducting a prior camera calibration in a controlled environment; 2) applying a self-calibrating bundle adjustment in the application environment. Both approaches have their advantages and disadvantages, but they share one limitation: no universal calibration model is available so far that can remove all sorts of image distortions and systematic errors in the image orientations. Instead of developing additional calibration models for camera calibration and self-calibrating adjustment, this paper presents a novel approach that applies self-calibrating bundle adjustment iteratively: after a conventional self-calibrating bundle adjustment, the image coordinates of the tie points are re-calculated using the newly obtained self-calibration model coefficients, and the self-calibrating bundle adjustment is applied again, so that the remaining distortions and systematic errors are reduced further over the next few iterations. Thanks to a "virtual image" concept, this iterative approach does not require resampling the images or re-measuring the tie points between iterations; it costs only a few additional iterations of computation. Several trials under various application environments were conducted with the proposed iterative approach, and the results indicate that not only are the distortions reduced further, but the image orientations also become much more stable after a few iterations.
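The feedback idea — apply the estimated model to the tie-point coordinates, then estimate again on the corrected coordinates — has the structure of a fixed-point iteration. As a toy analogue (hypothetical values; the paper iterates a full self-calibrating bundle adjustment, not this model), here is the classic iterative inversion of a simple radial distortion u_d = u·(1 + k·u²), where each pass re-applies the correction to the refined coordinate:

```python
def iterative_undistort(u_d, k, n_iter=10):
    """Invert u_d = u * (1 + k * u^2) by fixed-point iteration."""
    u = u_d                            # initial guess: distorted coordinate
    for _ in range(n_iter):
        u = u_d / (1.0 + k * u * u)    # re-apply the model to the refined estimate
    return u

k_true = 0.1                           # hypothetical distortion coefficient
u_true = 0.8                           # hypothetical undistorted coordinate
u_d = u_true * (1.0 + k_true * u_true * u_true)   # simulate the distortion
u_rec = iterative_undistort(u_d, k_true)
print(round(u_rec, 6))  # → 0.8
```

As in the paper's scheme, each iteration is cheap because only coordinates are recomputed; the residual error contracts geometrically, so a few iterations suffice.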

