A Camera Calibration Method Based on Plane Grid Points

2011 ◽  
Vol 230-232 ◽  
pp. 723-727 ◽  
Author(s):  
Bao Feng Zhang ◽  
Xiu Zhen Tian ◽  
Xiao Ling Zhang

In order to simplify previous camera calibration methods, this paper puts forward an easy camera calibration method based on plane grid points, built on the foundation of Heikkila's plane-model calibration method. The intrinsic and extrinsic parameters of the camera are calibrated with MATLAB, and the rotation matrix and translation vector are then calculated. The experimental results show that this method is not only simple in practice but also meets the needs of computer vision systems.
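As an illustration (not the authors' MATLAB code), the step at the core of any plane-model calibration is estimating the homography between the grid plane and the image. A minimal numpy sketch of the standard DLT estimator, checked on synthetic grid points with hypothetical values:

```python
import numpy as np

def estimate_homography(world_pts, image_pts):
    """Direct Linear Transform: H such that image ~ H @ world (homogeneous)."""
    A = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        A.append([-X, -Y, -1, 0, 0, 0, u*X, u*Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v*X, v*Y, v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic check: project a planar grid with a known homography, recover it.
H_true = np.array([[800.0, 5.0, 320.0], [0.0, 780.0, 240.0], [0.0, 0.0, 1.0]])
grid = [(x, y) for x in range(4) for y in range(4)]
img = []
for X, Y in grid:
    p = H_true @ np.array([X, Y, 1.0])
    img.append((p[0] / p[2], p[1] / p[2]))
H_est = estimate_homography(grid, img)
```

In a full plane-model calibration, several such homographies (one per board pose) constrain the intrinsic matrix.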

2015 ◽  
Vol 719-720 ◽  
pp. 1184-1190
Author(s):  
Shuang Ran ◽  
Long Ye ◽  
Jing Ling Wang ◽  
Qin Zhang

Optimizing the camera's intrinsic and extrinsic parameters is a key step after obtaining their initial estimates from the homography between the calibration-board plane and the image plane, as in Zhengyou Zhang's method. In this paper, we propose a camera calibration optimization algorithm that combines a genetic algorithm with simulated annealing. The experimental results demonstrate that our algorithm improves the precision of camera calibration to a certain extent.
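The paper's hybrid combines a genetic algorithm with simulated annealing; as a much-reduced sketch of the idea, the loop below refines a single hypothetical focal-length parameter by annealing the reprojection error on synthetic data (all values invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pinhole setup: 3D points in front of the camera, observed with a
# known "true" focal length (hypothetical values).
f_true = 800.0
pts3d = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(50, 3))
obs = f_true * pts3d[:, :2] / pts3d[:, 2:3]   # noise-free projections

def reproj_error(f):
    proj = f * pts3d[:, :2] / pts3d[:, 2:3]
    return np.mean(np.sum((proj - obs) ** 2, axis=1))

# Bare-bones simulated annealing over one parameter, starting from a
# deliberately poor initial estimate.
f, T = 700.0, 50.0
best_f, best_e = f, reproj_error(f)
for _ in range(5000):
    cand = f + rng.normal(0.0, max(0.1, 0.1 * T))   # step shrinks with T
    de = reproj_error(cand) - reproj_error(f)
    if de < 0 or rng.random() < np.exp(-de / T):    # Metropolis acceptance
        f = cand
    if reproj_error(f) < best_e:
        best_f, best_e = f, reproj_error(f)
    T *= 0.998
```

A real calibration refinement would anneal the full intrinsic/extrinsic vector, not one scalar, and would seed the population for the genetic-algorithm stage from the homography-based initialization.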


2018 ◽  
Vol 10 (8) ◽  
pp. 1298 ◽  
Author(s):  
Lei Yin ◽  
Xiangjun Wang ◽  
Yubo Ni ◽  
Kai Zhou ◽  
Jilong Zhang

Multi-camera systems are widely used in the fields of airborne remote sensing and unmanned aerial vehicle imaging. The measurement precision of these systems depends on the accuracy of the extrinsic parameters. Therefore, it is important to accurately calibrate the extrinsic parameters between the onboard cameras. Unlike conventional multi-camera calibration methods with a common field of view (FOV), multi-camera calibration without overlapping FOVs presents certain difficulties. In this paper, we propose a calibration method for a multi-camera system without common FOVs, intended for aerial photogrammetry. First, the extrinsic parameters of any two cameras in the multi-camera system are calibrated, and the extrinsic matrix is optimized using the re-projection error. Then, the extrinsic parameters of each camera are unified to the system reference coordinate system by a global optimization method. A simulation experiment and a physical verification experiment were designed to validate the proposed algorithm. The experimental results show that this method is practical: the rotation error of the cameras' extrinsic parameters is less than 0.001 rad and the translation error is less than 0.08 mm.
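The unification step relies on the fact that extrinsics chain through a shared reference frame even when the cameras never see the same target at once. A small sketch with hypothetical world-to-camera transforms:

```python
import numpy as np

def make_T(R, t):
    """4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# World-to-camera transforms for two cameras with disjoint FOVs, both
# expressed against a shared reference frame (hypothetical values).
T_c1_w = make_T(rot_z(0.1), [0.5, 0.0, 1.0])
T_c2_w = make_T(rot_z(-0.2), [-0.3, 0.1, 1.2])

# Relative extrinsics camera1 -> camera2: chain through the shared frame.
T_c2_c1 = T_c2_w @ np.linalg.inv(T_c1_w)

# Consistency: mapping a world point through either chain agrees.
p_w = np.array([0.2, -0.1, 3.0, 1.0])
assert np.allclose(T_c2_c1 @ (T_c1_w @ p_w), T_c2_w @ p_w)
```

The paper's contribution lies in estimating and globally optimizing these pairwise transforms; the composition itself is the simple algebra above.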


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4643
Author(s):  
Sang Jun Lee ◽  
Jeawoo Lee ◽  
Wonju Lee ◽  
Cheolhun Jang

In intelligent vehicles, extrinsic camera calibration should be conducted on a regular basis to deal with unpredictable mechanical changes or variations in weight-load distribution. Specifically, high-precision extrinsic parameters between the camera coordinate system and the world coordinate system are essential to implement high-level functions in intelligent vehicles such as distance estimation and lane departure warning. However, conventional calibration methods, which solve a Perspective-n-Point (PnP) problem, require laborious measurement of the positions of 3D points in the world coordinate system. To reduce this inconvenience, this paper proposes an automatic camera calibration method based on 3D reconstruction. The main contribution of this paper is a novel reconstruction method to recover 3D points on planes perpendicular to the ground. The proposed method jointly optimizes reprojection errors of image features projected from multiple planar surfaces, and finally, it significantly reduces errors in camera extrinsic parameters. Experiments were conducted in synthetic simulation and real calibration environments to demonstrate the effectiveness of the proposed method.
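For context on the conventional baseline the paper improves upon, here is a linear (DLT) stand-in for a PnP solver in numpy, verified on synthetic data; a production pipeline would use a dedicated solver such as OpenCV's `solvePnP`:

```python
import numpy as np

rng = np.random.default_rng(1)

def pnp_dlt(K, pts3d, pts2d):
    """Recover camera pose [R|t] from 3D-2D matches by DLT on normalized
    image coordinates (a linear stand-in for a proper PnP solver)."""
    ones = np.ones((len(pts2d), 1))
    xn = (np.linalg.inv(K) @ np.hstack([pts2d, ones]).T).T
    A = []
    for (X, Y, Z), (u, v, _) in zip(pts3d, xn):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    P /= np.linalg.norm(P[2, :3])          # rows of a rotation have unit norm
    if np.linalg.det(P[:, :3]) < 0:        # fix the overall sign
        P = -P
    return P[:, :3], P[:, 3]

# Synthetic ground truth: a z-axis rotation and a small translation.
ca, sa = np.cos(0.3), np.sin(0.3)
R_true = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.5])
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

pts3d = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(10, 3))
cam = (R_true @ pts3d.T).T + t_true
uv = (K @ cam.T).T
pts2d = uv[:, :2] / uv[:, 2:3]

R_est, t_est = pnp_dlt(K, pts3d, pts2d)
```

The inconvenience the paper targets is precisely obtaining `pts3d`: measuring world coordinates by hand, which its reconstruction-based method avoids.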


2013 ◽  
Vol 694-697 ◽  
pp. 1896-1901
Author(s):  
Hong Zheng ◽  
Zhen Qiang Liu ◽  
Kai Zhang

Self-calibration of a stereo rig is essential to many computer vision applications. In this paper, a new self-calibration method is proposed for a binocular stereo rig undergoing a single motion with varying intrinsic and extrinsic parameters. First, we build a stereo rig model based on the basic platform to describe the transformation of the stereo rig during the motion. Second, the characteristics of the singular values of the essential matrix are used to estimate the intrinsic parameters of the cameras. Finally, by analyzing the transformation relations between different views, the relative position of the cameras and the motion of the stereo rig are estimated. Experimental results on both synthetic data and real images are provided to show the performance of the proposed method.
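The singular-value characteristic in question is the standard constraint that a valid essential matrix has two equal nonzero singular values and one zero; a short numpy demonstration with hypothetical motion values:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Any valid essential matrix E = [t]x R has singular values (s, s, 0).
# Self-calibration methods exploit this: E formed as K2^T F K1 from the
# fundamental matrix F only satisfies the constraint when the intrinsics
# K1, K2 are correct, which turns the constraint into an equation for them.
ca, sa = np.cos(0.4), np.sin(0.4)
R = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
t = np.array([0.2, -0.1, 1.0])
E = skew(t) @ R
s_vals = np.linalg.svd(E, compute_uv=False)
```

Here the two nonzero singular values both equal the norm of `t`, since rotation does not change singular values.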


2020 ◽  
Vol 10 (20) ◽  
pp. 7188
Author(s):  
Lode Jorissen ◽  
Ryutaro Oi ◽  
Koki Wakunami ◽  
Yasuyuki Ichihashi ◽  
Gauthier Lafruit ◽  
...  

Light field 3D displays require precise alignment between the display source and the micromirror-array screen for error-free 3D visualization. Hence, calibrating the system with an external camera becomes necessary before displaying any 3D content. The inter-dependency of the intrinsic and extrinsic parameters of the display source, calibration camera, and micromirror-array screen makes the calibration process complex and error-prone. Thus, several assumptions are usually made about the display setup in order to simplify the calibration. We earlier reported a fully automatic calibration method based on several such assumptions. Here, we report a method that makes no such assumptions yet yields a better calibration. The proposed method adopts an optical solution in which the micromirror-array screen is fabricated as a computer-generated hologram with a tiny diffuser engraved at one corner of each elemental micromirror in the array. The calibration algorithm uses these diffusing areas as markers to determine the relation between the pixels of the display source and the mirrors of the micromirror-array screen. Calibration results show that virtually reconstructed 3D scenes align well with real-world content and are free from distortion. This method also eliminates the position dependency of the display source, calibration camera, and mirror-array screen during calibration, which enables easy setup of the display system.


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6319
Author(s):  
Zixuan Bai ◽  
Guang Jiang ◽  
Ailing Xu

In this paper, we introduce a novel approach to estimating the extrinsic parameters between a LiDAR and a camera. Our method is based on line correspondences between the LiDAR point clouds and camera images. We solve for the rotation matrix using 3D-2D infinity-point pairs extracted from parallel lines. The translation vector is then solved using the point-on-line constraint. Unlike other target-based methods, this method can be performed without preparing specific calibration objects, because parallel lines are commonly present in the environment. We validate our algorithm on both simulated and real data. Error analysis shows that our method performs well in terms of robustness and accuracy.
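Once the parallel-line directions are matched between the two sensors, the rotation can be solved in closed form. A sketch of the orthogonal Procrustes (Kabsch) solution on hypothetical matched directions; in the paper's setting each camera-frame direction would come from a vanishing point v via normalize(inv(K) @ v), and each LiDAR-frame direction from a 3D line fitted in the point cloud:

```python
import numpy as np

def rotation_from_directions(d_src, d_dst):
    """Orthogonal Procrustes (Kabsch): the rotation R minimizing
    sum ||d_dst_i - R d_src_i||^2 over matched rows of direction vectors."""
    M = d_src.T @ d_dst                     # sum of outer products a_i b_i^T
    U, _, Vt = np.linalg.svd(M)
    d = np.linalg.det(Vt.T @ U.T)           # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Hypothetical matched unit directions in the LiDAR and camera frames.
ca, sa = np.cos(0.25), np.sin(0.25)
R_true = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
d_lidar = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.6, 0.0, 0.8]])
d_cam = (R_true @ d_lidar.T).T
R_est = rotation_from_directions(d_lidar, d_cam)
```

Two non-parallel direction pairs already determine the rotation; using more improves robustness to noise.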


2012 ◽  
Vol 182-183 ◽  
pp. 649-654
Author(s):  
Yong Yong Duan ◽  
Xiu Mei Zhang ◽  
Long Zhao

To improve the accuracy and efficiency of field camera calibration, an improved fast camera calibration method is proposed. A nonlinear translation algorithm based on a calibration field is adopted to solve for the intrinsic parameters, while an improved vanishing-points method is introduced to obtain the extrinsic parameters. A surveillance camera is used to validate the proposed approach. Experimental results show that the algorithm is convenient and feasible.
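The building block of any vanishing-points method is intersecting the images of parallel 3D lines, which in homogeneous coordinates is just two cross products. A tiny sketch with hypothetical line endpoints:

```python
import numpy as np

def line(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

# Hypothetical images of two parallel 3D lines; their intersection in the
# image is the vanishing point of that 3D direction.
l1 = line((0.0, 0.0), (4.0, 2.0))
l2 = line((0.0, 2.0), (4.0, 3.0))
v = np.cross(l1, l2)        # homogeneous intersection of the two lines
v = v / v[2]                # inhomogeneous vanishing point
print(v[:2])                # [8. 4.]
```

Given the intrinsics, each vanishing point back-projects to a 3D direction, and two or three such directions fix the camera's rotation relative to the scene.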


2015 ◽  
Vol 719-720 ◽  
pp. 1217-1222 ◽  
Author(s):  
Ying Zhu ◽  
Long Ye ◽  
Jing Ling Wang ◽  
Qin Zhang

In order to capture high-quality binocular stereo video, it is necessary to manipulate both the convergence and the interaxial distance to control the depth of objects within the 3D space. Scene understanding therefore becomes important, as it can increase the efficiency of parameter control. In this paper, a camera-calibration-based multi-object location method is introduced, motivated by the need to supply prior information for adjusting the convergence and interaxial distance during capture. First, we calibrate the two cameras to obtain the intrinsic and extrinsic parameters. Then, we select points of the object in the images taken by the left and right cameras respectively to determine its locations in the two images. With the three-dimensional coordinates of objects, the distance between an object and the camera baseline is calculated mathematically.
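The location step amounts to triangulating the selected point pair and measuring its distance to the baseline. A minimal numpy sketch with a hypothetical rectified stereo pair (identical intrinsics, baseline b along the x-axis):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two projection matrices."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical stereo pair: left camera at the origin, right camera offset
# by baseline b along x, shared intrinsics K.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
b = 0.1
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-b], [0.0], [0.0]])])

# Project a known 3D point into both views, then recover it.
X_true = np.array([0.5, 0.2, 5.0])
h = np.append(X_true, 1.0)
x1 = P1 @ h; x1 = x1[:2] / x1[2]
x2 = P2 @ h; x2 = x2[:2] / x2[2]

X = triangulate(P1, P2, x1, x2)
# Distance from the object to the camera baseline (here the world x-axis):
dist = float(np.linalg.norm(X[1:]))
```

With real cameras, `P1` and `P2` come from the calibrated intrinsic and extrinsic parameters rather than this idealized geometry.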


2014 ◽  
Vol 981 ◽  
pp. 348-351
Author(s):  
Xiao Yang Yu ◽  
Xiao Liang Meng ◽  
Hai Bin Wu ◽  
Xiao Ming Sun ◽  
Li Wang

In a coded-structured-light three-dimensional system, system calibration plays a vital role in measurement accuracy. Camera calibration methods are mature, but projector calibration has been studied far less. Therefore, this paper proposes a projector calibration method with a simple calibration process and high accuracy. The method combines Zhang's plane-model calibration method with orthogonal phase-shift coding. In the calibration process, phase-shift coding patterns are used to establish the relationship between projector image coordinates and camera corner-point coordinates. From the image coordinates in the projector's perspective, we compute the projector's intrinsic and extrinsic parameter matrices using Zhang's plane-model calibration toolbox. The results show that the proposed method is simple and flexible; the maximum relative error of the calibration parameters is 0.03%, which meets the requirements of system calibration in medical and industrial fields.
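The phase-shift decoding that links projector and camera coordinates follows the standard N-step formula: with fringe images I_n = A + B cos(phi + 2*pi*n/N), the wrapped phase is recovered per pixel from weighted sums of the frames. A small numpy sketch, checked on a synthetic single "pixel":

```python
import numpy as np

def wrapped_phase(frames):
    """Recover the wrapped phase from N phase-shifted fringe images,
    frames[n] = A + B*cos(phi + 2*pi*n/N), with N >= 3. Works per pixel
    when frames has shape (N, H, W)."""
    N = len(frames)
    d = 2.0 * np.pi * np.arange(N) / N
    num = np.tensordot(np.sin(d), frames, axes=1)
    den = np.tensordot(np.cos(d), frames, axes=1)
    # sum_n I_n * exp(i*d_n) = (B*N/2) * exp(-i*phi), hence the minus sign.
    return -np.arctan2(num, den)

# Synthetic check with a known phase (hypothetical intensity values).
phi_true = 1.234
frames = np.array([0.5 + 0.4 * np.cos(phi_true + 2.0 * np.pi * n / 4)
                   for n in range(4)])
phi = wrapped_phase(frames)
```

In the calibration itself, two orthogonal fringe directions yield a (u, v) projector coordinate for every camera corner point, after which Zhang's toolbox treats the projector as an inverse camera.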

