A Decoupled Calibration Method for Camera Intrinsic Parameters and Distortion Coefficients

2016
Vol 2016
pp. 1-12
Author(s):
Kun Yan
Hong Tian
Enhai Liu
Rujin Zhao
Yuzhen Hong
...  

Camera calibration is a necessary process in the field of vision measurement. In this paper, we propose a flexible and high-accuracy method to calibrate a camera. Firstly, we compute the center of radial distortion, which is important for obtaining optimal results. Then, based on the division model of radial distortion, the camera intrinsic parameters and distortion coefficients are solved linearly and independently. Finally, the intrinsic parameters of the camera are optimized via the Levenberg-Marquardt algorithm. In the proposed method, the distortion coefficients and intrinsic parameters are successfully decoupled, and calibration accuracy is further improved through the subsequent optimization process. Moreover, the method yields good results whether the image distortion is relatively small or large. Both simulation and real-data experiments demonstrate the robustness and accuracy of the proposed method. Experimental results show that the proposed method achieves higher accuracy than the classical methods.
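For readers unfamiliar with the division model mentioned above, the following minimal Python sketch shows how undistortion about an estimated distortion center works under that model; the distortion center, coefficient, and sample points are illustrative assumptions, not values from the paper.

    # Minimal sketch of single-parameter division-model undistortion about an
    # estimated distortion center (illustrative values, not the paper's results).
    import numpy as np

    def undistort_division(points_d, center, lam):
        """Map distorted pixels to undistorted ones:
        p_u = c + (p_d - c) / (1 + lam * r_d^2), with r_d measured from c."""
        pts = np.asarray(points_d, dtype=float)
        c = np.asarray(center, dtype=float)
        d = pts - c                          # offsets from the distortion center
        r2 = np.sum(d * d, axis=1)           # squared radial distances
        return c + d / (1.0 + lam * r2)[:, None]

    # Hypothetical distortion center and coefficient for demonstration only.
    print(undistort_division([(100.0, 80.0), (500.0, 400.0)],
                             center=(320.0, 240.0), lam=-1.2e-7))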

Author(s):  
Mingchi Feng
Xiang Jia
Jingshu Wang
Song Feng
Taixiong Zheng

Multi-camera systems are widely applied in 3D computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration method of a multi-camera system is critical to the accuracy of vision measurement, and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on a transparent glass checkerboard and ray tracing is described, which is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. Firstly, the intrinsic parameters of each camera are obtained by Zhang's calibration method. Then, the cameras capture several images of the front and back of the glass checkerboard with different orientations, and all images contain distinct grid corners. Since the cameras on one side are not affected by refraction in the glass checkerboard, their extrinsic parameters can be calculated directly. However, the cameras on the other side are influenced by refraction in the glass checkerboard, and directly applying the projection model produces calibration error. A multi-camera calibration method using a refractive projection model and ray tracing is developed to eliminate this error. Furthermore, both synthetic and real data are employed to validate the proposed approach. The experimental results of refractive calibration show that the 3D reconstruction error is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the four-camera system are 0.00007 and 0.4543 pixels. The proposed method is flexible, highly accurate, and simple to carry out.
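The refractive projection model above rests on Snell's law applied at the two faces of the glass plate. As a hedged illustration (not the authors' implementation), the sketch below refracts a camera ray into and out of a flat glass plate; the normal, refractive indices, and ray direction are assumed example values.

    # Snell-law refraction of a ray at a planar interface, the building block of
    # a refractive projection model (illustrative values only).
    import numpy as np

    def refract(d, n, n1, n2):
        """Refract unit direction d at a surface with unit normal n, passing from
        refractive index n1 into n2; returns None on total internal reflection."""
        d = d / np.linalg.norm(d)
        n = n / np.linalg.norm(n)
        cos_i = -np.dot(n, d)
        if cos_i < 0:                       # make the normal face the incoming ray
            n, cos_i = -n, -cos_i
        eta = n1 / n2
        k = 1.0 - eta**2 * (1.0 - cos_i**2)
        if k < 0:
            return None
        return eta * d + (eta * cos_i - np.sqrt(k)) * n

    ray = np.array([0.2, 0.0, 1.0])         # camera ray (assumed)
    normal = np.array([0.0, 0.0, -1.0])     # glass plate normal (assumed)
    inside = refract(ray, normal, 1.0, 1.5)     # air -> glass
    out = refract(inside, normal, 1.5, 1.0)     # glass -> air
    print(inside, out)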


Visual calibration is an important research direction in the field of robot vision control and is also one of the current research hotspots. In this paper, the principle of software calibration is described in detail, and a software calibration method based on Halcon optimization is studied and designed. Using the operators in the function library, the internal and external parameters of the camera are calibrated. The influence of the center of the robot's end effector and the radial distortion of the camera lens is fully considered. The method is used to establish the relationship between the image coordinate system and the robot world coordinate system. Experiments show that the method has high calibration accuracy and practicability, and is suitable for calibrating industrial robot vision systems.
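The paper works with HALCON operators; as a library-agnostic sketch of the same idea (an assumption, not the authors' code), the following OpenCV/Python snippet calibrates intrinsic and distortion parameters from chessboard images and then maps a pixel onto the calibration plane, from which a fixed board-to-robot transform would give robot world coordinates. Image file names and board geometry are placeholders.

    # Hedged OpenCV sketch: intrinsic/distortion calibration plus pixel-to-plane
    # mapping. File names, board size, and square size are assumed placeholders.
    import glob
    import cv2
    import numpy as np

    board = (9, 6)                                     # inner corners per row/col
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * 25.0  # 25 mm squares

    obj_pts, img_pts, size = [], [], None
    for path in glob.glob("calib_*.png"):              # placeholder image names
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)

    def pixel_to_board_plane(u, v, K, dist, rvec, tvec):
        """Back-project an image point onto the board plane z = 0 (board frame)."""
        p = cv2.undistortPoints(np.array([[[u, v]]], np.float32), K, dist)[0, 0]
        R, _ = cv2.Rodrigues(rvec)
        cam_center = -R.T @ tvec.ravel()               # camera center in board frame
        direction = R.T @ np.array([p[0], p[1], 1.0])  # viewing ray in board frame
        s = -cam_center[2] / direction[2]
        return cam_center + s * direction

    print(pixel_to_board_plane(400.0, 300.0, K, dist, rvecs[0], tvecs[0]))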


2020
Vol 14 (2)
pp. 234-241
Author(s):
Bin Liu
Qian Qiao
Fangfang Han

Background: The 3D laser scanner is a non-contact active-sensing system with a number of applications, and many patents have been filed on technologies for calibrating 3D laser scanners. A precise calibration method is important for the measurement accuracy of the 3D laser scanner. The system model contains three categories of parameters to be calibrated: the camera intrinsic parameters, the distortion coefficients, and the light plane parameters. Typically, the calibration process is completed in two steps. First, the camera intrinsic parameters and distortion coefficients are calibrated based on Zhang's method. Then, 3D feature points on the light plane must be precisely formed and extracted, and these points are used to calculate the light plane parameters. Methods: In this paper, a rapid calibration method is presented. Without any high-precision auxiliary device, only one coplanar reference target is used. Using a group of captured images of the coplanar reference target placed arbitrarily in the field of view, calibration can be performed in one step. Based on the constraints from the planes formed by the target in different orientations and the camera imaging model, a large number of 3D points on the light plane can easily be obtained. The light plane equation in the camera coordinate system is then obtained by fitting a plane to these 3D points. Results: In the experiments, the developed 3D laser scanner was calibrated with the proposed method. The measurement accuracy of the system was then verified against a known 1 mm displacement in the vertical direction, generated by sequential shifting of a precision translation stage. The average measured distance was 1.010 mm and the standard deviation was 0.008 mm. Conclusion: The experimental results show that the proposed calibration method is simple and reliable.
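The final step described above, recovering the light plane equation from the extracted 3D points, amounts to a least-squares plane fit. A minimal sketch on synthetic data (not the system's actual points) is:

    # Least-squares plane fit n.x + d = 0 via SVD; the sample points are synthetic.
    import numpy as np

    def fit_plane(points):
        """Return (unit normal n, offset d) of the best-fit plane n.x + d = 0."""
        pts = np.asarray(points, dtype=float)
        centroid = pts.mean(axis=0)
        # Normal = direction of least variance of the centered points.
        _, _, vt = np.linalg.svd(pts - centroid)
        n = vt[-1]
        return n, -np.dot(n, centroid)

    rng = np.random.default_rng(0)
    x, y = rng.uniform(-50, 50, (2, 200))
    z = 0.5 * x + 10 + rng.normal(0, 0.05, 200)        # points near z = 0.5x + 10
    n, d = fit_plane(np.column_stack([x, y, z]))
    print(n, d)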


2013
Vol 748
pp. 704-707
Author(s):
Lin Li
Li Xu

An easy method for calibrating the camera intrinsic parameters and the first-order radial distortion parameter is studied, based on the mathematical model of the camera derived from projective theory. The method does not require translating or rotating the camera; it only uses several specially chosen points to derive the equations and obtain the parameters. The results show that this method can quickly and conveniently calibrate the camera intrinsic parameters and the first-order radial distortion parameter with good stability and precision, and it can be used in a servo mechanical arm system.
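For reference, the one-parameter (first-order) radial distortion model estimated by such methods can be written as x_d = x_u (1 + k1 r^2) in normalized image coordinates; a tiny sketch with an assumed coefficient:

    # First-order radial distortion applied to normalized image coordinates;
    # the coefficient k1 is an assumed example value.
    import numpy as np

    def distort_first_order(xy_norm, k1):
        xy = np.asarray(xy_norm, dtype=float)
        r2 = np.sum(xy * xy, axis=1)
        return xy * (1.0 + k1 * r2)[:, None]

    print(distort_first_order([[0.1, 0.2], [0.3, -0.1]], k1=-0.08))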


1997
Vol 36 (5)
pp. 61-68
Author(s):
Hermann Eberl
Amar Khelil
Peter Wilderer

A numerical method for the identification of parameters of nonlinear higher-order differential equations is presented, based on the Levenberg-Marquardt algorithm. The parameters can be estimated using several reference data sets simultaneously. This leads to a multicriteria optimization problem, which is treated using the Pareto optimality concept. In this paper, the emphasis is on the presentation of the calibration method. As an example, the identification of the parameters of a nonlinear hydrological transport model for urban runoff is included, but the method can be applied to other problems as well.
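As a hedged sketch of this kind of Levenberg-Marquardt identification against several reference data sets (a toy exponential model and fixed weights stand in for the hydrological transport model and the Pareto treatment, which are not reproduced here):

    # Levenberg-Marquardt fit of one parameter vector to two reference data sets,
    # scalarized with fixed weights; model and data are synthetic placeholders.
    import numpy as np
    from scipy.optimize import least_squares

    def model(t, p):
        a, k = p
        return a * (1.0 - np.exp(-k * t))

    def stacked_residuals(p, datasets, weights):
        return np.concatenate([w * (y - model(t, p))
                               for (t, y), w in zip(datasets, weights)])

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 50)
    datasets = [(t, model(t, (2.0, 0.7)) + rng.normal(0, 0.02, t.size)),
                (t, model(t, (2.0, 0.7)) + rng.normal(0, 0.05, t.size))]

    fit = least_squares(stacked_residuals, x0=[1.0, 1.0],
                        args=(datasets, [1.0, 0.5]), method="lm")
    print(fit.x)   # should be close to the generating parameters (2.0, 0.7)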


2021
Vol 11 (2)
pp. 582
Author(s):
Zean Bu
Changku Sun
Peng Wang
Hang Dong

Calibration between multiple sensors is a fundamental procedure for data fusion. To address the problems of large errors and tedious operation, we present a novel method for calibrating a light detection and ranging (LiDAR) sensor and a camera. We design a calibration target: an arbitrary triangular pyramid with three chessboard patterns on its three faces. The target contains both 3D and 2D information, which can be used to obtain the intrinsic parameters of the camera and the extrinsic parameters of the system. In the proposed method, the world coordinate system is established through the triangular pyramid. We extract the equations of the triangular pyramid planes to find the relative transformation between the two sensors. A single capture from the camera and the LiDAR is sufficient for calibration, and errors are reduced by minimizing the distance between points and planes. Furthermore, the accuracy can be increased with more captures. We carried out experiments on simulated data with varying degrees of noise and numbers of frames. Finally, the calibration results were verified on real data through incremental validation and analysis of the root mean square error (RMSE), demonstrating that our calibration method is robust and provides state-of-the-art performance.
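The point-to-plane minimization mentioned above can be sketched as follows (a hedged illustration with synthetic planes and points, not the authors' data or exact formulation): given the three pyramid plane equations in the camera frame and LiDAR points lying on those planes, the LiDAR-to-camera rotation and translation are refined by least squares.

    # Point-to-plane refinement of a LiDAR-to-camera pose; planes and points are
    # synthetic placeholders.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def residuals(x, planes, point_sets):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        res = []
        for (n, d), pts in zip(planes, point_sets):
            res.append(pts @ R.T @ n + n @ t + d)   # signed point-to-plane distances
        return np.concatenate(res)

    rng = np.random.default_rng(2)
    R_true = Rotation.from_rotvec([0.05, -0.1, 0.2]).as_matrix()
    t_true = np.array([0.3, -0.1, 0.5])
    planes = [(np.array([1.0, 0.0, 0.0]), -1.0),
              (np.array([0.0, 1.0, 0.0]), -1.0),
              (np.array([0.0, 0.0, 1.0]), -2.0)]
    point_sets = []
    for n, d in planes:
        pts_cam = rng.uniform(-1.0, 1.0, (50, 3))
        pts_cam -= np.outer(pts_cam @ n + d, n)        # project onto the plane
        point_sets.append((pts_cam - t_true) @ R_true) # express in the LiDAR frame

    sol = least_squares(residuals, np.zeros(6), args=(planes, point_sets))
    print(sol.x)   # should approach the true rotation vector and translation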


Sensors
2018
Vol 18 (11)
pp. 3949
Author(s):
Wei Li
Mingli Dong
Naiguang Lu
Xiaoping Lou
Peng Sun

An extended robot–world and hand–eye calibration method is proposed in this paper to evaluate the transformation relationship between the camera and the robot device. This approach can be applied in mobile or medical robotics applications, where precise, expensive, or unsterile calibration objects, or sufficient movement space, cannot be made available at the work site. Firstly, a mathematical model is established to formulate the robot-gripper-to-camera rigid transformation and the robot-base-to-world rigid transformation using the Kronecker product. Subsequently, a sparse bundle adjustment is introduced to optimize the robot–world and hand–eye calibration, as well as the reconstruction results. Finally, a validation experiment with two kinds of real data sets is designed to demonstrate the effectiveness and accuracy of the proposed approach. The relative translation error of the rigid transformation is less than 8/10,000 with a Denso robot over a movement range of 1.3 m × 1.3 m × 1.2 m, and the mean distance-measurement error after three-dimensional reconstruction is 0.13 mm.
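The Kronecker-product formulation referenced above linearizes the rotation part of the robot-world/hand-eye problem A_i X = Y B_i via vec(R_A R_X) = (I ⊗ R_A) vec(R_X) and vec(R_Y R_B) = (R_B^T ⊗ I) vec(R_Y). The sketch below is a hedged reconstruction with synthetic poses, without the paper's sparse bundle adjustment: rotations come from the stacked homogeneous system, translations from a subsequent linear least-squares step.

    # Linear robot-world / hand-eye solution of A_i X = Y B_i via the Kronecker
    # product (rotations from an SVD null vector, translations by least squares).
    # Poses are synthetic; no bundle-adjustment refinement is included.
    import numpy as np
    from scipy.spatial.transform import Rotation

    def solve_axyb(As, Bs):
        I3 = np.eye(3)
        rows = [np.hstack([np.kron(I3, A[:3, :3]), -np.kron(B[:3, :3].T, I3)])
                for A, B in zip(As, Bs)]
        _, _, vt = np.linalg.svd(np.vstack(rows))
        v = vt[-1]                                   # column-major vec(RX), vec(RY)
        RX = v[:9].reshape(3, 3, order="F")
        RY = v[9:].reshape(3, 3, order="F")
        s = np.sign(np.linalg.det(RX))               # fix the common sign
        def to_rotation(M):                          # project onto SO(3)
            U, _, Vt = np.linalg.svd(M)
            return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
        RX, RY = to_rotation(s * RX), to_rotation(s * RY)
        # Translations: R_A t_X - t_Y = R_Y t_B - t_A for every measurement pair.
        C = np.vstack([np.hstack([A[:3, :3], -I3]) for A in As])
        d = np.concatenate([RY @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
        t = np.linalg.lstsq(C, d, rcond=None)[0]
        return RX, t[:3], RY, t[3:]

    rng = np.random.default_rng(3)
    def random_pose():
        T = np.eye(4)
        T[:3, :3] = Rotation.from_rotvec(rng.uniform(-1.0, 1.0, 3)).as_matrix()
        T[:3, 3] = rng.uniform(-1.0, 1.0, 3)
        return T

    X, Y = random_pose(), random_pose()
    As = [random_pose() for _ in range(10)]
    Bs = [np.linalg.inv(Y) @ A @ X for A in As]
    RX, tX, RY, tY = solve_axyb(As, Bs)
    print(np.allclose(RX, X[:3, :3]), np.allclose(tX, X[:3, 3]))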


2005
Author(s):
Zhijing Yu
Jiwei Ma
Yuquan Ma
Rensheng Che
Zhihong Li
...  
