Extrinsic Calibration between Camera and LiDAR Sensors by Matching Multiple 3D Planes

Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 52 ◽  
Author(s):  
Eung-su Kim ◽  
Soon-Yong Park

This paper proposes a simple extrinsic calibration method for a multi-sensor system consisting of six image cameras and a 16-channel 3D LiDAR sensor, using a planar chessboard. The six cameras are mounted on a specially designed hexagonal plate to capture omnidirectional images, and the LiDAR sensor is mounted on top of the plate to capture 3D points over 360 degrees. Considering each camera–LiDAR combination as an independent multi-sensor unit, the rotation and translation between the two sensor coordinate systems are calibrated. The 2D chessboard corners in the camera image are reprojected into 3D space to fit a 3D plane with respect to the camera coordinate system. The corresponding 3D points that scan the chessboard are used to fit another 3D plane with respect to the LiDAR coordinate system. The rotation matrix is calculated by aligning the normal vectors of the corresponding planes. In addition, an arbitrary point on the 3D camera plane is projected onto the LiDAR plane, and the distance between the two points is iteratively minimized to estimate the translation vector. At least three planes are needed to find accurate extrinsic parameters between the coordinate systems. Finally, the estimated transformation is refined using the distances between all chessboard 3D points and the LiDAR plane. In the experiments, quantitative error analysis is performed using a simulation tool, and real test sequences are used to analyze calibration consistency.
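As a rough illustration of the plane-matching idea, the sketch below estimates the rotation by SVD alignment of corresponding plane normals and recovers the translation from the point-on-plane constraints. This is not the authors' code: the function names, and the assumption that each plane comes as a unit normal plus a point in the camera frame and a unit normal plus an offset in the LiDAR frame, are ours, and the translation is solved in closed form here rather than iteratively as in the paper.

```python
import numpy as np

def rotation_from_normals(cam_normals, lidar_normals):
    """SVD (Kabsch) alignment: find R such that lidar_n_i ~= R @ cam_n_i."""
    H = np.asarray(cam_normals).T @ np.asarray(lidar_normals)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def translation_from_planes(R, cam_points, lidar_normals, lidar_offsets):
    """Each LiDAR plane satisfies n_i . x = d_i. A camera-plane point p_i
    transformed by (R, t) must lie on it: n_i . (R @ p_i + t) = d_i,
    which is linear in t."""
    N = np.asarray(lidar_normals)                # (k, 3) unit normals
    p_rot = (R @ np.asarray(cam_points).T).T     # (k, 3) rotated camera points
    b = np.asarray(lidar_offsets) - np.sum(N * p_rot, axis=1)
    t, *_ = np.linalg.lstsq(N, b, rcond=None)
    return t
```

With at least three planes whose normals span 3D space, the linear system for t has full rank, which mirrors the paper's requirement of three or more planes.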

2021 ◽  
Vol 11 (3) ◽  
pp. 1287 ◽
Author(s):  
Tianyan Chen ◽  
Jinsong Lin ◽  
Deyu Wu ◽  
Haibin Wu

Industrial robots typically exhibit high repeatability but comparatively low absolute positioning accuracy (APA); a calibration method to enhance the APA of industrial robots is therefore proposed. Because the robot base coordinate system (RBCS) and the flange coordinate system (FCS) are "hidden" during measurement, a fairly general method for measuring and calibrating both is proposed, and the sources of the robot end position error are classified into three parts: positioning error of the RBCS, kinematic parameter error of the manipulator, and positioning error of the FCS. A robot position error model is established, and an equation relating the end position error to the model parameter errors is derived. Solving this equation identifies the parameter errors, and the errors are then compensated through corrections to the robot joint angles. A Leica laser tracker is used to verify the calibration method on an ABB IRB120 industrial robot. The experimental results show that the calibration method can effectively enhance the APA of the robot.
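The identification step can be conveyed with a generic linearized least-squares sketch. This is our illustration, not the paper's derived relation equation: fk, params0, and the finite-difference Jacobian are assumptions standing in for the robot's forward kinematics and error model.

```python
import numpy as np

def identify_parameter_errors(fk, params0, joint_samples, measured_pts, eps=1e-6):
    """Linearize p_meas - fk(params, q) ~= J(q) @ dphi at the nominal
    parameters, stack all measured poses, and solve for the parameter
    error vector dphi by least squares.
    fk(params, q) must return the modeled 3D end-effector position."""
    params0 = np.asarray(params0, dtype=float)
    blocks, resid = [], []
    for q, p_meas in zip(joint_samples, measured_pts):
        p_nom = fk(params0, q)
        J = np.empty((3, params0.size))
        for j in range(params0.size):        # finite-difference Jacobian
            dp = params0.copy()
            dp[j] += eps
            J[:, j] = (fk(dp, q) - p_nom) / eps
        blocks.append(J)
        resid.append(np.asarray(p_meas) - p_nom)
    dphi, *_ = np.linalg.lstsq(np.vstack(blocks), np.concatenate(resid), rcond=None)
    return dphi   # add to params0 and re-iterate until the update vanishes
```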


2014 ◽  
Vol 568-570 ◽  
pp. 320-325 ◽  
Author(s):  
Feng Shan Huang ◽  
Li Chen

A new CCD camera calibration method based on the translation of a Coordinate Measuring Machine (CMM) is proposed. The CMM translates the CCD camera relative to the center of a white ceramic standard sphere along the X, Y, and Z axes, generating the coordinates of the calibration feature point at different positions in the probe coordinate system. Meanwhile, the camera captures an image of the sphere at every position, so the coordinates of the feature point in the computer frame coordinate system can be registered. The calibration model is established, the calibration steps are given, and the calibration system is set up. A comparison shows that the precision of this method is equivalent to that of the dedicated calibration method, and the difference between the calibration data of the two methods is within ±1 μm.
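Since the procedure reduces to pairing known 3D positions of the feature point (generated by the CMM translations) with their image coordinates, a standard direct linear transform (DLT) conveys the flavor of the estimation. This is a generic sketch under that assumption, not the paper's calibration model:

```python
import numpy as np

def dlt_projection_matrix(probe_pts, image_pts):
    """Direct linear transform: estimate the 3x4 projection matrix P from
    n >= 6 correspondences between 3D points (X, Y, Z) in the probe frame
    and their pixel coordinates (u, v), via the null space of the design
    matrix built from the projection equations."""
    A = []
    for (X, Y, Z), (u, v) in zip(probe_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)   # right singular vector of smallest singular value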


1999 ◽  
Author(s):  
Chunhe Gong ◽  
Jingxia Yuan ◽  
Jun Ni

Robot calibration plays an increasingly important role in manufacturing. For robot calibration on the manufacturing floor, it is desirable that the calibration technique be easy and convenient to implement. This paper presents a new self-calibration method to calibrate and compensate for robot system kinematic errors. Compared with traditional calibration methods, this method has several unique features. First, no external measurement system is needed to measure the robot end-effector position for kinematic identification, since the robot measurement system has a sensor as an integral part. Second, the self-calibration is based on distance measurement rather than absolute position measurement for kinematic identification; therefore, calibration of the transformation from the world coordinate system to the robot base coordinate system, known as base calibration, is unnecessary. These features not only greatly facilitate robot system calibration but also shorten the error propagation chain, thereby increasing the accuracy of parameter estimation. An integrated calibration system is designed to validate the effectiveness of the method. Experimental results show that calibration yields a significant improvement in robot accuracy over a typical robot workspace.
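A hedged sketch of the distance-based idea follows; the names, the fk stand-in for the forward kinematics, and the dictionary of measured pose-pair distances are our assumptions, not the paper's formulation. Because only inter-pose distances enter the residual, the world-to-base transformation cancels out, which is why base calibration can be skipped.

```python
import numpy as np

def distance_residuals(fk, params, joint_samples, measured_dists):
    """Distance-based residuals: the kinematic parameters are fit so that
    modeled distances between end-effector positions at pose pairs match
    measured distances. measured_dists maps index pairs (i, j) to the
    measured distance between poses i and j. Minimize the residual vector
    over params with e.g. scipy.optimize.least_squares."""
    pts = [np.asarray(fk(params, q)) for q in joint_samples]
    return np.array([np.linalg.norm(pts[i] - pts[j]) - d
                     for (i, j), d in measured_dists.items()])
```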


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 48840-48849 ◽  
Author(s):  
Mun-Cheon Kang ◽  
Cheol-Hwan Yoo ◽  
Kwang-Hyun Uhm ◽  
Dae-Hong Lee ◽  
Sung-Jea Ko

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3706 ◽  
Author(s):  
Joong-Jae Lee ◽  
Mun-Ho Jeong

This paper presents a stereo camera-based head-eye calibration method that aims to find the globally optimal transformation between a robot’s head and its eye. This method is highly intuitive and simple, so it can be used in a vision system for humanoid robots without any complex procedures. To achieve this, we introduce an extended minimum variance approach for head-eye calibration using surface normal vectors instead of 3D point sets. The presented method considers both positional and orientational error variances between visual measurements and kinematic data in head-eye calibration. Experiments using both synthetic and real data show the accuracy and efficiency of the proposed method.
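A simplified, weighted normal-alignment step conveys the core idea; this stands in for, and is not, the authors' extended minimum-variance estimator, and the scalar weights are our assumption emulating inverse error variances:

```python
import numpy as np

def weighted_rotation_from_normals(kin_normals, vis_normals, weights):
    """Weighted SVD alignment: find R such that vis_n_i ~= R @ kin_n_i,
    where kin_normals are predicted from head kinematics and vis_normals
    are measured by the stereo camera. Larger weights let more reliable
    normal measurements dominate the estimate."""
    W = np.asarray(weights, dtype=float)
    H = (np.asarray(kin_normals) * W[:, None]).T @ np.asarray(vis_normals)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # keep a proper rotation
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```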

