Self-calibration method of gyroscope and camera in video stabilization

Author(s):  
Zhengwei Ren ◽  
Chunyi Chen ◽  
Ming Fang

Measurement ◽  
2021 ◽  
Vol 174 ◽  
pp. 109067

Author(s):  
Zhi-Feng Lou ◽  
Li Liu ◽  
Ji-Yun Zhang ◽  
Kuang-chao Fan ◽  
Xiao-Dong Wang

Sensors ◽  
2013 ◽  
Vol 13 (12) ◽  
pp. 16565-16582 ◽  
Author(s):  
Shibin Yin ◽  
Yongjie Ren ◽  
Jigui Zhu ◽  
Shourui Yang ◽  
Shenghua Ye

2021 ◽  
Author(s):  
Wenrun Xiao ◽  
Weikang Wu ◽  
Yinghui Chang ◽  
Jidong Diao ◽  
Yanping Qiao ◽  
...  

1999 ◽  
Author(s):  
Chunhe Gong ◽  
Jingxia Yuan ◽  
Jun Ni

Abstract Robot calibration plays an increasingly important role in manufacturing. For robot calibration on the manufacturing floor, the calibration technique should be easy and convenient to implement. This paper presents a new self-calibration method to calibrate and compensate for robot system kinematic errors. Compared with traditional calibration methods, this method has several unique features. First, no external measurement system is needed to measure the robot end-effector position for kinematic identification, since the robot measurement system has a sensor as its integral part. Second, the self-calibration is based on distance measurement rather than absolute position measurement for kinematic identification; therefore, calibration of the transformation from the world coordinate system to the robot base coordinate system, known as base calibration, is unnecessary. These features not only greatly simplify robot system calibration but also shorten the error propagation chain, thereby increasing the accuracy of parameter estimation. An integrated calibration system was designed to validate the effectiveness of this calibration method. Experimental results show a significant improvement in robot accuracy over a typical robot workspace after calibration.
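The key idea above, identifying kinematic parameters from distances between end-effector poses rather than absolute positions, can be illustrated with a minimal sketch. The toy model below (a hypothetical planar 2-link arm, not the paper's robot) recovers link-length errors from pairwise distance measurements alone, so no base calibration is needed:

```python
import numpy as np

def fk(q, lengths):
    """Forward kinematics of a planar 2-link arm (illustrative stand-in)."""
    l1, l2 = lengths
    t1, t2 = q
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

rng = np.random.default_rng(0)
true_l = np.array([0.50, 0.40])      # actual link lengths
nominal_l = np.array([0.52, 0.38])   # nominal model with kinematic errors

# Joint configurations visited during calibration.
qs = rng.uniform(-np.pi, np.pi, size=(12, 2))
pts = np.array([fk(q, true_l) for q in qs])

# "Measured" distances between pairs of end-effector positions --
# distances are invariant to the base frame, so no base calibration.
pairs = [(i, j) for i in range(len(qs)) for j in range(i + 1, len(qs))]
d_meas = np.array([np.linalg.norm(pts[i] - pts[j]) for i, j in pairs])

def residuals(l):
    p = np.array([fk(q, l) for q in qs])
    return np.array([np.linalg.norm(p[i] - p[j]) for i, j in pairs]) - d_meas

# Gauss-Newton iteration with a finite-difference Jacobian.
l = nominal_l.copy()
for _ in range(20):
    r = residuals(l)
    J = np.empty((len(r), 2))
    eps = 1e-7
    for k in range(2):
        dl = l.copy()
        dl[k] += eps
        J[:, k] = (residuals(dl) - r) / eps
    step = np.linalg.lstsq(J, -r, rcond=None)[0]
    l = l + step
    if np.linalg.norm(step) < 1e-10:
        break

print(np.round(l, 4))  # recovers the true link lengths
```

Because the residuals are built purely from inter-pose distances, the estimate is unaffected by any rigid transformation between the world and robot base frames, which is exactly why base calibration drops out of the identification.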


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Guotao Xie ◽  
Jing Zhang ◽  
Junfeng Tang ◽  
Hongfei Zhao ◽  
Ning Sun ◽  
...  

Purpose
For the industrial application of intelligent and connected vehicles (ICVs), the robustness and accuracy of environmental perception are critical in challenging conditions. However, perception accuracy depends closely on the performance of the sensors mounted on the vehicle. To further enhance sensor performance and thereby improve the accuracy of environmental perception, this paper introduces an obstacle detection method based on the depth fusion of lidar and radar in challenging conditions, which reduces the false rate resulting from sensor misdetection.

Design/methodology/approach
First, a multi-layer self-calibration method is proposed based on the spatial and temporal relationships between the sensors. Next, a depth fusion model is proposed to improve the performance of obstacle detection in challenging conditions. Finally, tests are carried out in challenging conditions, including a straight unstructured road, an unstructured road with a rough surface and an unstructured road with heavy dust or mist.

Findings
The experimental tests in challenging conditions demonstrate that, compared with a single sensor, the depth fusion model can filter out radar false alarms and the dust or mist point clouds received by the lidar, so the accuracy of object detection is also improved under challenging conditions.

Originality/value
The multi-layer self-calibration method improves calibration accuracy and reduces the workload of manual calibration. The depth fusion model of lidar and radar achieves high precision by filtering out radar false alarms and the dust or mist point clouds received by the lidar, which improves ICVs' performance in challenging conditions.
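The mutual-filtering idea described above, where radar false alarms lack lidar support and dust or mist returns lack density, can be sketched with a simple cross-sensor gating rule. This is an illustrative toy, not the paper's depth fusion model; the gate radius and density threshold are assumed values:

```python
import numpy as np

def fuse(radar_xy, lidar_pts, gate=1.0, min_cluster=5):
    """Keep a radar detection only if a sufficiently dense group of lidar
    points lies within `gate` metres of it. Radar false alarms have no
    supporting lidar points; dust/mist returns in the lidar cloud are
    sparse and fail the density check. (Illustrative gating rule only.)"""
    confirmed = []
    for det in radar_xy:
        dists = np.linalg.norm(lidar_pts - det, axis=1)
        if np.count_nonzero(dists < gate) >= min_cluster:
            confirmed.append(det)
    return np.array(confirmed)

rng = np.random.default_rng(1)
obstacle = np.array([10.0, 2.0])
lidar = np.vstack([
    obstacle + 0.1 * rng.standard_normal((30, 2)),  # dense cluster: real object
    rng.uniform(0, 30, size=(15, 2)),               # sparse dust/mist returns
])
radar = np.array([obstacle + [0.2, -0.1],           # true radar detection
                  [22.0, -4.0]])                    # radar false alarm
print(fuse(radar, lidar))  # only the true detection survives
```

The false alarm is rejected because no lidar cluster supports it, while isolated dust points never reach the density threshold; only detections corroborated by both sensors pass, which is the intuition behind the reduced false rate reported above.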

