extrinsic parameters
Recently Published Documents

TOTAL DOCUMENTS: 168 (FIVE YEARS: 51)
H-INDEX: 12 (FIVE YEARS: 4)

2021, Vol. 130 (24), pp. 243102
Author(s): Anas Mujahid, Muhammad Imran, Huanrong Fan, Taoli Yuan, Hasnain Ali, ...

2021, Vol. 47 (4), pp. 162-169
Author(s): Mohammed Aldelgawy, Isam Abu-Qasmieh

This paper aims to calibrate a smartphone's rear dual-camera system, which is composed of two lenses: a wide-angle lens and a telephoto lens. The proposed approach handles large-sized images. Calibration was performed by capturing 13 photos of a chessboard pattern from different exposure positions. First, the photos were captured in dual-camera mode. Then, for both the wide-angle and telephoto lenses, the image coordinates of the chessboard node points were extracted. Afterwards, the intrinsic, extrinsic, and lens distortion parameters of each lens were calculated. To enhance the accuracy of the calibration model, a constrained least-squares solution was applied, with the constraint that the relative extrinsic parameters between the wide-angle and telephoto lenses remain constant regardless of the exposure position. Moreover, the photos were rectified to eliminate the effect of lens distortion. For evaluation, two oriented photos were chosen to perform a stereo-pair intersection, and the node points of the chessboard pattern were used as check points.
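For readers unfamiliar with the workflow, the sketch below shows how a comparable dual-camera chessboard calibration could be set up with OpenCV. It is not the authors' constrained least-squares implementation: the pattern size, square size, and file names are assumptions, and the fixed wide-angle/telephoto relative pose is approximated here with cv2.stereoCalibrate.

```python
# Hypothetical sketch of a dual-camera chessboard calibration with OpenCV.
# Pattern layout, square size, and file-name patterns are assumed values.
import cv2
import numpy as np
import glob

PATTERN = (9, 6)          # assumed inner-corner layout of the chessboard
SQUARE = 0.025            # assumed square size in metres

# 3D object points of the chessboard nodes (Z = 0 plane)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, wide_pts, tele_pts = [], [], []
for wide_file, tele_file in zip(sorted(glob.glob("wide_*.jpg")),
                                sorted(glob.glob("tele_*.jpg"))):
    wide = cv2.imread(wide_file, cv2.IMREAD_GRAYSCALE)
    tele = cv2.imread(tele_file, cv2.IMREAD_GRAYSCALE)
    ok_w, cw = cv2.findChessboardCorners(wide, PATTERN)
    ok_t, ct = cv2.findChessboardCorners(tele, PATTERN)
    if ok_w and ok_t:
        obj_pts.append(objp)
        wide_pts.append(cw)
        tele_pts.append(ct)

# Intrinsics and distortion of each lens from the same exposures
_, K_w, d_w, _, _ = cv2.calibrateCamera(obj_pts, wide_pts, wide.shape[::-1], None, None)
_, K_t, d_t, _, _ = cv2.calibrateCamera(obj_pts, tele_pts, tele.shape[::-1], None, None)

# Relative extrinsics (R, T) held constant over all exposure positions
_, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, wide_pts, tele_pts, K_w, d_w, K_t, d_t,
    wide.shape[::-1], flags=cv2.CALIB_FIX_INTRINSIC)
```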


Sensors, 2021, Vol. 21 (23), pp. 8112
Author(s): Xudong Lv, Shuo Wang, Dong Ye

As an essential procedure in data fusion, LiDAR-camera calibration is critical for autonomous vehicles and robot navigation. Most calibration methods require laborious manual work, complicated environmental settings, and specific calibration targets. Targetless methods rely on complex optimization workflows that are time-consuming and require prior information. Convolutional neural networks (CNNs) can regress the six-degrees-of-freedom (6-DOF) extrinsic parameters from raw LiDAR and image data. However, existing CNN-based methods only learn representations of the projected LiDAR data and the image and ignore the correspondences at different locations, so their performance is unsatisfactory and worse than that of non-CNN methods. In this paper, we propose a novel CNN-based LiDAR-camera extrinsic calibration algorithm named CFNet. We first introduce a correlation layer to provide explicit matching capabilities. We then define calibration flow to describe the deviation of the initial projection from the ground truth. Instead of directly predicting the extrinsic parameters, CFNet predicts the calibration flow, and the efficient Perspective-n-Point (EPnP) algorithm within a RANdom SAmple Consensus (RANSAC) scheme estimates the extrinsic parameters from the 2D–3D correspondences constructed by the calibration flow. Because it takes geometric information into account, the proposed method outperforms state-of-the-art CNN-based methods on the KITTI datasets. Furthermore, we also tested the flexibility of our approach on the KITTI360 datasets.
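As an illustration of the final pose-recovery stage, the sketch below applies EPnP inside a RANSAC loop via OpenCV's solvePnPRansac. The 2D-3D correspondences are assumed to come from the predicted calibration flow; the CFNet network itself is not reproduced, and the function and parameter names are illustrative.

```python
# Illustrative sketch: estimate LiDAR-to-camera extrinsics with EPnP + RANSAC
# from 2D-3D correspondences (here assumed inputs, e.g. built from calibration flow).
import cv2
import numpy as np

def estimate_extrinsics(pts_3d: np.ndarray, pts_2d: np.ndarray, K: np.ndarray):
    """pts_3d: (N, 3) LiDAR points, pts_2d: (N, 2) matched image locations,
    K: (3, 3) camera intrinsic matrix."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d.astype(np.float64),
        pts_2d.astype(np.float64),
        K, None,                       # assume undistorted image coordinates
        flags=cv2.SOLVEPNP_EPNP,
        reprojectionError=3.0,
        iterationsCount=1000)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)         # rotation vector -> 3x3 matrix
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T, inliers                  # 4x4 LiDAR-to-camera extrinsic matrix
```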


Nanomaterials, 2021, Vol. 11 (8), pp. 2095
Author(s): Ruksan Nadarajah, Leyla Tasdemir, Christian Thiel, Soma Salamon, Anna S. Semisalova, ...

Magnetic-field-induced strand formation of ferromagnetic Fe-Ni nanoparticles in a PMMA matrix is correlated with intrinsic material parameters, such as magnetization, particle size, and composition, and with extrinsic parameters, including magnetic field strength and viscosity. Since various factors can influence strand formation, understanding the composite fabrication process that preserves the Fe-Ni strand lengths in the generated structures is a fundamental step toward predicting the resulting structures. Hence, the critical dimensions of the strands (length, width, spacing, and aspect ratio) were investigated experimentally and simulated for different intrinsic and extrinsic parameters. Optimal parameters for strand formation of Fe50Ni50 nanoparticles were determined by optical microscopy measurements and finite-element simulations in COMSOL. The anisotropic behavior of the aligned strands was characterized by magnetometry measurements. Compared to unaligned samples, the magnetically aligned strands exhibit enhanced conductivity, increasing the current by a factor of 1000.


2021, Vol. 2021, pp. 1-10
Author(s): Chuan He, Lianxiong Liu, Changhua Hu

Mobile vision methods are often used for deformation monitoring of large-scale structures. However, most existing studies rarely consider real-time operation or the variation of the intrinsic parameters. This paper proposes a real-time deformation monitoring method for large-scale structures based on a relay camera. First, the real-time pose of the relay camera is obtained using coded mark points whose coordinates are known, and the real-time extrinsic parameters of the measuring camera are then solved from the constraint relationship between the relay camera and the measuring camera. Second, the real-time intrinsic parameters of the measuring camera are calculated from the real-time constraint relationship among the extrinsic parameters, the intrinsic parameters, and the fundamental matrix. Finally, the coordinates of the non-coded measured mark points, which are affixed to the surface of the structure, are obtained. Experimental results show that the accuracy of the proposed method is better than 1.8 mm. In addition, the proposed method operates in real time and is fully automatic.
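A minimal sketch of the pose chain described above is given here, assuming the relay camera observes coded mark points with known world coordinates and that the relay-to-measuring-camera transform has been calibrated beforehand. All function and variable names are illustrative, not taken from the paper.

```python
# Sketch of chaining a relay-camera pose (from coded marks) to the measuring
# camera's extrinsics via a pre-calibrated fixed transform (assumed known).
import cv2
import numpy as np

def camera_pose_from_coded_marks(world_pts, image_pts, K, dist):
    """Pose of the relay camera from coded marks (world_pts: Nx3, image_pts: Nx2)."""
    ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T                               # world -> relay-camera transform

def measuring_camera_extrinsics(T_world_relay, T_relay_measuring):
    """Chain the fixed relay/measuring-camera constraint to get the measuring
    camera's real-time extrinsic parameters."""
    return T_relay_measuring @ T_world_relay   # world -> measuring-camera
```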


2021, Vol. 2021, pp. 1-9
Author(s): Qin Shi, Huansheng Song, Shijie Sun

Calibration of the extrinsic parameters of an RGB-D camera is useful in many fields, such as 3D scene reconstruction, robotics, and target detection. Many calibration methods employ a specific calibration object (e.g., a chessboard or cuboid) to calibrate the extrinsic parameters of the RGB-D color camera without using the depth map. As a result, the calibration process is difficult to simplify, and the color sensor is calibrated instead of the depth sensor. To this end, we propose a method that employs the depth map to perform extrinsic calibration automatically. In detail, the depth map is first transformed into a 3D point cloud in the camera coordinate system, and the planes in the point cloud are then detected automatically using the Maximum Likelihood Estimation Sample Consensus (MLESAC) method. After that, using the constraint relationship between the ground plane and the world coordinate system, all planes are traversed and screened until the ground plane is obtained. Finally, the extrinsic parameters are calculated from the spatial relationship between the ground plane and the camera coordinate system. The results show that the mean roll angle error of the extrinsic calibration was −1.14°, the mean pitch angle error was 4.57°, and the mean camera height error was 3.96 cm. The proposed method can accurately and automatically estimate the extrinsic parameters of a camera. Furthermore, after parallel optimization, it achieves real-time performance for automatically estimating a robot's attitude.
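A rough sketch of ground-plane-based extrinsic estimation is shown below. A plain RANSAC plane fit stands in for the MLESAC step used in the paper, the depth-to-point-cloud conversion assumes a simple pinhole model, and the roll/pitch/height formulas assume a particular axis convention; all names are illustrative.

```python
# Rough sketch: back-project a depth map, fit the ground plane with RANSAC
# (MLESAC in the original work), and read off roll, pitch, and camera height.
import numpy as np

def depth_to_pointcloud(depth, K):
    """Back-project a depth map (metres) into the camera coordinate system."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

def ransac_plane(pts, iters=500, thresh=0.02):
    """Fit a plane n.p + d = 0 with simple RANSAC."""
    best_inliers, best_model = 0, None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:
            continue
        n = n / np.linalg.norm(n)
        d = -n @ sample[0]
        inliers = np.abs(pts @ n + d) < thresh
        if inliers.sum() > best_inliers:
            best_inliers, best_model = inliers.sum(), (n, d)
    return best_model

def extrinsics_from_ground(n, d):
    """Roll, pitch, and camera height from the ground-plane parameters."""
    if n[1] < 0:                     # make the normal point consistently
        n, d = -n, -d
    pitch = np.arcsin(-n[2])         # assumed convention: x right, y down, z forward
    roll = np.arctan2(n[0], n[1])
    height = abs(d)                  # distance from the camera origin to the ground
    return np.degrees(roll), np.degrees(pitch), height
```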


Sensors, 2021, Vol. 21 (14), pp. 4643
Author(s): Sang Jun Lee, Jeawoo Lee, Wonju Lee, Cheolhun Jang

In intelligent vehicles, extrinsic camera calibration should preferably be conducted on a regular basis to cope with unpredictable mechanical changes or variations in weight load distribution. Specifically, high-precision extrinsic parameters between the camera coordinate system and the world coordinate system are essential for implementing high-level functions in intelligent vehicles, such as distance estimation and lane departure warning. However, conventional calibration methods, which solve a Perspective-n-Point problem, require laborious work to measure the positions of 3D points in the world coordinate system. To reduce this inconvenience, this paper proposes an automatic camera calibration method based on 3D reconstruction. The main contribution of this paper is a novel reconstruction method that recovers 3D points on planes perpendicular to the ground. The proposed method jointly optimizes the reprojection errors of image features projected from multiple planar surfaces and thereby significantly reduces errors in the camera extrinsic parameters. Experiments were conducted in synthetic simulation and real calibration environments to demonstrate the effectiveness of the proposed method.
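To make the idea of joint reprojection-error optimization concrete, the schematic sketch below refines camera extrinsics by minimizing pooled reprojection residuals with scipy.optimize.least_squares. The parameterization and the feature correspondences are placeholders, not the authors' formulation.

```python
# Schematic sketch: refine extrinsics (Rodrigues rotation + translation) by
# minimizing reprojection residuals of features pooled from multiple planes.
import numpy as np
import cv2
from scipy.optimize import least_squares

def residuals(params, pts_3d, pts_2d, K):
    """Reprojection residuals; params = [rx, ry, rz, tx, ty, tz]."""
    rvec, tvec = params[:3], params[3:6]
    proj, _ = cv2.projectPoints(pts_3d, rvec, tvec, K, None)
    return (proj.reshape(-1, 2) - pts_2d).ravel()

def refine_extrinsics(pts_3d, pts_2d, K, x0=np.zeros(6)):
    """Jointly minimise reprojection error over features from all planar surfaces."""
    result = least_squares(residuals, x0, args=(pts_3d, pts_2d, K), method="lm")
    rvec, tvec = result.x[:3], result.x[3:]
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```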

