extrinsic parameter
Recently Published Documents


TOTAL DOCUMENTS: 44 (past five years: 7)
H-INDEX: 7 (past five years: 0)

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Qin Shi ◽  
Huansheng Song ◽  
Shijie Sun

Calibration of the extrinsic parameters of an RGB-D camera is useful in many fields, such as 3D scene reconstruction, robotics, and target detection. Many calibration methods employ a specific calibration object (e.g., a chessboard or cuboid) to calibrate the extrinsic parameters of the RGB-D color camera without using the depth map. As a result, the calibration process is difficult to simplify, and the color sensor is calibrated instead of the depth sensor. To this end, we propose a method that employs the depth map to perform extrinsic calibration automatically. In detail, the depth map is first transformed into a 3D point cloud in the camera coordinate system, and the planes in the point cloud are then detected automatically using the Maximum Likelihood Estimation Sample Consensus (MLESAC) method. After that, according to the constraint relationship between the ground plane and the world coordinate system, all planes are traversed and screened until the ground plane is obtained. Finally, the extrinsic parameters are calculated from the spatial relationship between the ground plane and the camera coordinate system. The results show a mean roll angle error of −1.14°, a mean pitch angle error of 4.57°, and a mean camera height error of 3.96 cm. The proposed method can accurately and automatically estimate the extrinsic parameters of a camera, and after parallel optimization it achieves real-time performance for automatically estimating a robot's attitude.
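
A minimal sketch of the ground-plane idea described above, assuming Open3D's RANSAC plane fit as a stand-in for the paper's MLESAC detector and taking only the dominant plane (the paper additionally screens candidate planes against the world-frame constraints); the intrinsics and the camera-axis convention (x right, y down, z forward) are assumptions:

```python
import numpy as np
import open3d as o3d

def depth_to_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera-frame 3D points."""
    v, u = np.indices(depth.shape)
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid zero-depth pixels

def ground_plane_extrinsics(points):
    """Fit the dominant plane and derive roll, pitch, and camera height."""
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(points)
    (a, b, c, d), _ = cloud.segment_plane(distance_threshold=0.02,
                                          ransac_n=3, num_iterations=500)
    n = np.array([a, b, c])
    n, d = n / np.linalg.norm(n), d / np.linalg.norm(n)
    if n[1] > 0:                       # orient the normal upward (y points down)
        n, d = -n, -d
    roll = np.arctan2(n[0], -n[1])     # rotation about the optical axis
    pitch = np.arctan2(n[2], -n[1])    # forward tilt of the camera
    return roll, pitch, abs(d)         # |d| = distance from camera to ground
```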



2021 ◽  
Vol 11 (13) ◽  
pp. 6014
Author(s):  
Kai Guo ◽  
Hu Ye ◽  
Junhao Gu ◽  
Honglin Chen

The aim of the perspective-three-point (P3P) problem is to estimate the extrinsic parameters of a camera, i.e., its orientation and position, from three 2D–3D point correspondences. All P3P solvers exhibit a multi-solution phenomenon, yielding up to four solutions, and require a fully calibrated camera. In contrast, in this paper we propose a novel method for intrinsic and extrinsic parameter estimation from three 2D–3D point correspondences with a known camera position. Our core contribution is to build a new virtual camera system, whose frame and image plane are defined by the original 3D points, and a new intermediate world frame, defined by the original image plane and the original 2D image points, thereby converting our problem into a P3P problem. Intrinsic and extrinsic parameter estimation then reduces to solving frame transformations and the P3P problem. Lastly, we resolve the multi-solution ambiguity using the image resolution. Experimental results on synthetic data and real images demonstrate the accuracy and numerical stability of the method and the uniqueness of its solution.
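
For context, the fully calibrated P3P problem that this paper generalizes can be posed directly with OpenCV, and the multi-solution phenomenon mentioned above is visible in the up-to-four candidate poses returned; the points and intrinsics below are hypothetical:

```python
import numpy as np
import cv2

# Three hypothetical 3D points (world frame) and their 2D projections.
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0]])
image_points = np.array([[320.0, 240.0],
                         [480.0, 250.0],
                         [330.0, 120.0]])

K = np.array([[800.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                   # assume no lens distortion

# Classic P3P: returns the number of solutions and up to four (R, t) pairs.
n_sol, rvecs, tvecs = cv2.solveP3P(object_points, image_points, K, dist,
                                   flags=cv2.SOLVEPNP_P3P)
print(f"{n_sol} candidate pose(s)")
for rvec, tvec in zip(rvecs, tvecs):
    R, _ = cv2.Rodrigues(rvec)       # rotation vector -> rotation matrix
    print("R =\n", R, "\nt =", tvec.ravel())
```

The proposed method differs in that the camera position is known and the intrinsics are unknown, so the solver above only illustrates the sub-problem the paper reduces to.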



2021 ◽  
Author(s):  
Yunsong Zhou ◽  
Yuan He ◽  
Hongzi Zhu ◽  
Cheng Wang ◽  
Hongyang Li ◽  
...  


Electronics ◽  
2021 ◽  
Vol 10 (8) ◽  
pp. 970
Author(s):  
Liling Zhou ◽  
Yingzi Wang ◽  
Yunfei Liu ◽  
Haifeng Zhang ◽  
Shuaikang Zheng ◽  
...  

The emergence of Automated Guided Vehicles (AGVs) has greatly increased the efficiency of the transportation industry, which raises urgent requirements for accurate and easy-to-use positioning of robots moving in a 2D plane. Multi-sensor fusion has gradually become an important technical route for improving AGV positioning. As a sensor that directly acquires depth, the RGB-D camera has received extensive attention for indoor positioning in recent years, while wheel odometry comes built into most planar-motion robots and its parameters do not change over time. Both are commonly used sensors for indoor robot positioning, but existing research on fusing them is largely limited to classic filtering algorithms; few optimization-based fusion solutions are currently available. To ensure practicability and greatly improve the accuracy of RGB-D/odometry fusion positioning, this paper proposes a tightly coupled positioning scheme that fuses an online-calibrated RGB-D camera with wheel odometry under SE(2) plane constraints. Experiments show that the rotational error of the calibrated extrinsic parameters is less than 0.5° and the translational error reaches the millimeter level. In field tests, the proposed system achieves centimeter-level positioning accuracy on the dataset without pre-calibration, outperforming ORB-SLAM2 relying solely on an RGB-D camera. The experimental results verify the framework's strong performance in positioning accuracy and ease of use and show that it is a promising technical solution for two-dimensional AGV positioning.
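
A minimal sketch of the SE(2) machinery underlying such a tightly coupled scheme: composing wheel-odometry increments on SE(2) and forming the relative-pose residual an optimization back end would minimize (the paper's actual factor formulation, online extrinsic calibration, and solver are not reproduced; all names here are illustrative):

```python
import numpy as np

def wrap(theta):
    """Wrap an angle to (-pi, pi]."""
    return np.arctan2(np.sin(theta), np.cos(theta))

def se2_compose(a, b):
    """Compose two SE(2) poses given as (x, y, theta)."""
    x, y, th = a
    return (x + np.cos(th) * b[0] - np.sin(th) * b[1],
            y + np.sin(th) * b[0] + np.cos(th) * b[1],
            wrap(th + b[2]))

def se2_between(a, b):
    """Relative pose a^{-1} * b, i.e. pose b expressed in a's frame."""
    x, y, th = a
    dx, dy = b[0] - x, b[1] - y
    return (np.cos(th) * dx + np.sin(th) * dy,
            -np.sin(th) * dx + np.cos(th) * dy,
            wrap(b[2] - th))

def odometry_residual(pose_i, pose_j, odom_ij):
    """Error between the predicted relative pose and the measured
    wheel-odometry increment; a graph optimizer minimizes its norm."""
    err = np.array(se2_between(pose_i, pose_j)) - np.array(odom_ij)
    err[2] = wrap(err[2])
    return err

# Dead reckoning: integrate two odometry increments from the origin.
pose = (0.0, 0.0, 0.0)
for inc in [(0.5, 0.0, 0.1), (0.5, 0.0, 0.1)]:
    pose = se2_compose(pose, inc)
print(pose)
```

Constraining the estimated camera trajectory to SE(2) in this way is what lets planar wheel odometry and the RGB-D visual factors share one state space.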




2020 ◽  
Vol 31 (04) ◽  
pp. 2050060
Author(s):  
Zineb Tahiri ◽  
Kamal Jetto ◽  
Marouane Bouadi ◽  
Abdelilah Benyoussef ◽  
Abdallah El Kenz

In this paper, we point out the features of the correlation between the lanes of a two-lane road induced by its entry. For this purpose, we adopt a quasi-one-dimensional system composed of a diverging node connecting two roads, in which no lane changing is allowed. Our study highlights the strong effect of the node: if we create a disturbance in one lane, a spontaneous symmetry breaking occurs in the whole system. In fact, a self-anisotropy is produced at the node, to which the system responds via a self-organization mechanism. These results prompted us to investigate the anisotropy as an extrinsic parameter. By privileging one lane over the other at the node, we confirm that the system always self-organizes and that three phases can be established: a symmetric high-density phase, an asymmetric low-density phase, and an asymmetric low-density/high-density transition phase. Finally, we find that the system is strongly correlated when it is in a symmetric phase and not when it is in an asymmetric phase. This finding leads us to the hypothesis that the cross-correlation of the observables of a quasi-one-dimensional system can be considered an order parameter that characterizes the phase transitions.
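
The abstract does not spell out the microscopic update rules, but the setting is reminiscent of a TASEP-like exclusion process; the sketch below is a purely illustrative toy model in which an entry segment feeds a diverging node that routes particles into two non-communicating lanes, with a routing bias standing in for the extrinsic anisotropy:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 100                  # sites per segment (illustrative)
alpha, beta = 0.3, 0.9   # entry and exit rates (illustrative)
p_bias = 0.7             # node routing bias; 0.5 restores lane symmetry

inlet = np.zeros(L, dtype=int)                       # segment before the node
lanes = [np.zeros(L, dtype=int) for _ in range(2)]   # two lanes after the node

def step():
    if inlet[0] == 0 and rng.random() < alpha:       # injection at the entry
        inlet[0] = 1
    for i in range(L - 2, -1, -1):                   # hops toward the node
        if inlet[i] == 1 and inlet[i + 1] == 0:
            inlet[i], inlet[i + 1] = 0, 1
    if inlet[-1] == 1:                               # biased routing at the node
        k = 0 if rng.random() < p_bias else 1
        if lanes[k][0] == 0:
            inlet[-1], lanes[k][0] = 0, 1
    for lane in lanes:                               # no lane changing allowed
        if lane[-1] == 1 and rng.random() < beta:    # exit at the far end
            lane[-1] = 0
        for i in range(L - 2, -1, -1):
            if lane[i] == 1 and lane[i + 1] == 0:
                lane[i], lane[i + 1] = 0, 1

for _ in range(20000):
    step()
print("lane densities:", [lane.mean() for lane in lanes])
```

Tracking the two lane densities over time and computing their cross-correlation would reproduce, in spirit, the order-parameter measurement discussed above.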



Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4989
Author(s):  
Truong ◽  
Philips ◽  
Deligiannis ◽  
Abrahamyan ◽  
Guan

Extrinsic camera calibration is essential for any computer vision task in a camera network. Typically, researchers place a calibration object in the scene to calibrate all the cameras in a camera network. However, when installing cameras in the field, this approach can be costly and impractical, especially when recalibration is needed. This paper proposes a novel, accurate, and fully automatic extrinsic calibration framework for camera networks with partially overlapping views. The proposed method treats the pedestrians in the observed scene as calibration objects and analyzes their tracks to obtain the extrinsic parameters. Compared to the state of the art, the new method is fully automatic and robust in various environments. Our method detects human poses in the camera images and models walking persons as vertical sticks. We apply a brute-force method to determine the correspondences between persons across camera images. This information, together with the estimated 3D locations of the top and bottom of each pedestrian, is then used to compute the extrinsic calibration matrices. We also propose a novel method to calibrate the camera network using only the top and centerline of a person when the bottom is not visible in heavily occluded scenes. We verified the robustness of the method in different camera setups and for both single and multiple walking people. The results show that a triangulation error of a few centimeters can be obtained. Typically, less than one minute of observing walking people is required to reach this accuracy in controlled environments, and only a few minutes of data collection suffice in uncontrolled environments. Our proposed method performs well in various situations, including multiple people, occlusions, and even real street intersections.
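
A rough sketch of the geometric core for a two-camera case, assuming the pedestrians' head and foot image points have already been detected and matched across views (the paper's pose detector and brute-force correspondence search are not reproduced); OpenCV's essential-matrix pipeline then recovers the relative extrinsics up to scale:

```python
import numpy as np
import cv2

def relative_extrinsics(pts1, pts2, K1, K2):
    """Estimate R, t of camera 2 w.r.t. camera 1 (t up to scale) from
    matched head/foot points accumulated over many walking frames.
    pts1, pts2: (N, 2) arrays of corresponding image points."""
    # Normalize with each camera's intrinsics so one essential matrix
    # applies even when the two cameras differ.
    p1 = cv2.undistortPoints(pts1.reshape(-1, 1, 2).astype(np.float64), K1, None)
    p2 = cv2.undistortPoints(pts2.reshape(-1, 1, 2).astype(np.float64), K2, None)
    E, inliers = cv2.findEssentialMat(p1, p2, np.eye(3),
                                      method=cv2.RANSAC, threshold=1e-3)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, np.eye(3), mask=inliers)
    return R, t, p1, p2

def triangulate(R, t, p1, p2):
    """Triangulate the matched points in camera 1's frame (metric only
    up to the unknown translation scale)."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, p1.reshape(-1, 2).T, p2.reshape(-1, 2).T)
    return (X[:3] / X[3]).T
```

The scale ambiguity left by recoverPose is presumably what the vertical-stick model resolves: a known or assumed person height fixes the metric scale of the network.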


