Single image based camera calibration and pose estimation of the end-effector of a robot

Author(s):  
R. A. Boby ◽  
S. K. Saha


Author(s):  
Kulalvaimozhi. V. P. ◽  
Germanus Alex. M ◽  
John Peter. S

Virtual human bodies, clothing, and hair are widely used in scenarios such as 3D animated movies, gaming, and online fashion, and machine learning can be used to construct data-driven 3D models of them. In this thesis, we present a solution to 3D shape and pose estimation in the most challenging setting: only a single image is available, and it is captured in a natural environment with unknown camera calibration. We also demonstrate that a simplified 2D clothing model significantly increases the accuracy of 2D body shape estimation.
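
As one illustration of the fitting problem this abstract describes, the sketch below jointly estimates pose, shape, and an unknown focal length by minimizing the 2D reprojection error of body keypoints from a single image. The linear `mean_joints`/`shape_basis` body model and all numbers are hypothetical stand-ins for a learned, data-driven model, not the thesis's actual formulation.

```python
# Hypothetical sketch: fit pose, shape, and an unknown focal length to 2D keypoints.
# `mean_joints` and `shape_basis` are stand-ins for a learned body model.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
mean_joints = rng.normal(size=(17, 3))            # template 3D joint positions
shape_basis = 0.1 * rng.normal(size=(4, 17, 3))   # linear shape blend directions

def residuals(params, joints2d):
    """Reprojection residuals for [focal, axis-angle rot, translation, shape]."""
    f, rvec, t, betas = params[0], params[1:4], params[4:7], params[7:]
    joints3d = mean_joints + np.einsum("k,kij->ij", betas, shape_basis)
    theta = np.linalg.norm(rvec) + 1e-12           # Rodrigues rotation formula
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    cam = joints3d @ R.T + t                       # joints in camera frame
    proj = f * cam[:, :2] / cam[:, 2:3]            # pinhole projection
    return (proj - joints2d).ravel()

# Synthetic "observed" keypoints generated from known ground-truth parameters
gt = np.concatenate([[1000.0], [0.1, 0.2, 0.0], [0.0, 0.0, 5.0], [0.5, -0.3, 0.2, 0.1]])
obs = residuals(gt, np.zeros((17, 2))).reshape(17, 2)

x0 = np.concatenate([[800.0], [0.01, 0.01, 0.01], [0.0, 0.0, 4.0], np.zeros(4)])
fit = least_squares(residuals, x0, args=(obs,))
print(f"estimated focal length: {fit.x[0]:.1f} (true: 1000.0)")
```

Treating the focal length as one more optimization variable is one common way to cope with unknown camera calibration; a real system would also need a keypoint detector and priors on plausible poses and shapes.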


2018 ◽  
Vol 2018 ◽  
pp. 1-13
Author(s):  
M. Vynnycky ◽  
G. M. M. Reddy

The perspective 3-point (P3P) problem, also known as pose estimation, has its origins in camera calibration and is important in many fields, including computer animation, automation, image analysis, and robotics. One possibility is to formulate it mathematically as finding the solution of a quartic equation. However, there is as yet no quantitative knowledge of how control-point spacing affects the solution structure, in particular the multisolution phenomenon. Here, we consider this problem through an algebraic analysis of the quartic's coefficients and its discriminant, and we find that the likelihood of two versus four solutions varies significantly with how the spacing is chosen. The analysis indicates that although the four-solution case can never be eliminated completely, it may be possible to choose spacings that maximize the occurrence of two real solutions. Moreover, control-point spacing is found to have a significant impact on the reality conditions for the solutions of the quartic equation.
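
The two-versus-four-solution question the authors analyze turns on the sign of the quartic's discriminant. The sketch below evaluates the standard discriminant formula for a*x^4 + b*x^3 + c*x^2 + d*x + e and cross-checks it by counting real roots numerically; the coefficients are illustrative, not derived from any particular control-point configuration.

```python
# Classify the real-solution structure of a*x^4 + b*x^3 + c*x^2 + d*x + e = 0.
# Coefficients are illustrative, not from an actual P3P configuration.
import numpy as np

def quartic_discriminant(a, b, c, d, e):
    """Standard discriminant formula for a quartic with real coefficients."""
    return (256*a**3*e**3 - 192*a**2*b*d*e**2 - 128*a**2*c**2*e**2
            + 144*a**2*c*d**2*e - 27*a**2*d**4 + 144*a*b**2*c*e**2
            - 6*a*b**2*d**2*e - 80*a*b*c**2*d*e + 18*a*b*c*d**3
            + 16*a*c**4*e - 4*a*c**3*d**2 - 27*b**4*e**2
            + 18*b**3*c*d*e - 4*b**3*d**3 - 4*b**2*c**3*e + b**2*c**2*d**2)

def count_real_roots(coeffs, tol=1e-9):
    """Numerical cross-check: count roots with (near-)zero imaginary part."""
    return int(np.sum(np.abs(np.roots(coeffs).imag) < tol))

for coeffs in [(1, 0, -5, 0, 4),    # (x^2-1)(x^2-4): four real roots
               (1, 0, 3, 0, 2),     # (x^2+1)(x^2+2): no real roots
               (1, 0, -1, 0, -2)]:  # (x^2-2)(x^2+1): two real roots
    disc = quartic_discriminant(*coeffs)
    n = count_real_roots(coeffs)
    print(f"coeffs={coeffs}  discriminant={float(disc):+.4g}  real roots={n}")
```

A negative discriminant guarantees exactly two real roots, while a positive one means the roots are either all real or all complex, which is why a full classification also needs the signs of auxiliary quantities such as P = 8ac - 3b^2.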


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Mingrui Luo ◽  
En Li ◽  
Rui Guo ◽  
Jiaxin Liu ◽  
Zize Liang

Redundant manipulators are well suited to working in narrow, complex environments because of their flexibility. However, their many joints and long slender links make it difficult to obtain an accurate end-effector pose directly from the encoders. In this paper, a pose estimation method is proposed that fuses vision sensors, inertial sensors, and encoders. First, the raw data are corrected and enhanced by exploiting the complementary characteristics of the measurement units. An improved Kalman filter (KF) algorithm is then used for data fusion, built on a nonlinear motion prediction model of the end-effector and a synchronization update model for the multirate sensors. Finally, a radial basis function (RBF) neural network adaptively adjusts the fusion parameters. Experiments verify that the proposed method achieves lower estimation error and a higher update frequency than the original extended Kalman filter (EKF) and unscented Kalman filter (UKF) algorithms, especially in complex environments.
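
A much-simplified sketch of the fusion idea, not the paper's algorithm: a constant-velocity Kalman filter fuses a fast, drifting encoder signal with slower vision fixes, and a small normalized-RBF weighting inflates the vision measurement covariance when the innovation is large (a simple outlier-robustness heuristic standing in for the paper's learned adjustment). All rates, noise levels, and RBF parameters are illustrative assumptions.

```python
# Simplified multirate fusion sketch with RBF-adapted measurement covariance.
import numpy as np

dt = 0.01                                    # 100 Hz filter rate
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity motion model
H = np.array([[1.0, 0.0]])                   # both sensors observe position
Q = np.diag([1e-8, 1e-6])                    # process noise

centers = np.array([0.0, 0.005, 0.02])       # RBF centers over |innovation|
weights = np.array([1.0, 4.0, 20.0])         # inflate R when innovation is large

def rbf_scale(innov, width=0.01):
    """Normalized-RBF scale factor for the measurement covariance."""
    phi = np.exp(-((abs(innov) - centers) ** 2) / (2 * width ** 2))
    return float(weights @ phi / (phi.sum() + 1e-12))

def kf_update(x, P, z, R):
    """Standard scalar-measurement Kalman update."""
    innov = z - (H @ x)[0]
    S = (H @ P @ H.T)[0, 0] + R
    K = (P @ H.T / S).ravel()
    return x + K * innov, (np.eye(2) - np.outer(K, H)) @ P

rng = np.random.default_rng(1)
x, P = np.zeros(2), np.eye(2)
for k in range(500):
    truth = 0.1 * np.sin(0.02 * k)                     # true position
    x, P = F @ x, F @ P @ F.T + Q                      # predict every step
    z_enc = truth + 2e-5 * k + rng.normal(0, 0.001)    # drifting encoder signal
    x, P = kf_update(x, P, z_enc, 0.001 ** 2)          # fast encoder update
    if k % 10 == 0:                                    # vision arrives at 10 Hz
        z_vis = truth + rng.normal(0, 0.0005)
        innov = z_vis - (H @ x)[0]
        R_vis = 0.0005 ** 2 * rbf_scale(innov)         # adaptively scaled R
        x, P = kf_update(x, P, z_vis, R_vis)
print(f"final position error: {abs(x[0] - truth):.4f} m")
```

The paper's method additionally folds in inertial data and a nonlinear motion prediction model; this sketch only miniaturizes the multirate update and the RBF-driven covariance adaptation.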


Author(s):  
Jiacheng Rong ◽  
Guanglin Dai ◽  
Pengbo Wang

To automate the harvesting of tomato bunches in a greenhouse, the end-effector must reach the exact cutting point and adaptively adjust its pose to the peduncle. In this paper, a method is proposed for peduncle cutting point localization and pose estimation. Images captured in real time at a fixed long range are first processed by the YOLOv4-Tiny detector, which achieves a precision of 92.7% at 0.0091 s per frame; the YOLACT++ network, with an mAP of 73.1 at 0.109 s per frame, then segments the peduncle in close-up images. The segmented peduncle mask is fitted to a curve using least squares, and three key points on the curve are located. Finally, a geometric model is established to estimate the pose of the peduncle, with an average error of 4.98° in yaw and 4.75° in pitch over 30 sets of tests.
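
The curve-fitting step lends itself to a short sketch: fit a least-squares quadratic to the pixel coordinates of a segmented mask, sample three key points along the fitted curve, and take the tangent at the middle point as an in-plane orientation. The synthetic mask points, the quadratic model, and the choice of the midpoint as the cutting point are assumptions for illustration; the paper's geometric model for yaw and pitch is not reproduced here.

```python
# Illustrative sketch: least-squares curve fit to synthetic peduncle mask pixels,
# three key points on the curve, and the tangent angle at the middle point.
import numpy as np

rng = np.random.default_rng(2)
xs = np.linspace(100, 200, 60)                   # synthetic mask pixel columns
ys = 0.004 * (xs - 120) ** 2 + 80 + rng.normal(0, 0.5, xs.size)

coeffs = np.polyfit(xs, ys, deg=2)               # least-squares quadratic fit
curve, slope = np.poly1d(coeffs), np.poly1d(coeffs).deriv()

# Three key points: both ends and the midpoint of the fitted curve
key_x = np.array([xs.min(), 0.5 * (xs.min() + xs.max()), xs.max()])
key_pts = np.stack([key_x, curve(key_x)], axis=1)

cut_x, cut_y = key_pts[1]                        # middle point as cutting point
angle = np.degrees(np.arctan(slope(cut_x)))      # in-plane tangent angle
print(f"cutting point: ({cut_x:.1f}, {cut_y:.1f}) px, tangent angle {angle:.1f} deg")
```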

