Binocular vision-based 3-D trajectory following for autonomous robotic manipulation

Robotica ◽  
2007 ◽  
Vol 25 (5) ◽  
pp. 615-626 ◽  
Author(s):  
Wen-Chung Chang

SUMMARY: Robotic manipulators interacting with uncalibrated environments typically have limited positioning and tracking capabilities if control tasks cannot be appropriately encoded using available features in the environment. Specifically, performing 3-D trajectory-following operations with binocular vision seems to require a priori knowledge of pointwise correspondence between the two image planes. However, such an assumption cannot be made for arbitrary smooth 3-D trajectories. This paper describes how one might enhance autonomous robotic manipulation for 3-D trajectory-following tasks using eye-to-hand binocular visual servoing. Based on a novel encoded error, an image-based feedback control law is proposed that does not assume pointwise binocular correspondence information. The proposed control approach can guarantee task precision using only an approximately calibrated binocular vision system. The goal of the autonomous task is to drive a tool mounted on the end-effector of the robotic manipulator to follow a visually determined smooth 3-D target trajectory at a desired speed with precision. The proposed control architecture is suitable for applications that require precise 3-D positioning and tracking in unknown environments. The approach is successfully validated in a real task environment through experiments with an industrial robotic manipulator.
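The abstract describes an image-based feedback control law driven by a visual error signal. As a rough illustration of that family of controllers (not the paper's specific encoded-error law, whose details are not given here), a minimal sketch of the classic image-based visual servoing update `v = -λ L⁺ (s - s*)` might look like the following; the function name and the use of a plain pseudoinverse are assumptions for illustration.

```python
import numpy as np

def ibvs_velocity(L, s, s_star, gain=0.5):
    """Generic image-based visual servoing law: v = -gain * pinv(L) @ (s - s*).

    L      : interaction (image Jacobian) matrix, shape (k, 6)
    s      : current image-feature vector, shape (k,)
    s_star : desired image-feature vector, shape (k,)
    Returns a 6-DOF velocity command (3 translational, 3 rotational).
    """
    error = s - s_star                      # image-space error
    return -gain * np.linalg.pinv(L) @ error
```

In this generic scheme, the command is recomputed at each camera frame until the image error vanishes; the paper's contribution is an error encoding that avoids needing pointwise binocular correspondence to form `s` and `s*`.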

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5271 ◽ 
Author(s):  
Di Fan ◽  
Yanyang Liu ◽  
Xiaopeng Chen ◽  
Fei Meng ◽  
Xilong Liu ◽  
...  

Three-dimensional (3D) triangulation based on active binocular vision has an increasing number of applications in computer vision and robotics. An active binocular vision system with non-fixed cameras needs to calibrate the stereo extrinsic parameters online to perform 3D triangulation. However, the accuracy of the stereo extrinsic parameters and of the disparity has a significant impact on 3D triangulation precision. To reduce this impact, we propose a novel eye-gaze-based 3D triangulation method that does not use the stereo extrinsic parameters directly. Instead, we drive both cameras to gaze at a 3D spatial point P, bringing it onto each camera's optical axis through visual servoing. We can then obtain the 3D coordinates of P from the intersection of the two optical axes. We have performed experiments comparing against previous disparity-based work, the integrated two-pose calibration (ITPC) method, using our robotic bionic eyes. The experiments show that our method achieves results comparable to ITPC.
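Once both cameras gaze at P, each optical axis defines a ray from the camera's optical center, and P lies at (or near) their intersection. Since two rays in 3D rarely intersect exactly under noise, a common estimate is the midpoint of the shortest segment between them. A minimal sketch of that midpoint triangulation follows; the function name is an assumption, and this is the generic geometric step rather than the paper's full pipeline.

```python
import numpy as np

def triangulate_gaze(c1, d1, c2, d2):
    """Estimate a 3D point as the midpoint of the shortest segment
    between two gaze rays (the cameras' optical axes).

    c1, c2 : camera optical centers, shape (3,)
    d1, d2 : unit gaze directions along the optical axes, shape (3,)
    """
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # ~0 when the axes are (near) parallel
    if abs(denom) < 1e-12:
        raise ValueError("gaze axes are parallel; no unique intersection")
    t1 = (b * e - c * d) / denom     # parameter of closest point on ray 1
    t2 = (a * e - b * d) / denom     # parameter of closest point on ray 2
    p1 = c1 + t1 * d1
    p2 = c2 + t2 * d2
    return 0.5 * (p1 + p2)          # midpoint of the shortest segment
```

For exactly intersecting axes the midpoint coincides with the intersection point; under gaze or calibration noise the residual segment length also gives a cheap consistency check.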


2014 ◽  
Vol 22 (8) ◽  
pp. 9134 ◽  
Author(s):  
Yi Cui ◽  
Fuqiang Zhou ◽  
Yexin Wang ◽  
Liu Liu ◽  
He Gao
