The On-Line Calibration Research of the Picking Robot Binocular Vision System

2012 ◽  
Vol 522 ◽  
pp. 634-637
Author(s):  
Ke Yin Chen ◽  
Xiang Jun Zou ◽  
Li Juan Chen

In research on picking robot binocular vision systems, camera calibration is an indispensable step and the basis for locating the target object and reconstructing its three-dimensional structure from robot stereo vision in follow-up work, so a camera calibration algorithm that is both accurate and simple is of great significance. However, most existing calibration algorithms require a reference object, namely a calibration target, to be placed in front of the camera, and positioning such a target is inconvenient or nearly impossible in some cases. Therefore, an online calibration algorithm for the picking robot based on the visual scene was proposed by studying the working-environment characteristics of the picking robot binocular vision system and projective geometric invariants. Experimental results showed that the algorithm's calibration accuracy and precision meet the requirements of camera calibration for the robot binocular vision system in complex environments.
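The paper's scene-based online method is not given as code here; for context, the following is a minimal sketch of the conventional target-based calibration it seeks to replace, using OpenCV's checkerboard routines. The board dimensions, square size, and image paths are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Conventional target-based calibration: detect a checkerboard in several
# views and solve for intrinsics and distortion. The online, scene-based
# method in the paper avoids the need for this physical target.
BOARD = (9, 6)            # inner corners per row/column (assumed board)
SQUARE = 0.025            # square size in metres (assumed)

obj_template = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
obj_template[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for path in ["view_%02d.png" % i for i in range(10)]:   # hypothetical images
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(img, BOARD)
    if found:
        corners = cv2.cornerSubPix(
            img, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(obj_template)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```

For a binocular rig, the same detections from both cameras would typically be passed to cv2.stereoCalibrate to recover the extrinsic relationship between the two views.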

2013 ◽  
Vol 694-697 ◽  
pp. 1925-1930
Author(s):  
Xin Jie Wang ◽  
Zhi Lin Yang ◽  
Jie Liu

Robot location is a key technology for a quadruped robot with Hand-fused-Foot. A location method based on a binocular vision system is studied for this robot. After an image is obtained by a single camera, the object is segmented using feature extraction based on its color characteristics. Image processing such as filtering (de-noising) and morphological opening is then performed; the object is identified and its centroid coordinates in the image are obtained. Location of the robot with respect to an environmental reference-object coordinate frame is thereby achieved. Experiments show the effectiveness and accuracy (within 4 cm) of the method.
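A minimal sketch of the image-side steps described above (color segmentation, de-noising, morphological opening, centroid extraction) using OpenCV. The HSV threshold values and kernel sizes are illustrative assumptions, not the authors' parameters.

```python
import cv2
import numpy as np

def locate_object(bgr_image):
    """Segment a colored target, clean up the mask, and return its image centroid."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Assumed color range for the reference object (tune for the real target).
    mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))

    # De-noise, then remove small speckles with a morphological opening.
    mask = cv2.medianBlur(mask, 5)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Centroid of the segmented region from image moments.
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```

Given the centroid in both cameras, the robot's position relative to the reference object would then follow from standard stereo triangulation.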


2013 ◽  
Vol 347-350 ◽  
pp. 883-890 ◽  
Author(s):  
Jie Shen ◽  
Hong Ye Sun ◽  
Hui Bin Wang ◽  
Zhe Chen ◽  
Yi Wei

For the underwater target detection task, a binocular vision system specialized for the underwater optical environment is proposed. The hardware platform comprises an image acquisition unit, an image processing unit, and an upper (host) computer. The accompanying software performs camera calibration, image preprocessing, feature point extraction, stereo matching, and three-dimensional restoration. An improved Harris operator is introduced for the three-dimensional reconstruction, considering the high scattering and strong attenuation of the underwater optical environment. Experimental results show that the improved Harris operator is better adapted to the complex underwater optical environment and that the whole system can obtain the three-dimensional coordinates of underwater targets more efficiently and accurately.
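The paper's specific improvement to the Harris operator is not reproduced here; the following is a minimal sketch of the standard Harris corner response it builds on, with a pre-smoothing step included as a plausible accommodation for scattering-induced noise (an assumption, not the authors' modification).

```python
import cv2
import numpy as np

def harris_feature_points(gray, max_points=200):
    """Standard Harris response; keep the strongest responding pixels."""
    # Light smoothing first: underwater images are noisy due to scattering.
    gray = cv2.GaussianBlur(gray, (5, 5), 1.0)
    response = cv2.cornerHarris(np.float32(gray), blockSize=3, ksize=3, k=0.04)

    # Keep points whose response exceeds a fraction of the global maximum.
    threshold = 0.01 * response.max()
    ys, xs = np.where(response > threshold)
    order = np.argsort(response[ys, xs])[::-1][:max_points]
    return list(zip(xs[order], ys[order]))   # (x, y) pixel coordinates
```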


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5271
Author(s):  
Di Fan ◽  
Yanyang Liu ◽  
Xiaopeng Chen ◽  
Fei Meng ◽  
Xilong Liu ◽  
...  

Three-dimensional (3D) triangulation based on active binocular vision has a growing number of applications in computer vision and robotics. An active binocular vision system with non-fixed cameras needs to calibrate the stereo extrinsic parameters online to perform 3D triangulation, and the accuracy of the stereo extrinsic parameters and of the disparity has a significant impact on 3D triangulation precision. To reduce this impact, we propose a novel eye-gaze-based 3D triangulation method that does not use the stereo extrinsic parameters directly. Instead, we drive both cameras through visual servoing to gaze at a 3D spatial point P, bringing it onto each camera's optical axis. We then obtain the 3D coordinates of P from the intersection of the two optical axes. We have performed experiments comparing against previous disparity-based work, namely the integrated two-pose calibration (ITPC) method, using our robotic bionic eyes. The experiments show that our method achieves results comparable to ITPC.
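A minimal sketch of the final geometric step under the stated idea: once visual servoing has brought P onto both optical axes, P can be estimated as the point closest to both axes (the midpoint of their common perpendicular), since in practice the two axes rarely intersect exactly. The function and variable names are illustrative, not from the paper.

```python
import numpy as np

def intersect_optical_axes(c1, d1, c2, d2):
    """Estimate the gaze point from two optical axes.
    c1, c2: camera centres; d1, d2: axis directions (3-vectors, world frame)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Minimise |c1 + t1*d1 - (c2 + t2*d2)| over the ray parameters t1, t2.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = c1 - c2
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # axes (almost) parallel: no estimate
        return None
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    p1 = c1 + t1 * d1               # closest point on axis 1
    p2 = c2 + t2 * d2               # closest point on axis 2
    return 0.5 * (p1 + p2)          # midpoint = estimate of P
```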


2021 ◽  
Vol 34 (1) ◽  
Author(s):  
Zunan Gu ◽  
Ji Chen ◽  
Chuansong Wu

Current research on binocular vision systems mainly needs to determine the camera's intrinsic parameters before the reconstruction of three-dimensional (3D) objects. The classical Zhang calibration method can hardly account for all errors caused by perspective distortion and lens distortion. In addition, the image-matching algorithm of the binocular vision system still needs to be improved to accelerate the reconstruction of welding pool surfaces. In this paper, a preset coordinate system was utilized for camera calibration instead of Zhang's calibration. The binocular vision system was modified to capture images of welding pool surfaces by suppressing the strong arc interference during gas metal arc welding. By combining and improving the speeded-up robust features (SURF), binary robust invariant scalable keypoints (BRISK), and KAZE algorithms, the feature information of points (i.e., RGB values and pixel coordinates) was extracted as the feature vector of the welding pool surface. Based on the characteristics of the welding images, a mismatch-elimination algorithm was developed to increase the accuracy of image matching. The world coordinates of the matched feature points were calculated to reconstruct the 3D shape of the welding pool surface. The effectiveness and accuracy of the reconstruction of welding pool surfaces were verified by experimental results. This research advances binocular vision algorithms that can accurately reconstruct welding pool surfaces, toward intelligent welding control systems in the future.
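A minimal sketch of the matching-and-triangulation stage using OpenCV's stock KAZE detector and a simple ratio-test filter. The authors' combined SURF/BRISK/KAZE descriptor, arc-interference suppression, and mismatch-elimination rules are not reproduced here, and the projection matrices are assumed to come from the preset-coordinate-system calibration step.

```python
import cv2
import numpy as np

def reconstruct_surface_points(img_left, img_right, P_left, P_right):
    """Detect, match, and triangulate feature points from a stereo pair.
    P_left, P_right: 3x4 projection matrices from the calibration step."""
    detector = cv2.KAZE_create()
    kp_l, des_l = detector.detectAndCompute(img_left, None)
    kp_r, des_r = detector.detectAndCompute(img_right, None)

    # Brute-force matching with Lowe's ratio test as a basic mismatch filter.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_l, des_r, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    pts_l = np.float32([kp_l[m.queryIdx].pt for m in good]).T  # 2xN
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in good]).T  # 2xN

    # Linear triangulation to homogeneous world coordinates, then normalise.
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)
    return (X_h[:3] / X_h[3]).T                                # Nx3 points
```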


Robotica ◽  
2007 ◽  
Vol 25 (5) ◽  
pp. 615-626 ◽  
Author(s):  
Wen-Chung Chang

Robotic manipulators interacting with uncalibrated environments typically have limited positioning and tracking capabilities if control tasks cannot be appropriately encoded using available features in the environment. Specifically, performing 3-D trajectory-following operations with binocular vision seems to require a priori knowledge of pointwise correspondence information between the two image planes; however, such an assumption cannot be made for arbitrary smooth 3-D trajectories. This paper describes how one might enhance autonomous robotic manipulation for 3-D trajectory-following tasks using eye-to-hand binocular visual servoing. Based on a novel encoded error, an image-based feedback control law is proposed without assuming pointwise binocular correspondence information. The proposed control approach can guarantee task precision while employing only an approximately calibrated binocular vision system. The goal of the autonomous task is to drive a tool mounted on the end-effector of the robotic manipulator to follow a visually determined smooth 3-D target trajectory at a desired speed with precision. The proposed control architecture is suitable for applications that require precise 3-D positioning and tracking in unknown environments. Our approach is successfully validated in a real task environment through experiments with an industrial robotic manipulator.
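The paper's encoded-error control law is specific to its binocular setup and is not reproduced here. As a point of reference, the following is a minimal sketch of the classical image-based visual servoing update on which such eye-to-hand controllers are commonly built; this is the textbook law, not the authors' controller, and the interaction matrix L is assumed to be supplied by the feature model.

```python
import numpy as np

def ibvs_velocity(L, error, gain=0.5):
    """Classical image-based visual servoing update: command a camera/tool
    velocity that drives the stacked image-feature error toward zero.
    L     : interaction (image Jacobian) matrix, shape (k, 6)
    error : stacked image-feature error, shape (k,)
    """
    # v = -lambda * pinv(L) @ e; the pseudo-inverse tolerates redundant
    # features and an only approximately calibrated vision system.
    return -gain * np.linalg.pinv(L) @ error
```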

