The Location of Quadruped Robot with Hand-Fused-Foot Based on Binocular Vision

2013 ◽  
Vol 694-697 ◽  
pp. 1925-1930
Author(s):  
Xin Jie Wang ◽  
Zhi Lin Yang ◽  
Jie Liu

Robot localization is a key technology for the quadruped robot with Hand-fused-Foot. A localization method based on a binocular vision system is studied for this robot. After an image is obtained by a single camera, the object is segmented using a feature-extraction method based on color. Image processing such as filtering (de-noising) and morphological opening is then performed, the object is identified, and its centroid coordinates in the image are obtained. Localization of the robot with respect to the environment reference (object coordinate frame) is thereby achieved. Experiments show the effectiveness of the method and its accuracy (within 4 cm).
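
Editorial note: the abstract outlines a pipeline of color segmentation, de-noising, morphological opening, centroid extraction, and triangulation. The sketch below illustrates that pipeline with OpenCV; the HSV threshold range, kernel size, and the projection matrices P_left and P_right are placeholder assumptions, not values from the paper.

```python
# Minimal sketch of the color-segmentation-and-centroid localization pipeline,
# assuming a calibrated stereo pair with known 3x4 projection matrices.
import cv2
import numpy as np

def object_centroid(bgr_image, hsv_low=(0, 120, 70), hsv_high=(10, 255, 255)):
    """Segment the target by color, de-noise, open, and return its image centroid (u, v)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))  # color threshold (assumed range)
    mask = cv2.medianBlur(mask, 5)                                   # filtering (de-noising)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)            # morphological opening
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                                                  # target not found
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def locate_object(left_img, right_img, P_left, P_right):
    """Triangulate the object's 3-D position from its centroids in both camera views."""
    cl, cr = object_centroid(left_img), object_centroid(right_img)
    if cl is None or cr is None:
        return None
    pl = np.array(cl, dtype=np.float64).reshape(2, 1)
    pr = np.array(cr, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)             # homogeneous 4x1 point
    return (X_h[:3] / X_h[3]).ravel()                                # (X, Y, Z) in the reference frame
```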

2012 ◽  
Vol 522 ◽  
pp. 634-637
Author(s):  
Ke Yin Chen ◽  
Xiang Jun Zou ◽  
Li Juan Chen

In research on picking-robot binocular vision systems, camera calibration is an indispensable step and the basis for locating the target object and reconstructing its three-dimensional structure from the robot's stereo vision in follow-up work. A camera calibration algorithm that is both highly accurate and simple is therefore of great significance. However, most existing calibration algorithms require a reference object (a calibration target) to be placed in front of the camera, which is inconvenient or almost impossible in some cases. Therefore, an online calibration algorithm for the picking robot based on the visual scene was proposed by studying the working-environment characteristics of the picking robot's binocular vision system and the invariants of projective geometry. Experimental results showed that the algorithm's calibration accuracy and precision meet the requirements of camera calibration for the robot's binocular vision system in complex environments.
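
Editorial note: the abstract does not give the algorithm's details. As a rough illustration of the targetless idea (calibrating from the scene itself rather than from a posed target), the sketch below recovers the epipolar geometry of a stereo pair from matched natural features with OpenCV. The feature detector, matching ratio, and RANSAC threshold are assumptions; the paper's own projective-invariant algorithm is not reproduced here.

```python
# Scene-based (targetless) estimation of stereo epipolar geometry:
# match natural features between the two views and fit a fundamental matrix,
# instead of observing a calibration target.
import cv2
import numpy as np

def estimate_epipolar_geometry(left_gray, right_gray, ratio=0.75):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(left_gray, None)
    kp2, des2 = orb.detectAndCompute(right_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)

    # Lowe ratio test to keep distinctive matches only
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    if len(good) < 8:
        return None, None, None                                   # not enough scene structure

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Fundamental matrix constrains corresponding points: x2^T F x1 = 0
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = mask.ravel() == 1
    return F, pts1[inliers], pts2[inliers]
```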


Robotica ◽  
2007 ◽  
Vol 25 (5) ◽  
pp. 615-626 ◽  
Author(s):  
Wen-Chung Chang

SUMMARY: Robotic manipulators that interact with uncalibrated environments typically have limited positioning and tracking capabilities if control tasks cannot be appropriately encoded using available features in the environment. Specifically, to perform 3-D trajectory-following operations employing binocular vision, it seems necessary to have a priori knowledge of pointwise correspondence information between the two image planes. However, such an assumption cannot be made for arbitrary smooth 3-D trajectories. This paper describes how one might enhance autonomous robotic manipulation for 3-D trajectory-following tasks using eye-to-hand binocular visual servoing. Based on a novel encoded error, an image-based feedback control law is proposed without assuming pointwise binocular correspondence information. The proposed control approach can guarantee task precision while employing only an approximately calibrated binocular vision system. The goal of the autonomous task is to drive a tool mounted on the end-effector of the robotic manipulator to follow a visually determined smooth 3-D target trajectory at a desired speed with precision. The proposed control architecture is suitable for applications that require precise 3-D positioning and tracking in unknown environments. Our approach is successfully validated in a real task environment by performing experiments with an industrial robotic manipulator.
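
Editorial note: the paper's encoded error and control law are not reproduced here. For orientation only, the snippet below sketches the textbook image-based visual servoing update, v = -λ L⁺ e, where L is an estimated interaction matrix (image Jacobian) and e the image-space error. The gain, depth estimate, and feature parameterization are assumptions.

```python
# Generic image-based visual servoing (IBVS) step: command a tool/camera twist
# that drives the image-space feature error toward zero. This is the standard
# IBVS law, NOT the paper's novel encoded-error controller.
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at estimated depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired_features, L_hat, gain=0.5):
    """Return a 6-vector twist (vx, vy, vz, wx, wy, wz): v = -gain * pinv(L_hat) @ e."""
    e = np.asarray(features, float) - np.asarray(desired_features, float)  # image-space error
    return -gain * np.linalg.pinv(L_hat) @ e
```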


2014 ◽  
Vol 22 (8) ◽  
pp. 9134 ◽  
Author(s):  
Yi Cui ◽  
Fuqiang Zhou ◽  
Yexin Wang ◽  
Liu Liu ◽  
He Gao
