Development of Structured Light Based Bin-Picking System Using Primitive Models

Author(s):  
Jong-Kyu Oh ◽  
KyeongKeun Baek ◽  
Daesik Kim ◽  
Sukhan Lee
2014 ◽  
Vol 625 ◽  
pp. 496-504 ◽

Author(s):  
Wen Chung Chang ◽  
Chia Hung Wu

In this research, an automated robotic bin-picking system employing active vision for picking up randomly distributed plumbing parts is presented. The system uses an actively controlled single eye-in-hand camera to observe structured light projected onto a set of plumbing parts in a bin. Using image processing and iterative closest point (ICP) algorithms, a single plumbing part that can possibly be taken from the bin is detected. Specifically, by projecting stationary structured-light patterns onto the plumbing objects, features on the part surfaces can be reconstructed by actively moving the eye-in-hand camera while visually tracking those features. An effective 3D segmentation technique is then employed to extract the point cloud of a single plumbing part that can likely be grasped successfully. Once the object point cloud is obtained, the coordinate transformation from the end-effector to the selected plumbing part must be determined for the grasping motion. Based on the point-cloud matching result obtained with the ICP algorithm, the position and orientation of the selected plumbing part can be correctly estimated provided the deviation of the object point cloud from the model point cloud is small. The control command can thus be sent to the robotic manipulator to accomplish the automated bin-picking task. To effectively expand the allowed deviation of the object point cloud, an approximate pose-estimation algorithm is applied before the ICP algorithm. The proposed approach can estimate virtually any pose of the plumbing part and has been successfully validated with an industrial manipulator equipped with eye-in-hand single-camera vision and an LCD projector fixed in the workspace, demonstrating its feasibility and effectiveness. The proposed automated bin-picking system appears to be cost-effective and to have great potential in industrial factory-automation applications.
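The pose-estimation pipeline described here (a coarse alignment followed by ICP refinement against a model point cloud) can be sketched as follows. This is a minimal illustration, not the authors' implementation: Open3D is an assumed library choice, the centroid/PCA coarse alignment is a hypothetical stand-in for the paper's approximate pose-estimation step, and the 5 mm correspondence threshold is a placeholder.

```python
# Minimal sketch: coarse centroid/PCA alignment followed by ICP refinement.
# Open3D and all thresholds are assumptions, not the authors' implementation.
import numpy as np
import open3d as o3d


def coarse_alignment(object_pts: np.ndarray, model_pts: np.ndarray) -> np.ndarray:
    """Approximate pose from centroids and principal axes (hypothetical stand-in
    for the paper's approximate pose-estimation step)."""
    obj_c, mdl_c = object_pts.mean(axis=0), model_pts.mean(axis=0)
    # Principal axes of each cloud via SVD of the centered points.
    _, _, v_obj = np.linalg.svd(object_pts - obj_c)
    _, _, v_mdl = np.linalg.svd(model_pts - mdl_c)
    rot = v_obj.T @ v_mdl                    # rotate model axes onto object axes
    if np.linalg.det(rot) < 0:               # keep a proper rotation (det = +1)
        v_mdl[2, :] *= -1
        rot = v_obj.T @ v_mdl
    pose = np.eye(4)
    pose[:3, :3] = rot
    pose[:3, 3] = obj_c - rot @ mdl_c
    return pose                              # model -> object initial guess


def estimate_pose(object_pts: np.ndarray, model_pts: np.ndarray) -> np.ndarray:
    obj = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(object_pts))
    mdl = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_pts))
    init = coarse_alignment(object_pts, model_pts)
    # Point-to-point ICP refinement; the 5 mm correspondence distance is a guess.
    result = o3d.pipelines.registration.registration_icp(
        mdl, obj, max_correspondence_distance=0.005, init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation             # 4x4 pose of the part in the camera frame
```

In a real cell, the resulting model-to-camera transform would still have to be composed with a hand-eye calibration to obtain the end-effector-to-part transformation used for the grasping motion.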


ICCAS 2010 ◽  
2010 ◽  
Author(s):  
Jong-Kyu Oh ◽  
Chan-Ho Lee ◽  
Sang-Hun Lee ◽  
Sung-Hyun Jung ◽  
Daesik Kim ◽  
...  

Author(s):  
Jiaxin Guo ◽  
Lian Fu ◽  
Mingkai Jia ◽  
Kaijun Wang ◽  
Shan Liu

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 706 ◽  
Author(s):  
Ping Jiang ◽  
Yoshiyuki Ishihara ◽  
Nobukatsu Sugiyama ◽  
Junji Oaki ◽  
Seiji Tokura ◽  
...  

Bin-picking of small parcels and other textureless planar-faced objects is a common task in warehouses. A general color-image-based vision-guided robot picking system requires feature extraction and goal-image preparation for various objects. However, feature extraction for goal-image matching is difficult for textureless objects, and preparing huge numbers of goal images in advance is impractical in a warehouse. In this paper, we propose a novel depth-image-based vision-guided robot bin-picking system for textureless planar-faced objects. Our method uses a deep convolutional neural network (DCNN) model, trained on 15,000 annotated depth images synthetically generated in a physics simulator, to directly predict grasp points without object segmentation. Unlike previous studies that predicted grasp points for a robot suction hand with only one vacuum cup, our DCNN also predicts optimal grasp patterns for a hand with two vacuum cups (left cup on, right cup on, or both cups on). Further, we propose a surface feature descriptor that extracts surface features (center position and normal) and refines the predicted grasp-point position, removing the need for texture features in vision-guided robot control and for sim-to-real modification in DCNN model training. Experimental results demonstrate the efficiency of our system: a robot with 7 degrees of freedom can pick randomly posed textureless boxes in a cluttered environment with a 97.5% success rate at speeds exceeding 1,000 pieces per hour.
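As an illustration of the kind of prediction model the abstract describes, the sketch below shows a small fully convolutional network that maps a single-channel depth image to per-pixel scores for the three suction patterns (left cup, right cup, both cups). The architecture, layer sizes, and names are purely illustrative assumptions; the authors' actual DCNN, training procedure, and surface-feature refinement are not reproduced here.

```python
# Illustrative sketch (not the authors' network): a small fully convolutional
# model mapping a single-channel depth image to per-pixel scores for the three
# suction patterns (left cup, right cup, both cups).
import torch
import torch.nn as nn


class GraspPatternNet(nn.Module):
    """Hypothetical encoder-decoder DCNN for depth-based grasp prediction."""

    def __init__(self, patterns: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, patterns, 2, stride=2),  # one score map per pattern
        )

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (B, 1, H, W) -> scores: (B, patterns, H, W)
        return self.decoder(self.encoder(depth))


def select_grasp(scores: torch.Tensor):
    """Pick the highest-scoring (pattern, pixel) from the predicted score maps."""
    probs = torch.sigmoid(scores[0])                  # (patterns, H, W)
    flat_idx = torch.argmax(probs).item()
    pattern, rest = divmod(flat_idx, probs.shape[1] * probs.shape[2])
    v, u = divmod(rest, probs.shape[2])
    return pattern, (u, v)                            # pattern id and pixel (col, row)


if __name__ == "__main__":
    net = GraspPatternNet()
    depth = torch.rand(1, 1, 224, 224)                # synthetic depth-image stand-in
    pattern, (u, v) = select_grasp(net(depth))
    print(f"pattern={pattern} grasp pixel=({u}, {v})")
```

The selected grasp pixel would then be refined using local depth information (center position and surface normal), mirroring the surface-feature-descriptor step described in the abstract.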

