Vision Based Bin-Picking System Supported by Three Dimensional Circle Detection and Previously Collision Avoidance.

2000, Vol 18 (7), pp. 995-1002
Author(s):  
Toshikazu Onda ◽  
Nobuyuki Fujiwara ◽  
Kiyohide Abe ◽  
Nobuhito Mori
Author(s):  
Jiaxin Guo ◽  
Lian Fu ◽  
Mingkai Jia ◽  
Kaijun Wang ◽  
Shan Liu

Author(s):  
Jun Tang ◽  
Jiayi Sun ◽  
Cong Lu ◽  
Songyang Lao

Multi-unmanned aerial vehicle trajectory planning is one of the most complex global optimization problems in multi-unmanned aerial vehicle coordinated control. Recent research on trajectory planning reveals persistent theoretical and practical problems. To mitigate them, this paper proposes a novel optimized artificial potential field algorithm for multi-unmanned aerial vehicle operations in a three-dimensional dynamic space. The study models the unmanned aerial vehicles and obstacles as negatively charged spheres and cylinders, respectively, and the targets as positively charged spheres. However, the conventional artificial potential field algorithm is restricted to single unmanned aerial vehicle trajectory planning in two-dimensional space and usually fails to ensure collision avoidance. To address this, we propose a method with a distance factor and a jump strategy that resolves common problems such as unreachable targets and ensures that the unmanned aerial vehicle does not collide with obstacles. The method treats companion unmanned aerial vehicles as dynamic obstacles to realize collaborative trajectory planning. In addition, it mitigates jitter using a dynamic step-adjustment method and a climb strategy. The method is validated in quantitative simulation models of a three-dimensional urban environment and generates reasonable results.
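The artificial potential field idea described above can be sketched as a simple 3-D planner: an attractive force pulls the vehicle toward the target, repulsive forces push it away from nearby obstacles, and a distance factor scales the repulsion by the distance to the goal so it vanishes at the target (one common fix for the unreachable-target problem). The parameter names (`k_att`, `k_rep`, `rho0`), the point-obstacle model, and the step-capping rule below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def apf_step(pos, goal, obstacles, step=0.2,
             k_att=1.0, k_rep=100.0, rho0=2.0):
    """One step of a 3-D artificial potential field planner (sketch).

    obstacles: list of 3-D points with influence radius rho0.
    The repulsive term is scaled by the distance to the goal
    (the "distance factor"), so it fades out near the target.
    """
    to_goal = goal - pos
    d_goal = np.linalg.norm(to_goal)
    if d_goal == 0:
        return pos
    f_att = k_att * to_goal                      # attractive force
    f_rep = np.zeros(3)
    for obs in obstacles:
        away = pos - obs
        rho = np.linalg.norm(away)
        if 0 < rho < rho0:                       # inside influence radius
            f_rep += (k_rep * (1.0 / rho - 1.0 / rho0) / rho**2
                      * (away / rho) * d_goal)   # distance-factor scaling
    force = f_att + f_rep
    norm = np.linalg.norm(force)
    if norm == 0:                                # local minimum: no move
        return pos
    # cap the step by the remaining distance (crude step adjustment)
    return pos + min(step, d_goal) * force / norm
```

Iterating `apf_step` from a start position drives the vehicle around obstacle influence regions and onto the target; companion vehicles can be appended to `obstacles` each iteration to obtain the collaborative behavior the abstract describes.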


Author(s):  
Jong-Kyu Oh ◽  
KyeongKeun Baek ◽  
Daesik Kim ◽  
Sukhan Lee

Sensors, 2020, Vol 20 (3), pp. 706
Author(s):  
Ping Jiang ◽  
Yoshiyuki Ishihara ◽  
Nobukatsu Sugiyama ◽  
Junji Oaki ◽  
Seiji Tokura ◽  
...  

Bin-picking of small parcels and other textureless planar-faced objects is a common task in warehouses. A general color image–based vision-guided robot picking system requires feature extraction and goal image preparation for various objects. However, feature extraction for goal image matching is difficult for textureless objects, and preparing huge numbers of goal images in advance is impractical in a warehouse. In this paper, we propose a novel depth image–based vision-guided robot bin-picking system for textureless planar-faced objects. Our method uses a deep convolutional neural network (DCNN) model, trained on 15,000 annotated depth images synthetically generated in a physics simulator, to directly predict grasp points without object segmentation. Unlike previous studies that predicted grasp points for a robot suction hand with only one vacuum cup, our DCNN also predicts optimal grasp patterns for a hand with two vacuum cups (left cup on, right cup on, or both cups on). Further, we propose a surface feature descriptor that extracts surface features (center position and normal) and refines the predicted grasp point position, removing the need for texture features in vision-guided robot control and for sim-to-real modification in DCNN model training. Experimental results demonstrate the effectiveness of our system: a robot with 7 degrees of freedom picks randomly posed textureless boxes in a cluttered environment with a 97.5% success rate at speeds exceeding 1000 pieces per hour.
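The surface feature the abstract mentions (center position and normal of a planar patch around a predicted grasp point) can be sketched as a least-squares plane fit over the depth points in that patch. The function name and the SVD-based fit below are illustrative assumptions, not the authors' exact descriptor.

```python
import numpy as np

def surface_feature(points):
    """Sketch of a planar surface feature descriptor.

    points: (N, 3) array of 3-D points from a depth patch around a
    predicted grasp point. Returns the patch center and unit normal
    from a least-squares plane fit (SVD of the centered points).
    """
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    # Right singular vector for the smallest singular value of the
    # centered cloud is the fitted plane's normal.
    _, _, vt = np.linalg.svd(pts - center)
    normal = vt[-1]
    if normal[2] < 0:          # orient the normal toward the camera (+z)
        normal = -normal
    return center, normal
```

In a pipeline like the one described, such a center/normal pair could refine the DCNN's pixel-level grasp prediction into a 3-D suction pose without any texture features.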

