A CNN-Based Grasp Planning Method for Random Picking of Unknown Objects with a Vacuum Gripper

2021 · Vol 103 (4)
Author(s): Hui Zhang, Jef Peeters, Eric Demeester, Karel Kellens


2013 · Vol 18 (3) · pp. 1050-1059
Author(s): Vincenzo Lippiello, Fabio Ruggiero, Bruno Siciliano, Luigi Villani

Author(s): Weidong Guo, Mileta M. Tomovic, Jiting Li

The paper presents a method for planning robotic dexterous hand grasping tasks, using the Beihang University BH-4 dexterous hand as an example. The grasp planning method is devised through modeling and simulation and is experimentally verified on a physical prototype. The paper presents forward and inverse kinematic solutions for the BH-4's 4-DOF finger, including the transformation matrix between the palm coordinate system and the finger base coordinate system. In addition, a concrete manipulation method is presented using ball grasping as an example. The simulation results and physical experiments verify that the inverse kinematic solution is correct and that the kinematic grasping and operation planning is valid and feasible. Finally, an experiment with an integrated system combining a robot arm and the dexterous hand is carried out. The experimental results show that more complicated grasping tasks can be performed by a dexterous hand integrated into a robot arm system.
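As a rough illustration of the kind of finger kinematics described above, the sketch below composes Denavit-Hartenberg transforms for a 4-DOF finger mounted on a palm frame and solves the positional inverse kinematics numerically. The palm-to-finger-base transform, link lengths, and joint layout are placeholder assumptions, not the BH-4's actual parameters.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Hypothetical palm-to-finger-base transform and link lengths (metres);
# the real BH-4 values would come from the hand's mechanical drawings.
T_PALM_BASE = np.eye(4)
T_PALM_BASE[:3, 3] = [0.03, 0.0, 0.02]
LINKS = [0.05, 0.04, 0.03, 0.02]

def forward_kinematics(q):
    """Fingertip pose in the palm frame for joint angles q (4-vector)."""
    T = T_PALM_BASE.copy()
    # Assumed joint layout: one abduction joint followed by three flexion joints.
    alphas = [np.pi / 2, 0.0, 0.0, 0.0]
    for theta, a, alpha in zip(q, LINKS, alphas):
        T = T @ dh_transform(theta, 0.0, a, alpha)
    return T

def inverse_kinematics(p_goal, q0=None, iters=200, damping=1e-2):
    """Damped-least-squares IK for the fingertip position only."""
    q = np.zeros(4) if q0 is None else np.array(q0, dtype=float)
    for _ in range(iters):
        p = forward_kinematics(q)[:3, 3]
        err = np.asarray(p_goal, dtype=float) - p
        if np.linalg.norm(err) < 1e-5:
            break
        # Numerical position Jacobian (3 x 4) via finite differences.
        J = np.zeros((3, 4))
        eps = 1e-6
        for i in range(4):
            dq = np.zeros(4)
            dq[i] = eps
            J[:, i] = (forward_kinematics(q + dq)[:3, 3] - p) / eps
        q += J.T @ np.linalg.solve(J @ J.T + damping * np.eye(3), err)
    return q
```

Running `inverse_kinematics([0.08, 0.02, 0.05])` returns joint angles whose forward kinematics reproduces the goal position, which is how a simulated ball-grasping pose could be checked against the model before moving the physical hand.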


2017 · Vol 14 (1) · pp. 172988141668713
Author(s): Peng Jia, Weili Li, Gang Wang, Songyu Li

A grasp planning method based on the volume and flattening of a generalized force ellipsoid is proposed to improve the grasping ability of a dexterous robotic hand. First, according to the general solution of the joint torques for a dexterous robotic hand, a grasping indicator for the dexterous hand is proposed: the maximum volume of the generalized external force ellipsoid together with the minimum volume of the generalized contact internal force ellipsoid during accepted flattening. Second, a task-oriented optimal grasp planning method is established using the grasping indicator as the objective function. Finally, a simulation analysis and a grasping experiment are performed. The results show that when the grasping experiment is conducted with the grasping configuration and the contact point positions optimized using the proposed grasping indicator, the root-mean-square values of the joint torques and contact internal forces of the dexterous hand are at a minimum. The effectiveness of the proposed grasp planning method is thus demonstrated.
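A minimal sketch of the geometric fact such an indicator builds on: the image of the unit torque ball under a linear map is an ellipsoid whose volume is proportional to the product of the map's singular values. The maps `A_ext` and `A_int` and the scoring rule below are hypothetical stand-ins for the paper's generalized external and internal force ellipsoids, not its actual formulation.

```python
import numpy as np

def ellipsoid_volume(A):
    """Relative volume of the ellipsoid obtained by mapping the unit ball of
    joint torques through the linear map A (f = A @ tau, ||tau|| <= 1).
    The semi-axes of that ellipsoid are the singular values of A, so its
    volume is proportional to their product."""
    s = np.linalg.svd(A, compute_uv=False)
    return float(np.prod(s))

def grasp_indicator(A_ext, A_int, w=1.0):
    """Simplified task-oriented grasp score: favour configurations with a
    large external-force ellipsoid and a small contact internal-force
    ellipsoid (larger score = better), following the spirit of the paper."""
    return ellipsoid_volume(A_ext) - w * ellipsoid_volume(A_int)

# Example: score two hypothetical grasp configurations, each described by a
# torque-to-external-force map and a torque-to-internal-force map.
rng = np.random.default_rng(0)
configs = [(rng.standard_normal((3, 6)), rng.standard_normal((3, 6)))
           for _ in range(2)]
best = max(configs, key=lambda c: grasp_indicator(*c))
```

In an actual planner this score would be evaluated over candidate grasp configurations and contact point positions, with the best-scoring candidate selected for execution.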


Robotica · 2008 · Vol 26 (3) · pp. 331-344
Author(s): Shahram Salimi, Gary M. Bone

Three-dimensional (3D) enveloping grasps for dexterous robotic hands possess several advantages over other types of grasps. This paper describes a new method for kinematic 3D enveloping grasp planning. A new idea for grading the 3D grasp search domain for a given object is proposed. The grading method analyzes the curvature pattern and effective diameter of the object and grades object regions according to their suitability for grasping. A new approach is also proposed for modeling the fingers of the dexterous hand. The grasp planning method is demonstrated for a three-fingered, six degree-of-freedom dexterous hand and several 3D objects containing both convex and concave surface patches. Human-like, high-quality grasps are generated in less than 20 s per object.
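One possible reading of the grading step, as a minimal sketch: each candidate object region receives a score from its curvature pattern and effective diameter, and only the best-graded regions enter the grasp search. The hand span limits, weights, and scoring formula below are illustrative assumptions rather than the paper's actual criteria.

```python
import numpy as np

# Hypothetical hand parameters (metres); the real limits depend on the
# three-fingered hand actually used.
HAND_MAX_SPAN = 0.12
HAND_MIN_SPAN = 0.02

def grade_region(mean_curvature, effective_diameter,
                 curvature_weight=1.0, diameter_weight=1.0):
    """Grade one candidate object region for enveloping grasps.

    A region scores well when its surface is not sharply curved and its
    effective diameter fits comfortably within the hand span."""
    # Penalise sharp curvature (large |mean curvature|, in 1/m).
    curvature_score = 1.0 / (1.0 + curvature_weight * abs(mean_curvature))
    # Reject diameters outside the graspable range, penalise extremes within it.
    if not (HAND_MIN_SPAN < effective_diameter < HAND_MAX_SPAN):
        return 0.0
    mid = 0.5 * (HAND_MIN_SPAN + HAND_MAX_SPAN)
    diameter_score = 1.0 - diameter_weight * abs(effective_diameter - mid) / mid
    return curvature_score * max(diameter_score, 0.0)

# Grade candidate regions and restrict the grasp search to the best ones.
regions = [
    {"name": "handle", "mean_curvature": 5.0,  "effective_diameter": 0.04},
    {"name": "rim",    "mean_curvature": 60.0, "effective_diameter": 0.09},
]
ranked = sorted(regions, key=lambda r: -grade_region(r["mean_curvature"],
                                                     r["effective_diameter"]))
```

Restricting the search domain this way is what keeps the reported planning time low: kinematic grasp synthesis only runs on the handful of regions that grade well, rather than over the whole object surface.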


Author(s): Xiaoqian Huang, Mohamad Halwani, Rajkumar Muthusamy, Abdulla Ayyad, Dewald Swart, ...

AbstractRobotic vision plays a key role for perceiving the environment in grasping applications. However, the conventional framed-based robotic vision, suffering from motion blur and low sampling rate, may not meet the automation needs of evolving industrial requirements. This paper, for the first time, proposes an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. With advantages of microsecond-level sampling rate and no motion blur of event camera, the model-based and model-free approaches are developed for known and unknown objects’ grasping respectively. The event-based multi-view approach is used to localize the objects in the scene in the model-based approach, and then point cloud processing is utilized to cluster and register the objects. The proposed model-free approach, on the other hand, utilizes the developed event-based object segmentation, visual servoing and grasp planning to localize, align to, and grasp the targeting object. Using a UR10 robot with an eye-in-hand neuromorphic camera and a Barrett hand gripper, the proposed approaches are experimentally validated with objects of different sizes. Furthermore, it demonstrates robustness and a significant advantage over grasping with a traditional frame-based camera in low-light conditions.
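For the model-based branch, a minimal sketch of the cluster-then-register step is shown below using Open3D, assuming a scene point cloud is already available (the paper reconstructs it from event-based multi-view data, which this sketch does not cover). File names, DBSCAN parameters, and ICP thresholds are placeholders, not the paper's settings.

```python
import numpy as np
import open3d as o3d

# Hypothetical inputs: a reconstructed scene point cloud and a model point
# cloud of one known object.  File names are placeholders.
scene = o3d.io.read_point_cloud("scene.pcd")
model = o3d.io.read_point_cloud("object_model.pcd")

# 1) Cluster the scene into candidate objects (DBSCAN on 3-D points).
labels = np.array(scene.cluster_dbscan(eps=0.01, min_points=30))

# 2) Register the object model against each cluster with point-to-point ICP
#    and keep the best-fitting cluster as the localized known object.
best_rmse, best_pose = np.inf, np.eye(4)
for label in range(labels.max() + 1):
    cluster = scene.select_by_index(np.where(labels == label)[0].tolist())
    result = o3d.pipelines.registration.registration_icp(
        model, cluster,
        max_correspondence_distance=0.01,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    if result.fitness > 0.5 and result.inlier_rmse < best_rmse:
        best_rmse, best_pose = result.inlier_rmse, result.transformation

# best_pose now holds the model-to-scene transform from which a grasp pose
# for the known object could be derived (identity if no cluster matched).
```

The model-free branch replaces this registration step with event-based segmentation and visual servoing, so no object model is needed for unknown objects.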

