Robotic Grasp Pose Detection Using Deep Learning

Author(s): Shehan Caldera, Alexander Rassau, Douglas Chai
Author(s): Kotaro MAYUMI, Takayuki MATSUNO, Tetsushi KAMEGAWA, Takao HIRAKI, Yuichiro TODA, ...

2021
Author(s): S. Sankara Narayanan, Devendra Kumar Misra, Kartik Arora, Harsh Rai

2018 · Vol 2 (3) · pp. 57
Author(s): Shehan Caldera, Alexander Rassau, Douglas Chai

For robots to attain more general-purpose utility, grasping is a necessary skill to master. Such general-purpose robots may use their perception abilities to visually identify grasps for a given object. A grasp describes how a robotic end-effector can be arranged to securely grab an object and successfully lift it without slippage. Traditionally, grasp detection requires expert human knowledge to analytically form the task-specific algorithm, but this is an arduous and time-consuming approach. During the last five years, deep learning methods have enabled significant advancements in robotic vision, natural language processing, and automated driving applications. The successful results of these methods have driven robotics researchers to explore the use of deep learning methods in task-generalised robotic applications. This paper reviews the current state of the art with regard to the application of deep learning methods to generalised robotic grasping and discusses how each element of the deep learning approach has improved the overall performance of robotic grasp detection. Several of the most promising approaches are evaluated, and the one-shot detection method is identified as the most suitable for real-time grasp detection. The availability of suitable volumes of appropriate training data is identified as a major obstacle to the effective utilisation of deep learning approaches, and the use of transfer learning techniques is proposed as a potential mechanism to address this. Finally, current trends in the field and potential future research directions are discussed.
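As a concrete illustration of the kind of output such detectors produce, the sketch below shows the 5-parameter oriented grasp rectangle (centre position, orientation, gripper opening width and plate height) that is widely used as the grasp representation in this literature. The class name and the example values are illustrative assumptions, not taken from the reviewed paper.

```python
# Hypothetical sketch of the 5-parameter grasp rectangle (x, y, theta, w, h)
# commonly regressed by deep-learning grasp detectors from RGB-D images.
from dataclasses import dataclass
import math

@dataclass
class GraspRectangle:
    x: float      # grasp centre in image coordinates (pixels)
    y: float
    theta: float  # gripper orientation about the camera axis (radians)
    w: float      # gripper opening width (pixels)
    h: float      # gripper plate height (pixels)

    def corners(self):
        """Return the four rectangle corners, e.g. for visualisation or overlap checks."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        half = [(-self.w / 2, -self.h / 2), (self.w / 2, -self.h / 2),
                (self.w / 2, self.h / 2), (-self.w / 2, self.h / 2)]
        return [(self.x + dx * c - dy * s, self.y + dx * s + dy * c) for dx, dy in half]

# A "one-shot" detector in this setting regresses these five numbers (plus a
# graspability score) directly from the image in a single forward pass.
if __name__ == "__main__":
    g = GraspRectangle(x=120.0, y=85.0, theta=math.radians(30), w=60.0, h=20.0)
    print(g.corners())
```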


2017 · Vol 36 (13-14) · pp. 1455-1473
Author(s): Andreas ten Pas, Marcus Gualtieri, Kate Saenko, Robert Platt

Recently, a number of grasp detection methods have been proposed that can be used to localize robotic grasp configurations directly from sensor data without estimating object pose. The underlying idea is to treat grasp perception analogously to object detection in computer vision. These methods take as input a noisy and partially occluded RGBD image or point cloud and produce as output pose estimates of viable grasps, without assuming a known CAD model of the object. Although these methods generalize grasp knowledge to new objects well, they have not yet been demonstrated to be reliable enough for wide use. Many grasp detection methods achieve grasp success rates (grasp successes as a fraction of the total number of grasp attempts) between 75% and 95% for novel objects presented in isolation or in light clutter. Not only are these success rates too low for practical grasping applications, but the light clutter scenarios that are evaluated often do not reflect the realities of real-world grasping. This paper proposes a number of innovations that together result in an improvement in grasp detection performance. The specific improvement in performance due to each of our contributions is quantitatively measured either in simulation or on robotic hardware. Ultimately, we report a series of robotic experiments that average a 93% end-to-end grasp success rate for novel objects presented in dense clutter.
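To make the "grasp detection as object detection" view described above more tangible, here is a minimal, hypothetical sketch of a candidate-sampling-and-scoring pipeline over a point cloud. The sampling heuristic and the stand-in scoring function are illustrative assumptions, not the authors' actual method.

```python
# Sketch: sample candidate grasp poses from a point cloud, score each candidate,
# and keep the highest-scoring ones. A real system would replace the scoring
# heuristic with a learned classifier (e.g. a CNN over local geometry).
import numpy as np

def sample_candidates(points: np.ndarray, num_candidates: int = 100, seed: int = 0):
    """Pick surface points and attach a random approach direction to each."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=num_candidates, replace=True)
    centers = points[idx]
    directions = rng.normal(size=(num_candidates, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    return centers, directions

def score_candidates(points: np.ndarray, centers: np.ndarray, radius: float = 0.03):
    """Stand-in for a learned grasp classifier: score a candidate by how much
    local surface support surrounds its centre."""
    scores = []
    for c in centers:
        in_ball = np.linalg.norm(points - c, axis=1) < radius
        scores.append(in_ball.mean())  # a learned model would go here
    return np.asarray(scores)

def detect_grasps(points: np.ndarray, top_k: int = 5):
    centers, directions = sample_candidates(points)
    scores = score_candidates(points, centers)
    best = np.argsort(scores)[::-1][:top_k]
    return centers[best], directions[best], scores[best]

if __name__ == "__main__":
    cloud = np.random.rand(2000, 3) * 0.2  # fake 20 cm cube of points
    c, d, s = detect_grasps(cloud)
    print("top grasp centres:\n", c)
```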


2018 · Vol 8 (7) · pp. 1081
Author(s): Jaishankar Bharatharaj, Loulin Huang, Rajesh Mohan, Thejus Pathmakumar, Chris Krägeloh, ...

2021 · Vol ahead-of-print (ahead-of-print)
Author(s): Xuan Zhao, Hancheng Yu, Mingkui Feng, Gang Sun

Purpose
Robot automatic grasping has important application value in industrial applications. Recent works have explored the performance of deep learning for robotic grasp detection. They usually use oriented anchor boxes (OABs) as the detection prior and achieve better performance than previous works. However, the parameters of their regression loss are expressed in different coordinate frames, which may affect the regression accuracy. This paper aims to propose an oriented regression loss to solve this inconsistency among the loss parameters.

Design/methodology/approach
In the oriented loss, the center-coordinate errors between the ground-truth grasp rectangle and the predicted grasp rectangle are rotated onto the vertical and horizontal axes of the OAB. The direction error is then used as an orientation factor and combined with the errors of the rotated center coordinates and the width and height of the predicted grasp rectangle.

Findings
The proposed oriented regression loss is evaluated in the YOLO-v3 framework on the grasp detection task. It yields state-of-the-art performance, with an accuracy of 98.8% and a speed of 71 frames per second on a GTX 1080Ti, on the Cornell dataset.

Originality/value
This paper proposes an oriented loss to improve the regression accuracy of deep learning for grasp detection. The authors apply the proposed deep grasp network to a visual-servo intelligent crane. The experimental results indicate that the approach is accurate and robust enough for real-time grasping applications.
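A rough sketch of the rotation idea described in the Design/methodology/approach section is given below: the centre-offset errors are rotated into the anchor's axes before being combined with the width, height and angle errors. The smooth-L1 form and the particular way the orientation factor is applied are assumptions for illustration, not the exact formulation from the paper.

```python
# Sketch of an oriented regression loss: express the centre error in the
# oriented anchor box's own frame so all loss parameters share one coordinate
# system, then let the direction error modulate the geometric terms.
import numpy as np

def smooth_l1(x, beta=1.0):
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * ax * ax / beta, ax - 0.5 * beta)

def oriented_regression_loss(pred, gt, anchor_theta):
    """pred, gt: (x, y, w, h, theta) grasp rectangles; anchor_theta: OAB angle (rad)."""
    dx, dy = pred[0] - gt[0], pred[1] - gt[1]
    # Rotate the centre error onto the anchor's axes.
    c, s = np.cos(anchor_theta), np.sin(anchor_theta)
    du = c * dx + s * dy    # error along the anchor orientation
    dv = -s * dx + c * dy   # error perpendicular to it
    dw, dh = pred[2] - gt[2], pred[3] - gt[3]
    dtheta = pred[4] - gt[4]
    # Use the direction error as an orientation factor on the geometric terms
    # (one plausible choice; the paper's exact combination may differ).
    orientation_factor = 1.0 + np.abs(np.sin(dtheta))
    geo = smooth_l1(np.array([du, dv, dw, dh])).sum()
    return orientation_factor * geo + smooth_l1(np.array([dtheta])).sum()

if __name__ == "__main__":
    pred = np.array([101.0, 52.0, 30.0, 12.0, np.radians(40)])
    gt   = np.array([100.0, 50.0, 28.0, 10.0, np.radians(35)])
    print(oriented_regression_loss(pred, gt, anchor_theta=np.radians(45)))
```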

