Robotic grasp detection using deep learning and geometry model of soft hand

Author(s): Hong-Ying Wang, Wing-Kuen Ling
2018, Vol 2 (3), pp. 57


Author(s): Shehan Caldera, Alexander Rassau, Douglas Chai

For robots to attain more general-purpose utility, grasping is a necessary skill to master. Such general-purpose robots may use their perception abilities to visually identify grasps for a given object. A grasp describes how a robotic end-effector can be arranged to securely grab an object and successfully lift it without slippage. Traditionally, grasp detection has required expert human knowledge to analytically construct a task-specific algorithm, an arduous and time-consuming approach. During the last five years, deep learning methods have enabled significant advancements in robotic vision, natural language processing, and automated driving applications. The success of these methods has driven robotics researchers to explore their use in task-generalised robotic applications. This paper reviews the current state of the art in the application of deep learning methods to generalised robotic grasping and discusses how each element of the deep learning approach has improved the overall performance of robotic grasp detection. Several of the most promising approaches are evaluated, and the most suitable for real-time grasp detection is identified as the one-shot detection method. The availability of suitable volumes of appropriate training data is identified as a major obstacle to effective utilisation of deep learning approaches, and the use of transfer learning techniques is proposed as a potential mechanism to address this. Finally, current trends in the field and potential future research directions are discussed.
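The planar grasp described in this abstract is most often encoded in this literature as a five-dimensional "grasp rectangle", and predictions are scored against ground truth with a rectangle metric. A minimal sketch follows; the field names and the 30-degree orientation tolerance are assumptions drawn from common usage in the grasp-detection literature, not details fixed by the review itself:

```python
import math

# A planar grasp as the 5-D rectangle commonly used in this literature:
# (x, y) is the gripper centre in image coordinates, theta the gripper
# orientation in radians, w the opening width, h the finger height.
def make_grasp(x, y, theta, w, h):
    return {"x": x, "y": y, "theta": theta, "w": w, "h": h}

def angle_close(g1, g2, tol_deg=30.0):
    """One half of the usual 'rectangle metric': a predicted grasp must
    match a ground-truth grasp in orientation within a tolerance
    (30 degrees is the value typically quoted)."""
    # A parallel-jaw grasp is symmetric under a 180-degree rotation,
    # so compare angles modulo pi.
    diff = abs(g1["theta"] - g2["theta"]) % math.pi
    diff = min(diff, math.pi - diff)
    return math.degrees(diff) <= tol_deg
```

The other half of the metric, a Jaccard (intersection-over-union) threshold between the two rectangles, is omitted here for brevity.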


2021, Vol ahead-of-print (ahead-of-print)
Author(s): Xuan Zhao, Hancheng Yu, Mingkui Feng, Gang Sun

Purpose
Robot automatic grasping has important application value in industrial settings. Recent works have explored the performance of deep learning for robotic grasp detection. They typically use oriented anchor boxes (OABs) as the detection prior and achieve better performance than previous works. However, the parameters of their regression loss belong to different coordinate systems, which may reduce regression accuracy. This paper aims to propose an oriented regression loss that resolves this inconsistency among the loss parameters.

Design/methodology/approach
In the oriented loss, the centre-coordinate errors between the ground-truth and predicted grasp rectangles are rotated to align with the vertical and horizontal axes of the OAB. The direction error is then used as an orientation factor and combined with the rotated centre-coordinate errors and the width and height errors of the predicted grasp rectangle.

Findings
The proposed oriented regression loss is evaluated on the YOLO-v3 framework for the grasp detection task. It yields state-of-the-art performance, with an accuracy of 98.8% and a speed of 71 frames per second on a GTX 1080Ti, on the Cornell dataset.

Originality/value
This paper proposes an oriented loss that improves the regression accuracy of deep learning for grasp detection. The authors apply the proposed deep grasp network to a visual-servo intelligent crane. Experimental results indicate that the approach is accurate and robust enough for real-time grasping applications.
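The idea of rotating the centre errors into the anchor's own frame and weighting by the direction error can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's exact formulation: the smooth-L1 base loss, the specific orientation weighting, and the function names are choices made here for clarity.

```python
import math

def smooth_l1(x):
    """Standard smooth-L1 (Huber-like) error used by many detectors."""
    return 0.5 * x * x if abs(x) < 1.0 else abs(x) - 0.5

def oriented_regression_loss(pred, truth, anchor_angle):
    """Sketch of an oriented regression loss for grasp rectangles.

    pred / truth: (x, y, w, h, theta) grasp rectangles.
    anchor_angle: orientation of the oriented anchor box (radians).
    The centre-coordinate errors are rotated into the anchor's frame so
    that every regression term shares one coordinate system.
    """
    dx = truth[0] - pred[0]
    dy = truth[1] - pred[1]
    # Rotate the centre error into the oriented anchor box frame.
    c, s = math.cos(anchor_angle), math.sin(anchor_angle)
    du = dx * c + dy * s     # error along the anchor's long axis
    dv = -dx * s + dy * c    # error perpendicular to it
    dw = truth[2] - pred[2]
    dh = truth[3] - pred[3]
    dtheta = truth[4] - pred[4]
    # Use the direction error as an orientation weighting factor.
    orientation_factor = 1.0 + abs(math.sin(dtheta))
    return orientation_factor * sum(smooth_l1(e) for e in (du, dv, dw, dh))
```

With this construction a perfect prediction incurs zero loss, and any residual centre offset is penalised in the anchor's frame rather than the image frame, which is the consistency property the abstract argues for.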




Author(s): Stellan Ohlsson
