Antipodal Robotic Grasping using Generative Residual Convolutional Neural Network

Author(s): Sulabh Kumra, Shirin Joshi, Ferat Sahin
2021

Author(s): Jianhao Fang, Weifei Hu, Chuxuan Wang, Zhenyu Liu, Jianrong Tan

Abstract: Robotic grasping is an important task for various industrial applications. However, combining detection and grasping to move objects dynamically and efficiently remains a challenge for robotic grasping. Meanwhile, training and testing robotic algorithms in realistic environments is time consuming. Here we present a framework for dynamic robotic grasping based on a deep Q-network (DQN) in a virtual grasping space. The proposed dynamic robotic grasping framework mainly consists of the DQN, a convolutional neural network (CNN), and a virtual model of robotic grasping. After observing the result generated by the generative grasping convolutional neural network (GG-CNN), the robotic manipulator conducts actions according to the Q-network. Different actions generate different rewards, which are used to update the neural network through the loss function. The goal of this method is to find a reasonable strategy that optimizes the total reward and finally accomplishes a dynamic grasping process. In tests in the virtual space, we achieve an 85.5% grasp success rate on a set of previously unseen objects, which demonstrates the accuracy of the DQN-enhanced GG-CNN model. The experimental results show that the DQN can efficiently enhance the GG-CNN by considering the grasping procedure (i.e., the grasping time and the gripper's posture), which stabilizes the grasping procedure and increases the success rate of robotic grasping.
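The reward-driven update described in the abstract follows the standard DQN recipe: the Q-value of the action taken is regressed toward the temporal-difference target r + γ · max_a′ Q(s′, a′) under a mean-squared loss. A minimal sketch of that computation is below; the function names, batch shapes, and numbers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dqn_td_targets(rewards, next_q_values, gamma=0.99, dones=None):
    """TD targets r + gamma * max_a' Q(s', a') for a batch of transitions."""
    rewards = np.asarray(rewards, dtype=float)
    next_q_values = np.asarray(next_q_values, dtype=float)
    max_next_q = next_q_values.max(axis=1)  # greedy value of the next state
    if dones is None:
        dones = np.zeros_like(rewards)
    # Terminal transitions bootstrap from zero future value.
    return rewards + gamma * max_next_q * (1.0 - np.asarray(dones, dtype=float))

def dqn_loss(q_taken, targets):
    """Mean-squared TD error used to update the Q-network's weights."""
    q_taken = np.asarray(q_taken, dtype=float)
    return float(np.mean((q_taken - np.asarray(targets, dtype=float)) ** 2))

# Example: a batch of 2 transitions with 3 discrete gripper actions.
targets = dqn_td_targets(rewards=[1.0, 0.0],
                         next_q_values=[[0.2, 0.5, 0.1],
                                        [0.0, 0.3, 0.4]],
                         gamma=0.9,
                         dones=[1, 0])
print(targets)                       # [1.   0.36]
print(dqn_loss([0.8, 0.5], targets))
```

In a full agent these targets would drive gradient descent on the Q-network; here the loss is computed directly to show the quantity being minimized.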


2020
Author(s): S Kashin, D Zavyalov, A Rusakov, V Khryashchev, A Lebedev

2020, Vol 2020 (10), pp. 181-1-181-7
Author(s): Takahiro Kudo, Takanori Fujisawa, Takuro Yamaguchi, Masaaki Ikehara

Image deconvolution has recently become an important issue. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is a classic image-deblurring problem that assumes the PSF is known and spatially invariant. Recently, convolutional neural networks (CNNs) have been used for non-blind deconvolution. Though CNNs can deal with complex variations in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs encountered in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing from a deblurred image. Our method has three key points. The first is that our network architecture preserves both large and small features in the image. The second is that the training dataset is created to preserve details. The third is that we extend the images to minimize the effects of large ringing at the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method both quantitatively and qualitatively.

