Robotic Grasping
Recently Published Documents


TOTAL DOCUMENTS: 318 (FIVE YEARS: 141)
H-INDEX: 24 (FIVE YEARS: 5)

Author(s): Hanbo Zhang, Deyu Yang, Han Wang, Binglei Zhao, Xuguang Lan, ...

2021, Vol. ahead-of-print (ahead-of-print)
Author(s): Yongxiang Wu, Yili Fu, Shuguo Wang

Purpose: This paper aims to use a fully convolutional network (FCN) to predict pixel-wise antipodal grasp affordances for unknown objects and to improve grasp detection performance through multi-scale feature fusion.

Design/methodology/approach: A modified FCN is used as the backbone to extract pixel-wise features from the input image, which are fused with multi-scale context information gathered by a three-level pyramid pooling module to make more robust predictions. Based on the proposed unified feature embedding framework, two head networks are designed to implement different grasp rotation prediction strategies (regression and classification), and their performance is evaluated and compared with a defined point metric. The regression network is further extended to predict grasp rectangles for comparison with previous methods and for real-world robotic grasping of unknown objects.

Findings: The ablation study of the pyramid pooling module shows that multi-scale information fusion significantly improves model performance. The regression approach outperforms the classification approach built on the same feature embedding framework on two data sets. The regression network achieves state-of-the-art accuracy (up to 98.9%) and speed (4 ms per image) and a high success rate (97% for household objects, 94.4% for adversarial objects and 95.3% for objects in clutter) in the unknown-object grasping experiment.

Originality/value: A novel pixel-wise grasp affordance prediction network based on multi-scale feature fusion is proposed to improve grasp detection performance. Two prediction approaches are formulated and compared within the proposed framework. The method achieves excellent performance on three benchmark data sets and in real-world robotic grasping experiments.
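To make the described architecture concrete, here is a minimal PyTorch sketch of the same idea: a small convolutional encoder, a three-level pyramid pooling module for multi-scale context, and a regression-style head that predicts per-pixel grasp quality, rotation (as cos/sin of the angle) and gripper width. The class names, channel sizes and pooling bins are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: layer sizes and names are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Pool the feature map at three scales, project, upsample and concatenate."""
    def __init__(self, in_ch, bins=(1, 2, 4)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),
                nn.Conv2d(in_ch, in_ch // len(bins), kernel_size=1, bias=False),
                nn.ReLU(inplace=True))
            for b in bins
        ])

    def forward(self, x):
        h, w = x.shape[2:]
        ctx = [F.interpolate(stage(x), size=(h, w), mode="bilinear",
                             align_corners=False) for stage in self.stages]
        return torch.cat([x] + ctx, dim=1)  # fuse local features with multi-scale context

class GraspFCN(nn.Module):
    """Pixel-wise antipodal grasp prediction: quality, angle (cos/sin) and width maps."""
    def __init__(self, in_ch=1, feat=96):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        self.ppm = PyramidPooling(feat)           # three-level pyramid pooling module
        fused = feat + (feat // 3) * 3            # encoder features + pooled context
        self.quality = nn.Conv2d(fused, 1, kernel_size=1)  # grasp affordance map
        self.angle = nn.Conv2d(fused, 2, kernel_size=1)    # regression head: cos(2θ), sin(2θ)
        self.width = nn.Conv2d(fused, 1, kernel_size=1)    # gripper opening width map

    def forward(self, depth):
        x = self.ppm(self.encoder(depth))
        return self.quality(x), self.angle(x), self.width(x)

# Example: a single 300x300 depth image yields dense per-pixel grasp predictions.
q, ang, w = GraspFCN()(torch.randn(1, 1, 300, 300))
print(q.shape, ang.shape, w.shape)
```

The classification variant mentioned in the abstract would presumably replace the two-channel angle regression with a discretized angle-bin output on the same fused features; the comparison between the two is the point of the paper's head-network study.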


2021, Vol. 13 (24), pp. 13686
Author(s): Marwan Qaid Mohammed, Lee Chung Kwek, Shing Chyi Chua, Abdulaziz Salamah Aljaloud, Arafat Al-Dhaqm, ...

In robotic manipulation, object grasping is a basic yet challenging task. Dexterous grasping requires intelligent visual observation of the target objects, with an emphasis on spatial equivariance when learning the grasping policy. In this paper, two significant challenges associated with robotic grasping in clutter and occlusion scenarios are addressed. The first challenge is the coordination of push and grasp actions: in a well-ordered object scenario the robot may occasionally fail to disrupt the arrangement of the objects, whereas in a randomly cluttered object scenario the pushing behavior may be less efficient, as many objects are more likely to be pushed out of the workspace. The second challenge is the avoidance of occlusion that occurs when the camera itself is entirely or partially occluded during a grasping action. This paper proposes a multi-view change observation-based approach (MV-COBA) to overcome these two problems. The proposed approach has two parts: (1) multiple cameras are used to set up multiple views, addressing the occlusion issue; and (2) visual change observation based on the pixel depth difference is used to coordinate push and grasp actions. In simulation experiments, the proposed approach achieved average grasp success rates of 83.6%, 86.3%, and 97.8% in the cluttered, well-ordered object, and occlusion scenarios, respectively.
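As an illustration of the change-observation idea, the sketch below compares depth images captured before and after a push across several camera views and uses the fraction of changed pixels to choose the next action. The function names, thresholds and decision rule are assumptions made for illustration and not the authors' MV-COBA implementation.

```python
# Minimal NumPy sketch of pixel-depth-difference change observation (illustrative only).
import numpy as np

def scene_change_ratio(depth_before, depth_after, delta=0.01):
    """Fraction of pixels whose depth changed by more than `delta` metres."""
    changed = np.abs(depth_after - depth_before) > delta
    return changed.mean()

def select_action(depth_views_before, depth_views_after, change_threshold=0.05):
    """Fuse per-camera change observations (multiple views cover for an occluded
    camera) and decide whether to push again or attempt a grasp."""
    ratios = [scene_change_ratio(b, a)
              for b, a in zip(depth_views_before, depth_views_after)]
    # Rely on the most informative (least occluded) view: the one seeing most change.
    if max(ratios) < change_threshold:
        return "push"    # the push barely disturbed the objects; push again
    return "grasp"       # the layout changed enough to expose a graspable object

# Example with two synthetic 224x224 depth views.
before = [np.random.rand(224, 224) for _ in range(2)]
after = [b + 0.02 * (np.random.rand(224, 224) > 0.9) for b in before]
print(select_action(before, after))
```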


2021
Author(s): Yanmin Wu, Yunzhou Zhang, Delong Zhu, Xin Chen, Sonya Coleman, ...

2021
Author(s): Sachith Dewthilina Liyanage, Abdul Md Mazid, Pavel Dzitac

Author(s): Anil Kurkcu, Cihan Acar, Domenico Campolo, Keng Peng Tee

Author(s): Munhyeong Kim, Sungho Kim
