grasp synthesis
Recently Published Documents

TOTAL DOCUMENTS: 90 (five years: 15)
H-INDEX: 17 (five years: 2)

2022 · Vol 168 · pp. 104575
Author(s): Ali Mehrkish, Farrokh Janabi-Sharifi

2021 · pp. 1-13
Author(s): Reiko Takahashi, Natsuki Miyata, Yusuke Maeda, Yuta Nakanishi

2021 · Vol 8
Author(s): Sabhari Natarajan, Galen Brown, Berk Calli

In this work, we present several heuristic-based and data-driven active vision strategies for viewpoint optimization of an arm-mounted depth camera to aid robotic grasping. These strategies aim to collect data efficiently in order to boost the performance of an underlying grasp synthesis algorithm. We created an open-source benchmarking platform in simulation (https://github.com/galenbr/2021ActiveVision) and provide an extensive study assessing the performance of the proposed methods and comparing them against various baseline strategies. We also provide an experimental study with a real-world two-finger parallel-jaw gripper setup, utilizing an existing grasp planning benchmark from the literature. With these analyses, we quantitatively demonstrate the versatility of heuristic methods that prioritize certain types of exploration, and qualitatively show their robustness both to novel objects and to the transition from simulation to the real world. Finally, we identify both scenarios in which our methods did not perform well and scenarios that are objectively difficult, and discuss which avenues for future research show promise.
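The authors' code lives in the linked repository; purely as an illustration of the kind of viewpoint-selection heuristic the abstract describes, the sketch below greedily picks the candidate camera pose on a view sphere that would reveal the most not-yet-observed surface patches. The patch/normal representation, function names, and the visibility threshold are assumptions for this sketch, not the paper's implementation.

```python
import math

def view_direction(azimuth_deg, elevation_deg):
    """Unit vector from the object centre toward a camera pose on a
    view sphere, parameterised by azimuth/elevation in degrees."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))

def newly_visible(view, normals, seen, cos_thresh=0.3):
    """Indices of surface patches whose outward normal faces the camera
    (dot product above cos_thresh) and that have not been observed yet."""
    return {i for i, n in enumerate(normals)
            if i not in seen
            and sum(a * b for a, b in zip(view, n)) > cos_thresh}

def next_best_view(candidates, normals, seen):
    """Greedy heuristic: choose the candidate (azimuth, elevation) that
    reveals the largest number of unseen surface patches."""
    return max(candidates,
               key=lambda v: len(newly_visible(view_direction(*v),
                                               normals, seen)))

# Toy example: three patches facing +x, +y, +z; the +x patch was
# already seen from the first viewpoint, so an oblique view that
# exposes both remaining patches wins.
normals = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
seen = {0}
candidates = [(0, 0), (90, 0), (90, 45)]
print(next_best_view(candidates, normals, seen))  # → (90, 45)
```

In a real system the "patches" would come from a TSDF or voxel map built from depth images, and the candidate set from reachable camera poses, but the greedy structure stays the same.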


2021 · Vol 102 (3)
Author(s): Jacques Janse van Vuuren, Liqiong Tang, Ibrahim Al-Bahadly, Khalid Mahmood Arif

2021 · Vol 67 · pp. 102032
Author(s): João Pedro Carvalho de Souza, Carlos M. Costa, Luís F. Rocha, Rafael Arrais, A. Paulo Moreira, ...

Author(s): Yuta Nakanishi, Reiko Takahashi, Natsuki Miyata, Yusuke Maeda

2020 · Vol 17 (2) · pp. 172988142092110
Author(s): Hui Wei, Yang Chen

In this article, a novel, efficient grasp synthesis method is introduced that can be used for closed-loop robotic grasping. Using only a single monocular camera, the proposed approach detects contour information from an image in real time and then determines the precise position of an object to be grasped by matching its contour against a given template. The approach is much lighter than the currently prevailing methods, especially vision-based deep-learning techniques, in that it requires no prior training. By combining state-of-the-art techniques for edge detection, superpixel segmentation, and shape matching, the visual servoing method does not rely on accurate camera calibration or position control and is able to adapt to dynamic environments. Experiments show that the approach provides high levels of compliance, performance, and robustness across diverse experimental environments.
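The paper's full pipeline (edge detection, superpixel segmentation, shape matching) is not reproduced here; as a minimal sketch of just the shape-matching step, the snippet below compares contours via a translation- and scale-invariant descriptor built from normalized second central moments. This is a simplified stand-in for the Hu-moment-style matching such methods typically use; the function names and the descriptor itself are assumptions for illustration, not the authors' method.

```python
def shape_descriptor(points):
    """Translation- and scale-invariant descriptor of a contour,
    given as a list of (x, y) points: the second central moments
    normalized by total variance."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points) / n
    mu02 = sum((y - cy) ** 2 for _, y in points) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in points) / n
    s = mu20 + mu02  # scale normalizer (total variance)
    return (mu20 / s, mu02 / s, mu11 / s)

def match_score(contour, template):
    """Lower is better; 0 means the descriptors coincide."""
    a = shape_descriptor(contour)
    b = shape_descriptor(template)
    return sum(abs(x - y) for x, y in zip(a, b))

# A square matches a scaled copy of itself exactly, while an
# elongated rectangle scores noticeably worse.
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
big_square = [(0, 0), (4, 0), (4, 4), (0, 4)]
rectangle = [(0, 0), (8, 0), (8, 1), (0, 1)]
print(match_score(big_square, square))  # → 0.0
print(match_score(rectangle, square) > 0.5)  # → True
```

A production system would extract the contour points from an edge map (e.g. a Canny detector) and use a rotation-invariant descriptor as well; the structure of "descriptor distance → best template pose" is what carries over.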

