Grasp planning

Author(s):  
Bartholomew O. Nnaji
2010 ◽  
Vol 76 (762) ◽  
pp. 331-339
Author(s):  
Kensuke HARADA ◽  
Tokuo TSUJI ◽  
Kenji KANEKO ◽  
Fumio KANEHIRO ◽  
Kenichi MARUYAMA

Author(s):  
Robert Krug ◽  
Todor Stoyanov ◽  
Manuel Bonilla ◽  
Vinicio Tincani ◽  
Narunas Vaskevicius ◽  
...  

Author(s):  
Yao Li ◽  
T. Kesavadas

Abstract One of the expectations for the next generation of industrial robots is to work collaboratively with humans as robotic co-workers. Robotic co-workers must be able to communicate with human collaborators intelligently and seamlessly. However, most industrial robots in use today are not good at understanding human intentions and decisions. We demonstrate a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) which can directly deliver human cognition to robots through a headset. The BCI is applied to a part-picking robot and sends decisions to the robot while the operator visually inspects the quality of parts. The BCI is verified through a human subject study. In the study, a camera by the side of the conveyor takes a photo of each part and presents it to the operator automatically. When the operator looks at the photo, electroencephalography (EEG) signals are collected through the BCI. The inspection decision is extracted from SSVEPs in the EEG. When a defective part is identified by the operator, the signal is communicated to the robot, which locates the defective part through a second camera and removes it from the conveyor. The robot can grasp various parts with our grasp planning algorithm (2FRG). We have developed a CNN-CCA model for SSVEP extraction. The model is trained on a dataset collected in our offline experiment. Our approach outperforms the existing CCA, CCA-SVM, and PSD-SVM models. The CNN-CCA is further validated in an online experiment, achieving 93% accuracy in identifying and removing a defective part.
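The CNN-CCA model itself is not described here, but the canonical correlation analysis (CCA) baseline it extends is standard for SSVEP detection: the EEG segment is correlated against sinusoidal reference signals at each candidate stimulus frequency, and the frequency with the highest canonical correlation is taken as the operator's choice. Below is a minimal NumPy sketch of that baseline (function names and parameters are illustrative, not from the paper):

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between column spaces of X and Y.

    X: (samples, channels) EEG segment; Y: (samples, refs) reference set.
    """
    X = X - X.mean(axis=0)          # CCA assumes centered data
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)         # orthonormal basis of each column space
    Qy, _ = np.linalg.qr(Y)
    # Canonical correlations are the singular values of Qx^T Qy
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference signals at freq and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(cols)

def classify_ssvep(eeg, fs, candidate_freqs):
    """Pick the stimulus frequency whose references best match the EEG."""
    scores = {f: cca_max_corr(eeg, ssvep_reference(f, fs, eeg.shape[0]))
              for f in candidate_freqs}
    return max(scores, key=scores.get)
```

The CNN-CCA, CCA-SVM, and PSD-SVM models compared in the abstract replace or augment this correlation step with learned features; the decision rule (argmax over candidate stimulus frequencies) is the same.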


2011 ◽  
Vol 08 (04) ◽  
pp. 761-775 ◽  
Author(s):  
ZHIXING XUE ◽  
RUEDIGER DILLMANN

Grasping can be seen as two steps: placing the hand at a grasping pose and closing the fingers. In this paper, we introduce an efficient algorithm for grasping pose generation. Preshaping and eigen-grasp actions reduce the dimensionality of the space of possible hand configurations. The object to be grasped is decomposed into boxes drawn from a discrete set of sizes. By performing finger reachability analysis on the boxes, the kinematic feasibility of a grasp can be determined. If a reachable grasp is force-closure and can be performed by the robotic arm, its grasping forces are optimized and the grasp can be executed. The novelty of our algorithm is that it takes into account both the geometric information of the object and the kinematic information of the hand to determine the grasping pose, so that a reachable grasping pose can be found very quickly. Real experiments with two different robotic hands show the efficiency and feasibility of our method.
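The force-closure test mentioned in the abstract is a standard check: a grasp is force-closure when the contact wrenches (friction-cone edge forces plus their torques) positively span the wrench space, so any external disturbance can be resisted. A minimal planar (2D) sketch, not the authors' implementation, using an approximate positive-spanning test over sampled directions:

```python
import numpy as np

def contact_wrenches(contacts, mu):
    """Primitive wrenches for planar contacts.

    contacts: list of (position, inward_normal) pairs; mu: friction coeff.
    Each contact contributes its two friction-cone edge forces, giving
    wrenches (fx, fy, torque) about the origin.
    """
    ws = []
    for p, n in contacts:
        p = np.asarray(p, float)
        n = np.asarray(n, float)
        n = n / np.linalg.norm(n)
        t = np.array([-n[1], n[0]])           # tangent direction
        for s in (+1.0, -1.0):
            f = n + s * mu * t                # friction-cone edge force
            tau = p[0] * f[1] - p[1] * f[0]   # planar torque (r x f)
            ws.append([f[0], f[1], tau])
    return np.array(ws)

def is_force_closure(wrenches, n_dirs=2000, eps=1e-6):
    """Approximate force-closure check by direction sampling.

    The wrenches positively span R^3 iff for every direction d some
    wrench has a positive component along d; we test this on a
    deterministic Fibonacci-sphere sample of directions.
    """
    i = np.arange(n_dirs)
    z = 1.0 - 2.0 * (i + 0.5) / n_dirs
    r = np.sqrt(1.0 - z * z)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    dirs = np.column_stack([r * np.cos(phi), r * np.sin(phi), z])
    dots = dirs @ wrenches.T                  # (n_dirs, n_wrenches)
    return bool(np.all(dots.max(axis=1) > eps))
```

For example, two antipodal contacts on opposite faces of an object with friction achieve force closure, while a single contact does not. Exact implementations replace the direction sampling with a linear program or a convex-hull test for the origin; the sampled version here is only an illustrative approximation.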

