grasp type: Recently Published Documents

Total documents: 35 (last five years: 13)
H-index: 9 (last five years: 3)

Cortex, 2021, Vol. 139, pp. 152-165
Author(s): Fredrik Bergström, Moritz Wurm, Daniela Valério, Angelika Lingnau, Jorge Almeida

Ergonomics, 2020, Vol. 63 (11), pp. 1414-1424
Author(s): Xiaojing Chen, Zhiguo Li, Yuqing Wang

2020
Author(s): Anisha Rastogi, Francis R. Willett, Jessica Abreu, Douglas C. Crowder, Brian A. Murphy, ...

Abstract: Intracortical brain-computer interfaces (iBCIs) have the potential to restore hand grasping and object interaction to individuals with tetraplegia. Optimal grasping and object interaction require simultaneous production of both force and grasp outputs. However, since overlapping neural populations are modulated by both parameters, grasp type could affect how well forces are decoded from motor cortex in a closed-loop force iBCI. This work therefore quantified the neural representation and offline decoding performance of discrete hand grasps and force levels in two participants with tetraplegia. Participants attempted to produce three discrete forces (light, medium, hard) using up to five hand grasp configurations. A two-way Welch ANOVA was applied to multiunit neural features to assess their modulation by force and grasp. Demixed principal component analysis was used to assess population-level tuning to force and grasp and to predict these parameters from neural activity. Three major findings emerged from this work: 1) force information was neurally represented and could be decoded across multiple hand grasps (and, in one participant, across attempted elbow extension as well); 2) grasp type affected force representation within multiunit neural features and offline force classification accuracy; and 3) grasp was classified more accurately and had greater population-level representation than force. These findings suggest that force and grasp have both independent and interacting representations within cortex, and that incorporating force control into real-time iBCI systems is feasible across multiple hand grasps, provided the decoder also accounts for grasp type.

Significance Statement: Intracortical brain-computer interfaces (iBCIs) have emerged as a promising technology to restore hand grasping and object interaction in people with tetraplegia. This study is among the first to quantify the degree to which hand grasp affects force-related (kinetic) neural activity and decoding performance in individuals with tetraplegia. The results enhance our understanding of how the brain encodes kinetic parameters across varying kinematic behaviors, and in particular the degree to which these parameters have independent versus interacting neural representations. Such investigations are a critical first step toward incorporating force control into human-operated iBCI systems, which would move the technology toward restoring more functional and naturalistic tasks.
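The core comparison in this abstract is force decoding within one grasp type versus across grasp types. As a rough illustration of that comparison only (not the authors' pipeline), the sketch below generates synthetic multiunit features in which force and grasp interact, then cross-validates a linear discriminant force classifier within a single grasp and pooled across grasps. The channel count, trial counts, noise level, and choice of LDA are all assumptions.

```python
# Rough illustration (not the authors' pipeline) of comparing force decoding
# within one grasp type vs. pooled across grasp types. All shapes, trial
# counts, and the LDA decoder are assumptions on synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 600, 40          # e.g. 40 multiunit channels (assumed)
force = rng.integers(0, 3, n_trials)    # 0 = light, 1 = medium, 2 = hard
grasp = rng.integers(0, 5, n_trials)    # up to five grasp configurations

# Synthetic features in which force and grasp both modulate activity, with an
# interaction term, mimicking overlapping neural populations.
W_f = rng.normal(size=(3, n_features))
W_g = rng.normal(size=(5, n_features))
W_fg = rng.normal(size=(3, 5, n_features))
X = (W_f[force] + W_g[grasp] + 0.5 * W_fg[force, grasp]
     + rng.normal(scale=2.0, size=(n_trials, n_features)))

# Force classification within a single grasp vs. across all grasps.
mask = grasp == 0
acc_within = cross_val_score(
    LinearDiscriminantAnalysis(), X[mask], force[mask], cv=5).mean()
acc_across = cross_val_score(
    LinearDiscriminantAnalysis(), X, force, cv=5).mean()
print(f"force accuracy, within grasp 0: {acc_within:.2f}")
print(f"force accuracy, across grasps:  {acc_across:.2f}")
```

With an interaction term present, the pooled decoder typically loses accuracy unless grasp type is supplied as an additional input, which mirrors the paper's conclusion that a force decoder should account for grasp.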


2020, Vol. 17 (02), pp. 2050008
Author(s): Julia Starke, Christian Eichmann, Simon Ottenhaus, Tamim Asfour

The human hand is a complex, highly articulated system that has been a source of inspiration in the design of humanoid robotic and prosthetic hands. Understanding the functionality of the human hand is crucial for the design and efficient control of such anthropomorphic robotic hands, and for transferring human versatility and dexterity to them. Although research in this area has made significant advances, the synthesis of grasp configurations based on observed human grasping data remains an unsolved and challenging task. In this work, we derive a novel constrained autoencoder model that encodes human grasping data in a compact representation. This representation captures the grasp type in a three-dimensional latent space and the object size as an explicit parameter constraint, allowing the direct synthesis of object-specific grasps. We train the model on 2250 grasps generated by 15 subjects using 35 diverse objects from the KIT and YCB object sets. In the evaluation, we show that the synthesized grasp configurations are human-like and have a high probability of success under pose uncertainty.
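The abstract describes an autoencoder whose three-dimensional latent space encodes grasp type while object size enters as an explicit constraint on the decoder. A minimal PyTorch sketch of such a size-conditioned autoencoder follows; the hand's degrees of freedom, the layer widths, and the concatenation-based conditioning are assumptions, since the paper's exact constraint formulation is not given here.

```python
# Minimal sketch of a size-conditioned grasp autoencoder. The 19-DoF hand
# configuration, layer widths, and the way object size enters the decoder
# are assumptions; the paper's exact constraint formulation may differ.
import torch
import torch.nn as nn

class GraspAutoencoder(nn.Module):
    def __init__(self, dof: int = 19, latent: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dof, 64), nn.ReLU(),
            nn.Linear(64, latent),                 # 3D latent space for grasp type
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent + 1, 64), nn.ReLU(),  # +1: object size constraint
            nn.Linear(64, dof),
        )

    def forward(self, joints: torch.Tensor, size: torch.Tensor) -> torch.Tensor:
        z = self.encoder(joints)
        return self.decoder(torch.cat([z, size], dim=-1))

model = GraspAutoencoder()
joints = torch.randn(8, 19)   # batch of hand configurations (assumed DoF)
size = torch.rand(8, 1)       # normalized object size per grasp (assumed scale)
recon = model(joints, size)
loss = nn.functional.mse_loss(recon, joints)
loss.backward()
```

Under this setup, synthesizing an object-specific grasp amounts to picking a point in the three-dimensional latent space and decoding it together with the target object's size.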


2019, Vol. 16 (06), pp. 1950041
Author(s): Jan Rosell, Raúl Suárez, Néstor García, Muhayy Ud Din

This paper addresses the problem of obtaining the motions required for a humanoid robot to perform grasp actions while mimicking the coordinated hand-arm movements that humans make. The first step is data acquisition and analysis, which consists of capturing human movements while grasping several everyday objects (covering four possible grasp types), mapping them to the robot, and computing the hand motion synergies for the pre-grasp and grasp phases (per grasp type). The grasp and motion synthesis step then generates potential grasps for a given object using the four family types and plans the motions with a bi-directional multi-goal sampling-based planner, which efficiently guides motion planning along the synergies in a reduced search space, resulting in paths with a human-like appearance. The approach has been tested in simulation, thoroughly compared with other state-of-the-art planning algorithms (obtaining better results), and implemented on a real robot.
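Hand motion synergies of the kind computed in the first step are commonly obtained as principal components of recorded joint angles. The sketch below shows the idea with scikit-learn on placeholder data; the joint count and number of synergies are assumptions, and the paper's own synergy computation may differ in detail.

```python
# Illustrative sketch of computing hand motion synergies as principal
# components of captured joint angles, one model per grasp type and phase.
# The joint count, frame count, and synergy dimensionality are assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
joint_angles = rng.normal(size=(500, 16))   # frames x hand joints (assumed)

synergies = PCA(n_components=3).fit(joint_angles)
print("variance explained:", synergies.explained_variance_ratio_.sum())

# A planner can sample in the low-dimensional synergy space and map the
# samples back to full joint configurations:
z = rng.normal(size=(1, 3))
full_config = synergies.inverse_transform(z)
```

Restricting sampling to this low-dimensional space is what lets the planner search efficiently while keeping the resulting hand postures human-like.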


2019, Vol. 4 (2), pp. 784-791
Author(s): Qingkai Lu, Tucker Hermans
