Dextrous Hands: Human, Prosthetic, and Robotic

1997 ◽  
Vol 6 (1) ◽  
pp. 29-56 ◽  
Author(s):  
Lynette Jones

The sensory and motor capacities of the human hand are reviewed in the context of providing a set of performance characteristics against which prosthetic and dextrous robot hands can be evaluated. The sensors involved in processing tactile, thermal, and proprioceptive (force and movement) information are described, together with details on their spatial densities, sensitivity, and resolution. The wealth of data on the human hand's sensory capacities is not matched by an equivalent database on motor performance. Attempts at quantifying manual dexterity have met with formidable technological difficulties due to the conditions under which many highly trained manual skills are performed. Limitations in technology have affected not only the quantification of human manual performance but also the development of prosthetic and robotic hands. Most prosthetic hands in use at present are simple grasping devices, and imparting a “natural” sense of touch to these hands remains a challenge. Several dextrous robot hands exist as research tools and even though some of these systems can outperform their human counterparts in the motor domain, they are still very limited as sensory processing systems. It is in this latter area that information from studies of human grasping and processing of object information may make the greatest contribution.

2020 ◽  
Vol 17 (02) ◽  
pp. 2050008 ◽  
Author(s):  
Julia Starke ◽  
Christian Eichmann ◽  
Simon Ottenhaus ◽  
Tamim Asfour

The human hand is a complex, highly articulated system that has been a source of inspiration in the design of humanoid robotic and prosthetic hands. Understanding the functionality of the human hand is crucial for the design and efficient control of such anthropomorphic robotic hands, and for the transfer of human versatility and dexterity to them. Although research in this area has made significant advances, the synthesis of grasp configurations based on observed human grasping data is still an unsolved and challenging task. In this work we derive a novel constrained autoencoder model that encodes human grasping data in a compact representation. This representation encodes both the grasp type, in a three-dimensional latent space, and the object size, as an explicit parameter constraint, allowing the direct synthesis of object-specific grasps. We train the model on 2250 grasps generated by 15 subjects using 35 diverse objects from the KIT and YCB object sets. In the evaluation we show that the synthesized grasp configurations are human-like and have a high probability of success under pose uncertainty.
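
To make the kind of architecture described above concrete, the following minimal PyTorch sketch pairs a three-dimensional grasp-type latent space with an explicit object-size input on the decoder side. The joint-vector dimension, layer sizes, and the metre-valued size parameter are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConstrainedGraspAutoencoder(nn.Module):
    """Autoencoder with a 3-D grasp-type latent space and an explicit
    object-size input conditioning the decoder (illustrative sketch)."""

    def __init__(self, n_joints: int = 22, latent_dim: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_joints, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        # The decoder receives the latent code plus the object size (1 scalar).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 1, 64), nn.ReLU(),
            nn.Linear(64, n_joints),
        )

    def forward(self, joints: torch.Tensor, object_size: torch.Tensor):
        z = self.encoder(joints)                        # grasp-type code
        x_hat = self.decoder(torch.cat([z, object_size], dim=-1))
        return x_hat, z

# Synthesis: choose a point in the latent grasp space and an object size,
# then decode an object-specific hand configuration.
model = ConstrainedGraspAutoencoder()
z = torch.zeros(1, 3)                  # a grasp-type code
size = torch.tensor([[0.07]])          # object size (assumed unit: metres)
grasp_config = model.decoder(torch.cat([z, size], dim=-1))
```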


Author(s):  
Yunus Ziya Arslan ◽  
Yuksel Hacioglu ◽  
Yener Taskin ◽  
Nurkan Yagiz

Due to the dexterous manipulation capability and low metabolic energy consumption of the human hand, many robotic hands inspired by the human hand have been designed and manufactured. One of the technical challenges in designing biomimetic robot hands is the control scheme. The control algorithm used in a robot hand is expected to ensure that reference trajectories of the fingertips and joint angles are tracked with high accuracy, reliability, and smoothness. In this chapter, the trajectory-tracking performances of different types of widely used control strategies (i.e., classical, robust, and intelligent controllers) are comparatively evaluated. To accomplish this evaluation, PID, sliding-mode, and fuzzy logic controllers are implemented on a biomimetic robot hand finger model, and the simulation results are quantitatively analyzed. The pros and cons of the corresponding control algorithms are also discussed.
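
As a minimal illustration of the simplest of the three controllers compared in the chapter, the sketch below runs a discrete PID loop that tracks a reference joint-angle trajectory for a single finger joint. The plant model, gains, and time step are assumptions chosen for illustration, not values taken from the chapter.

```python
import numpy as np

def pid_track(theta_ref, kp=8.0, ki=2.0, kd=0.4, dt=0.001):
    """Discrete PID tracking of a reference joint-angle trajectory for a
    single finger joint, modelled as a unit-inertia, lightly damped plant."""
    theta, omega = 0.0, 0.0            # joint angle and angular velocity
    integral, prev_err = 0.0, 0.0
    response = []
    for ref in theta_ref:
        err = ref - theta
        integral += err * dt
        derivative = (err - prev_err) / dt
        torque = kp * err + ki * integral + kd * derivative
        # Assumed plant: unit inertia with viscous damping b = 0.1.
        omega += (torque - 0.1 * omega) * dt
        theta += omega * dt
        prev_err = err
        response.append(theta)
    return np.array(response)

# Track a 1 rad step reference over one second of simulated time.
t = np.arange(0.0, 1.0, 0.001)
theta = pid_track(np.ones_like(t))
```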


2013 ◽  
Vol 10 (02) ◽  
pp. 1350001 ◽  
Author(s):  
Michael A. Saliba ◽  
Alistaire Chetcuti ◽  
Matthew J. Farrugia

In this work, we take a new approach to determining the quantified contribution of various attributes of the human hand to its dexterity, with the aim of transposing this knowledge into supportive guidelines for the design of anthropomorphic robotic and prosthetic hands. We have carried out a number of standard dexterity tests on normal human subjects with various physical constraints applied to selected attributes of their hands, and have analyzed the results to extract knowledge on the quantified contribution of each attribute to overall manual dexterity. This knowledge is particularly significant in cases where it is important to optimize the trade-off between dexterity and complexity in the design of artificial hands. Data were collected over 35 h of direct experimentation involving 40 volunteers in two separate runs, and the results represent empirically derived upper limits on the achievable performance of humanoid robot hands having the specified deficiencies. We discuss the implications of our results in the context of a minimal anthropomorphic dexterous hand, which would incorporate the lowest possible number of degrees of freedom and other attributes while still retaining an acceptable level of dexterity. We end the paper with a suggestion on how the general approach presented here could be extended to provide a platform for quantifying the dexterity of anthropomorphic artificial hands.


2021 ◽  
Vol 15 (2) ◽  
pp. 139-139
Author(s):  
Naoki Asakawa

Due to changes in the global industrial structure, the number of employees in the manufacturing industry has decreased in developed countries. One of the solutions to this situation offered by Industry 4.0 is “the utilization of robots and AI as alternatives to skilled workers.” This solution has been applied to various operations conventionally performed by skilled workers and has yielded consistent results. A skilled worker has two skills, namely, “physical operation skill” and “decision-making skill,” which correspond to the utilization of robots and AI, respectively. Conventionally, robots have simply played back the programs they were taught. However, owing to feedback technologies using force, position, and various other sensors, robots have come to be able to perform smart operations. In some of these operations, the capabilities of robots exceed those of human workers. For example, while humans are highly adaptive to various operations, it is difficult for them to maintain a constant force or position for long periods of time. Generally, humans make decisions about operations according to their experience, which is gained through many instances of trial and error. The trial-and-error learning of AI has now become significantly superior to that of humans in terms of both the number of trials and their speed. As a result, many systems can find operational strategies or answers much faster than humans can. This special issue features papers on robot hands, path planning, kinematics, and AI. The papers related to robot hands present an actuator based on new principles, new movements, and the realization of the precise sense of the human hand. The papers related to path planning present path generation on the basis of CAD data, path generation using image processing, automatic path generation on the basis of environmental information, and the prediction and correction of error. Path generation using VR technology and error compensation using an AI technique are also presented. A paper related to kinematics presents the analysis and evaluation of a new mechanism aimed at new applications in the field of machining. In closing, I would like to thank the authors, reviewers, and editors, without whose hard work and earnest cooperation this issue could not have been completed and presented.


2019 ◽  
Vol 5 (1) ◽  
pp. 207-210
Author(s):  
Tolgay Kara ◽  
Ahmad Soliman Masri

Millions of people around the world have lost their upper limbs, mainly due to accidents and wars. Recently, the demand for prosthetic limbs in the Middle East has increased dramatically due to the ongoing wars in the region. Commercially available prosthetic limbs are expensive, while the most economical method available for controlling them is electromyography (EMG). Researchers working on EMG-controlled prosthetic limbs face several challenges, including limited functionality, especially in prosthetic hands. A major unsolved issue is that currently available low-cost EMG-controlled prosthetic hands cannot enable the user to grasp objects of various types and shapes, and cannot decide on the hand gesture needed for efficient use of the object. In this paper, a computer vision-based mechanism is proposed to detect and recognize objects and apply the optimal hand gesture through visual feedback. Objects are classified into groups, and the hand gesture that is most efficient for grasping and using the targeted object is applied. A simulation model of human hand kinematics is developed to test the efficacy of the proposed method. Eighty different types of objects are detected, recognized, and classified in the simulation tests, with only two electrodes supplying the input needed to perform the action. Simulation results demonstrate the performance of the proposed EMG-controlled prosthetic hand in maintaining optimal hand gestures in a computer environment. The results are promising for helping disabled people handle and use objects more efficiently without higher costs.
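
A minimal sketch of the decision stage described above, under assumed labels: the recognized object class is looked up in a table of object groups, each group is associated with one hand gesture, and that gesture is the one the (simulated) hand executes on the EMG trigger. The class names, groups, and gesture labels here are hypothetical.

```python
# Hypothetical mapping from recognized object classes to object groups and
# from groups to hand gestures; all labels are illustrative only.
OBJECT_GROUPS = {
    "bottle": "cylindrical", "cup": "cylindrical",
    "coin": "small_flat", "key": "small_flat",
    "ball": "spherical", "apple": "spherical",
}
GROUP_TO_GESTURE = {
    "cylindrical": "power_grasp",
    "small_flat": "precision_pinch",
    "spherical": "spherical_grasp",
}

def select_gesture(detected_class: str, default: str = "power_grasp") -> str:
    """Return the grasp gesture for a recognized object class."""
    group = OBJECT_GROUPS.get(detected_class)
    return GROUP_TO_GESTURE.get(group, default)

# A two-electrode EMG open/close signal then triggers the selected gesture
# instead of a fixed open/close motion.
print(select_gesture("cup"))      # -> "power_grasp"
```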


2019 ◽  
Vol 31 (1) ◽  
pp. 16-26 ◽  
Author(s):  
Haruhisa Kawasaki ◽  
Tetsuya Mouri

Humanoid robot hands are expected to replace human hands in the dexterous manipulation of objects. This paper presents a review of humanoid robot hand research and development. Humanoid hands are also applied to multifingered haptic interfaces, hand rehabilitation support systems, sEMG prosthetic hands, telepalpation systems, etc. The application systems developed in our group are briefly introduced.


Author(s):  
Edgar Simo-Serra ◽  
Francesc Moreno-Noguer ◽  
Alba Perez-Gracia

In this paper, we explore the idea of designing non-anthropomorphic multi-fingered robotic hands for tasks that replicate the motion of the human hand. Taking as input data a finite set of rigid-body positions for the five fingertips, we develop a method to perform dimensional synthesis for a kinematic chain with a tree structure, with five branches that share three common joints. We state the forward kinematics equations of relative displacements for each serial chain expressed as dual quaternions, and solve for up to five chains simultaneously to reach a number of positions along the hand trajectory. This is done using a hybrid global numerical solver that integrates a genetic algorithm and a Levenberg-Marquardt local optimizer. Although the number of candidate solutions in this problem is very high, the use of the genetic algorithm allows us to perform an exhaustive exploration of the solution space to obtain a set of solutions. We can then choose some of the solutions based on the specific task to perform. Note that these designs match the task exactly while generally having a finger design radically different from that of the human hand.
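
The hybrid global-local strategy can be sketched as follows: a genetic-algorithm-style population search proposes candidate design parameters, and each candidate is refined with a Levenberg-Marquardt least-squares step. The residual function below is only a placeholder for the dual-quaternion synthesis equations, and the parameter dimension and population settings are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, task_positions):
    """Placeholder for the dual-quaternion forward-kinematics residuals
    of the five-branch tree chain at the specified task positions."""
    return params - task_positions.mean(axis=0)   # dummy residual vector

def hybrid_solve(task_positions, dim=8, pop=40, generations=30, seed=0):
    """Genetic-style population search with Levenberg-Marquardt refinement."""
    rng = np.random.default_rng(seed)
    population = rng.uniform(-1.0, 1.0, size=(pop, dim))
    best, best_cost = None, np.inf
    for _ in range(generations):
        refined = []
        for cand in population:
            # Local Levenberg-Marquardt refinement of each candidate design.
            sol = least_squares(residuals, cand, args=(task_positions,),
                                method="lm")
            refined.append((sol.cost, sol.x))
            if sol.cost < best_cost:
                best_cost, best = sol.cost, sol.x
        # Keep the fittest half and mutate it to form the next generation.
        refined.sort(key=lambda c: c[0])
        parents = np.array([x for _, x in refined[:pop // 2]])
        children = parents + 0.1 * rng.standard_normal(parents.shape)
        population = np.vstack([parents, children])
    return best, best_cost

# Eight synthetic task positions in the 8-D design-parameter space.
best_params, cost = hybrid_solve(np.random.default_rng(1).normal(size=(8, 8)))
```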


1993 ◽  
Vol 2 (3) ◽  
pp. 203-220 ◽  
Author(s):  
Robert N. Rohling ◽  
John M. Hollerbach ◽  
Stephen C. Jacobsen

An optimized fingertip mapping (OFM) algorithm has been developed to transform human hand poses into robot hand poses. It has been implemented to teleoperate the Utah/MIT Dextrous Hand with a new hand master, the Utah Dextrous Hand Master. The keystone of the algorithm is the mapping of both the human fingertip positions and orientations to the robot fingers. Robot hand poses are generated by minimizing the errors between the desired human fingertip positions and orientations and the possible robot fingertip positions and orientations. Differences in the fingertip workspaces that arise from kinematic dissimilarities between the human and robot hands are accounted for by a priority-based mapping strategy. The OFM gives first priority to the human fingertip position goals and second priority to the orientation goals.
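
One way to read the priority scheme is as a weighted least-squares objective in which the position error dominates the orientation error. The sketch below applies this idea to a single finger; the placeholder forward kinematics, the weights, and the three-joint finger are assumptions rather than the published OFM formulation.

```python
import numpy as np
from scipy.optimize import minimize

W_POS, W_ORI = 1.0, 0.1        # position goals get much higher priority

def robot_fingertip(q):
    """Placeholder forward kinematics: position and unit pointing direction
    of one robot fingertip for joint angles q (three joints assumed)."""
    pos = np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                    np.sin(q[0]) + np.sin(q[0] + q[1]),
                    q[2]])
    ori = np.array([np.cos(q.sum()), np.sin(q.sum()), 0.0])
    return pos, ori

def mapping_cost(q, p_human, n_human):
    """Weighted position/orientation error between human goal and robot tip."""
    p_rob, n_rob = robot_fingertip(q)
    return (W_POS * np.sum((p_rob - p_human) ** 2)
            + W_ORI * np.sum((n_rob - n_human) ** 2))

# Desired human fingertip position and orientation (illustrative values).
p_h = np.array([1.2, 0.8, 0.1])
n_h = np.array([0.0, 1.0, 0.0])
q_robot = minimize(mapping_cost, x0=np.zeros(3), args=(p_h, n_h)).x
```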


2014 ◽  
Vol 136 (9) ◽  
Author(s):  
Lei Cui ◽  
Ugo Cupcic ◽  
Jian S. Dai

The complex kinematic structure of the human thumb makes it difficult to capture and control thumb motions. A further complication is that mapping the fingertip position alone leads to inadequate grasping postures for current robotic hands, many of which are equipped with tactile sensors on the volar side of the fingers. This paper aimed to use a data glove as the input device to teleoperate the thumb of a humanoid robotic hand. An experimental protocol was developed, with only minimal hardware involved, to compensate for the differences in kinematic structure between a robotic hand and a human hand. A nonlinear constrained-optimization formulation was proposed to map and calibrate the motion of a human thumb to that of a robotic thumb by minimizing the maximum error (a minimax formulation) in fingertip position, subject to the constraint that the surface normals of the thumb and index fingertips lie within a friction cone. The proposed approach could be extended to other teleoperation applications in which the master and slave devices differ in kinematic structure.
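
A hedged sketch of the optimization described above: minimize the largest fingertip-position error (via an auxiliary bound variable) while constraining the thumb-tip surface normal to lie inside a friction cone opposing the index-tip normal. The placeholder thumb kinematics, cone half-angle, and joint count are assumptions, not the paper's calibration model.

```python
import numpy as np
from scipy.optimize import minimize

CONE_HALF_ANGLE = np.deg2rad(20.0)   # assumed friction-cone half angle
N_JOINTS = 4                         # assumed robotic-thumb joint count

def thumb_tip(q):
    """Placeholder robot-thumb forward kinematics: tip position and surface
    normal for joint angles q."""
    pos = np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                    np.sin(q[0]) + np.sin(q[0] + q[1]),
                    q[2] + q[3]])
    normal = np.array([np.cos(q[0] + q[1]), np.sin(q[0] + q[1]), 0.0])
    return pos, normal

def solve_mapping(p_targets, n_index):
    """Minimax mapping: minimize the worst fingertip-position error subject
    to the thumb normal lying inside the friction cone about -n_index."""
    def objective(x):                # x = [q_1..q_4, t]; minimize the bound t
        return x[-1]
    def error_bounds(x):             # every position error must stay below t
        pos, _ = thumb_tip(x[:N_JOINTS])
        return x[-1] - np.linalg.norm(p_targets - pos, axis=1)
    def cone(x):                     # angle to -n_index stays within the cone
        _, normal = thumb_tip(x[:N_JOINTS])
        return np.dot(normal, -n_index) - np.cos(CONE_HALF_ANGLE)
    cons = [{"type": "ineq", "fun": error_bounds},
            {"type": "ineq", "fun": cone}]
    x0 = np.concatenate([np.zeros(N_JOINTS), [1.0]])
    res = minimize(objective, x0, constraints=cons, method="SLSQP")
    return res.x[:N_JOINTS]

# Thumb-tip targets from the data glove and the index-tip normal (illustrative).
targets = np.array([[1.5, 0.5, 0.2], [1.4, 0.6, 0.2]])
q_thumb = solve_mapping(targets, n_index=np.array([0.0, -1.0, 0.0]))
```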


2020 ◽  
Author(s):  
Gang Liu ◽  
Lu Wang ◽  
Jing Wang

Myoelectric prosthetic hands create the possibility for amputees to control their prostheses like native hands. However, user acceptance of the extant myoelectric prostheses is low. Unnatural control, lack of sufficient feedback, and insufficient functionality are cited as the primary reasons. Although many multiple-degree-of-freedom (DOF) prosthetic hands and tactile-sensitive electronic skins have recently been developed, no non-invasive myoelectric interface can decode both forces and motions for the five fingers independently and simultaneously. This paper proposes a myoelectric interface based on energy allocation and a fictitious-forces hypothesis, mimicking the natural neuromuscular system. The energy-based interface uses continuous “energy modes” at the level of the entire hand. Depending on the task, each energy mode can adaptively and simultaneously implement multiple hand motions and exert continuous forces at individual fingers. Moreover, a few learned energy modes can be extended to unlearned ones, highlighting the extensibility of the interface. We evaluate the proposed system through offline analysis and operational experiments on the expression of unlearned hand motions, the amount of finger energy, and real-time control. With active exploration, the participant became proficient at exerting just enough energy with each of the five fingers on “fragile” or “heavy” objects independently, proportionally, and simultaneously in real time. The main contribution of this paper is a bionic energy-motion model of the hand: decoding a few muscle-energy modes of the human hand (only ten modes in this paper) to map a large range of bionic-hand tasks.
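
The “energy mode” idea can be loosely illustrated as extracting a short-time energy feature from each sEMG channel and allocating it to the five fingers through a learned linear map, yielding continuous, simultaneous per-finger commands. The channel count, window length, and allocation matrix below are assumptions for illustration, not the authors' model.

```python
import numpy as np

N_CHANNELS, N_FINGERS = 8, 5
WINDOW = 200                     # samples per analysis window (assumed)

def channel_energy(emg_window):
    """Short-time energy (mean square) of each sEMG channel."""
    return np.mean(emg_window ** 2, axis=0)          # shape: (N_CHANNELS,)

# Hypothetical allocation matrix learned from a few energy modes; row f gives
# the contribution of each channel's energy to finger f.
W_alloc = np.abs(np.random.default_rng(0).normal(size=(N_FINGERS, N_CHANNELS)))
W_alloc /= W_alloc.sum(axis=1, keepdims=True)

def finger_energies(emg_window):
    """Allocate measured muscle energy across the five fingers, giving a
    continuous, proportional, and simultaneous per-finger command."""
    return W_alloc @ channel_energy(emg_window)

# One simulated 200-sample, 8-channel sEMG window.
emg = np.random.default_rng(1).normal(size=(WINDOW, N_CHANNELS))
print(finger_energies(emg))      # five continuous per-finger energy levels
```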

