Tracking and replication of hand movements by teleguided intelligent manipulator robot

Robotica ◽  
2014 ◽  
Vol 33 (1) ◽  
pp. 141-156 ◽  
Author(s):  
A. T. Hussain ◽  
S. Faiz Ahmed ◽  
D. Hazry

SUMMARY: In this paper, a new method is presented that allows an intelligent manipulator robotic system to track a human hand from a far distance in 3D space and estimate its orientation and position in real time, with the goal of ultimately using the algorithm with a robotic spherical-wrist system. In the proposed algorithm, several image processing and morphology techniques are used in conjunction with various mathematical formulas to calculate the hand position and orientation. The proposed technique was tested on a remote teleguided virtual robotic system. Experimental results show that the proposed method is robust in terms of the processing time required to estimate the orientation and position of the hand.
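The abstract does not give the exact formulas, but a common way to recover a hand's 2D position and in-plane orientation after morphological segmentation is via image moments of the binary mask. A minimal sketch under that assumption (the function name and the moment-based approach are illustrative, not the paper's stated method):

```python
import numpy as np

def hand_pose_from_mask(mask):
    """Estimate the 2D centroid and in-plane orientation of a segmented
    hand from a binary mask using image moments, a typical step after
    the kind of morphological segmentation the paper describes."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("empty mask")
    cx, cy = xs.mean(), ys.mean()            # centroid from first moments
    # central second moments of the blob
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # principal-axis angle of the blob, in radians
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta
```

For a roughly elongated hand blob, `theta` tracks the pointing direction of the hand; depth (the third coordinate) would come from stereo or a depth sensor.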

2021 ◽  
Vol 3 (1) ◽  
pp. 99-120
Author(s):  
Zainab Al-Qurashi ◽  
Brian D. Ziebart

To perform many critical manipulation tasks successfully, human-robot mimicking systems should not only accurately copy the position of a human hand, but its orientation as well. Deep learning methods trained from pairs of corresponding human and robot poses offer one promising approach for constructing a human-robot mapping to accomplish this. However, ignoring the spatial and temporal structure of this mapping makes learning it less effective. We propose two different hierarchical architectures that leverage the structural and temporal human-robot mapping. We partially separate the robotic manipulator's end-effector position and orientation while considering the mutual coupling effects between them. This divides the main problem of making the robot match the human's hand position and mimic its orientation accurately along an unknown trajectory into several sub-problems. We address these using different recurrent neural networks (RNNs) with Long Short-Term Memory (LSTM) that we combine and train hierarchically based on the coupling over the aspects of the robot that each controls. We evaluate our proposed architectures using a virtual reality system to track human table tennis motions and compare with single artificial neural network (ANN) and RNN models. We compare the benefits of using deep learning neural networks with and without our architectures and find that our proposed architectures yield smaller errors in position and orientation, along with increased flexibility in wrist movement. Also, we propose a hybrid approach to collecting the training dataset. The hybrid training dataset is collected by two approaches: when the robot mimics human motions (standard learn-from-demonstration, LfD) and when the human mimics robot motions (LfDr).
We evaluate the hybrid training dataset and show that a machine learning system trained on it performs better, with lower error and faster training, than one trained on a dataset collected with the standard LfD approach alone.
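The hierarchical coupling described above can be sketched with two recurrent branches, where the orientation branch conditions on the position branch's hidden state. This is a minimal NumPy sketch, assuming a plain single-layer LSTM cell and illustrative dimensions; it is not the paper's architecture, only the wiring idea:

```python
import numpy as np

def lstm_step(x, h, c, W):
    """One LSTM step; W maps the concatenation [x; h] to the four gates."""
    z = W @ np.concatenate([x, h])
    H = h.size
    i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    sig = lambda v: 1 / (1 + np.exp(-v))
    c = sig(f) * c + sig(i) * np.tanh(g)   # cell state update
    h = sig(o) * np.tanh(c)                # hidden state output
    return h, c

rng = np.random.default_rng(0)
D, H = 6, 8                                      # hand-feature dim, hidden size (illustrative)
W_pos = rng.normal(scale=0.1, size=(4*H, D + H))       # position branch
W_ori = rng.normal(scale=0.1, size=(4*H, D + H + H))   # orientation branch also sees position state

h_p = c_p = np.zeros(H)
h_o = c_o = np.zeros(H)
for t in range(5):                               # toy trajectory of hand features
    x = rng.normal(size=D)
    h_p, c_p = lstm_step(x, h_p, c_p, W_pos)
    # hierarchical coupling: orientation branch conditions on position state
    h_o, c_o = lstm_step(np.concatenate([x, h_p]), h_o, c_o, W_ori)
```

In practice each branch would feed a readout producing end-effector position and orientation targets, and the branches would be trained jointly so the coupling is learned rather than hand-set.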


Author(s):  
Paulius Sakalys ◽  
Loreta Savulioniene ◽  
Dainius Savulionis

The aim of the research is to determine and evaluate the repeatability of a robotic system that repeats the movements of the human hand, and to identify the displacement using digital infrared projection equipment, skeletal methods, and depth cameras. The article reviews and selects candidate skeletal methods and motion-recognition algorithms, and reviews and substantiates the physical equipment selected for the technical stage of the experiment. The plan of the experimental research stages, the research stand, systematized research results, conclusions, and usability suggestions are described.
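The abstract does not state which repeatability metric is used; a conventional choice for repeated robot end poses is the ISO 9283-style positional repeatability, the mean distance of the attained points to their barycenter plus three standard deviations of that distance. A small sketch under that assumption:

```python
import numpy as np

def repeatability(points):
    """Positional repeatability of repeated end poses in the style of
    ISO 9283: mean distance of attained points to their barycenter,
    plus three sample standard deviations of that distance."""
    p = np.asarray(points, dtype=float)
    bary = p.mean(axis=0)                        # barycenter of attained points
    d = np.linalg.norm(p - bary, axis=1)         # distance of each point to it
    return d.mean() + 3 * d.std(ddof=1)
```

Feeding in the end positions recovered by the depth camera for each repetition of the same hand movement gives a single millimetre-scale figure to compare setups.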


Author(s):  
Xiaolu Zeng ◽  
Alan Hedge ◽  
Francois Guimbretiere
Keyword(s):  

Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 137
Author(s):  
Larisa Dunai ◽  
Martin Novak ◽  
Carmen García Espert

The present paper describes the development of a prosthetic hand based on human hand anatomy. The hand phalanges are 3D-printed in polylactic acid (PLA). One of the main contributions is the investigation of the prosthetic hand joints; the proposed design makes it possible to create personalized joints that give the prosthetic hand a high level of movement by increasing the degrees of freedom of the fingers. Moreover, the wire-driven tendons produce a progressive grasping movement, with very low friction between the tendons and the phalanges. Another important point is the use of force-sensitive resistors (FSRs) for sensing hand touch pressure: they are used to stop the grasp, simulating the touch pressure of the fingers. Surface electromyogram (EMG) sensors allow the user to initiate the prosthetic hand's grasp, and their use may give the prosthetic hand the ability to classify hand movements. The practical results included in the paper prove the importance of the soft joints for object manipulation and for adapting to the object surface. Finally, the force-sensitive sensors allow the prosthesis to actuate more naturally by adding conditions and classifications to the electromyogram sensor.
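The EMG-start / FSR-stop behaviour described above amounts to a small state machine: an EMG activation above threshold begins closing the fingers, and an FSR reading above threshold (contact) stops them. A minimal sketch with hypothetical threshold values (the function name and thresholds are illustrative, not the paper's controller):

```python
def grasp_controller(emg, fsr, emg_on=0.5, fsr_stop=0.8):
    """Toy grasp state machine over paired EMG/FSR samples:
    EMG above emg_on starts closing; an FSR reading above
    fsr_stop stops the fingers (contact detected)."""
    closing = False
    log = []
    for e, f in zip(emg, fsr):
        if not closing and e > emg_on:
            closing = True                 # user initiates the grasp
        if closing and f > fsr_stop:
            closing = False                # contact pressure reached: stop
            log.append("stop")
        else:
            log.append("close" if closing else "idle")
    return log
```

Per-finger FSR thresholds would let each finger stop independently as it conforms to the object surface, which is what makes the soft joints useful.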


2018 ◽  
Author(s):  
Ahmed A. Mostafa ◽  
Bernard Marius ’t Hart ◽  
Denise Y.P. Henriques

Abstract: An accurate estimate of limb position is necessary for movement planning, before and after motor learning. Where we localize our unseen hand after a reach depends on felt hand position, or proprioception, but in studies and theories on motor adaptation this is quite often neglected in favour of predicted sensory consequences based on efference copies of motor commands. Both sources of information should contribute, so here we set out to further investigate how much of hand localization depends on proprioception and how much on predicted sensory consequences. We use a training paradigm combining robot-controlled hand movements with rotated visual feedback that eliminates the possibility of updating predicted sensory consequences ('exposure training'), but still recalibrates proprioception, as well as a classic training paradigm with self-generated movements in another set of participants. After each kind of training we measure participants' hand location estimates based on both efference-based predictions and afferent proprioceptive signals with self-generated hand movements ('active localization'), as well as based on proprioception only with robot-generated movements ('passive localization'). In the exposure training group, we find indistinguishable shifts in passive and active hand localization, but after classic training, active localization shifts more than passive, indicating a contribution from updated predicted sensory consequences. Changes in both open-loop reaches and hand localization are only slightly smaller after exposure training than after classic training, confirming that proprioception plays a large role in estimating limb position and in planning movements, even after adaptation. (data: https://doi.org/10.17605/osf.io/zfdth, preprint: https://doi.org/10.1101/384941)
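The logic of the decomposition can be written as simple arithmetic: each localization shift is a post-training minus pre-training estimate, and the difference between the active and passive shifts isolates the contribution of updated predicted sensory consequences. A sketch with illustrative numbers (these are not the paper's data):

```python
# Localization shifts (post-training minus pre-training), in degrees.
active_shift = 7.0    # self-generated movements: proprioception + predictions
passive_shift = 5.5   # robot-generated movements: proprioception only

# Passive shift reflects proprioceptive recalibration alone; whatever the
# active shift adds on top is attributed to updated predicted consequences.
prediction_contribution = active_shift - passive_shift
```

Under classic training this difference is positive; under exposure training it is near zero, since predicted consequences cannot be updated without self-generated movement.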


Author(s):  
Patricio Rivera ◽  
Edwin Valarezo ◽  
Tae-Seong Kim

Recognition of hand activities of daily living (hand-ADL) is useful in the areas of human-computer interaction, lifelogging, and healthcare applications. However, developing a reliable human activity recognition (HAR) system for hand-ADL with only a single wearable sensor is still a challenge, because hand movements are typically transient and sporadic. Approaches based on deep learning methodologies that reduce noise and extract relevant features directly from raw data are becoming more promising for implementing such HAR systems. In this work, we present an ARMA-based deep autoencoder and a deep recurrent neural network (RNN) using Gated Recurrent Units (GRUs) for recognition of hand-ADL using signals from a single IMU wearable sensor. The integrated ARMA-based autoencoder denoises raw time-series signals of hand activities, so that a better representation of human hand activities can be obtained. Then, our deep RNN-GRU recognizes seven hand-ADLs based upon the output of the autoencoder: namely, Open Door, Close Door, Open Refrigerator, Close Refrigerator, Open Drawer, Close Drawer, and Drink from Cup. The proposed methodology using RNN-GRU with the autoencoder achieves a mean accuracy of 84.94% and an F1-score of 83.05%, outperforming conventional classifiers such as RNN-LSTM, BRNN-LSTM, CNN, and hybrid RNNs by 4-10% in both accuracy and F1-score. The experimental results also showed that the use of the autoencoder improves both the accuracy and F1-score of each conventional classifier: by 12.8% for RNN-LSTM, 4.37% for BRNN-LSTM, 15.45% for CNN, 14.6% for the hybrid RNN, and 12.4% for the proposed RNN-GRU.
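The classifier stage can be sketched as a plain GRU cell run over a window of (denoised) IMU samples, with a linear readout over the seven hand-ADL classes. This is a minimal NumPy sketch with illustrative sizes and random weights, not the paper's trained network:

```python
import numpy as np

def gru_step(x, h, Wz, Wr, Wh):
    """One GRU step on input x and hidden state h; each W maps [x; h]."""
    sig = lambda v: 1 / (1 + np.exp(-v))
    xh = np.concatenate([x, h])
    z = sig(Wz @ xh)                                  # update gate
    r = sig(Wr @ xh)                                  # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h]))  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(1)
D, H = 6, 16                       # IMU feature dim, hidden size (illustrative)
Wz, Wr, Wh = (rng.normal(scale=0.1, size=(H, D + H)) for _ in range(3))

h = np.zeros(H)
for x in rng.normal(size=(30, D)):  # toy window of denoised IMU samples
    h = gru_step(x, h, Wz, Wr, Wh)

scores = rng.normal(scale=0.1, size=(7, H)) @ h       # 7 hand-ADL classes
pred = int(np.argmax(scores))
```

In the paper's pipeline, the input window would be the autoencoder's denoised reconstruction of the raw IMU stream rather than raw samples.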


Author(s):  
C. Raoufi ◽  
A. A. Goldenberg ◽  
W. Kucharczyk ◽  
H. Hadian

In this paper, the inverse kinematics and control paradigm of a novel tele-robotic system for closed-bore MRI-guided brain biopsy is presented. Other candidate neurosurgical procedures enabled by this system include thermal ablation, radiofrequency ablation, deep-brain stimulator placement, and targeted drug delivery. The control architecture is also reported. The design paradigm is fundamentally based on a modular configuration of the slave manipulator, which performs tasks inside the MR scanner. The tele-robotic system is a master-slave system. The slave manipulator consists of three units: (i) the navigation module; (ii) the biopsy module; and (iii) the surgical arm. The navigation and biopsy modules were designed to undertake the alignment and the advancement of the surgical needle, respectively. The biopsy needle is held and advanced by the biopsy module, which is attached to the navigation module; all three units are held by the surgical arm. The main challenge in the control of the biopsy needle using the proposed navigation module is to move a surgical tool from its initial position and orientation to a final position and orientation. In a typical brain biopsy operation, the desired task is to align the biopsy needle with a target, knowing the positions of both the target in the patient's skull and the entry point on the surface of the skull. In this paper, the mechanical design, control paradigms, and inverse kinematics model of the robot are reported.
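The alignment task stated above, pointing the needle from a known entry point toward a known target, reduces to computing the insertion direction and expressing it in the navigation module's joint angles. A minimal geometric sketch, with a hypothetical yaw/pitch angle convention that stands in for the actual inverse kinematics of the module:

```python
import numpy as np

def align_needle(entry, target):
    """Given the entry point on the skull surface and the target inside
    the skull (both in scanner coordinates), return the unit insertion
    direction, its yaw/pitch angles (hypothetical convention: yaw about
    the z-axis, pitch from the xy-plane), and the insertion depth."""
    entry = np.asarray(entry, dtype=float)
    target = np.asarray(target, dtype=float)
    v = target - entry
    depth = np.linalg.norm(v)            # how far the biopsy module must advance
    u = v / depth                        # unit insertion direction
    yaw = np.arctan2(u[1], u[0])
    pitch = np.arcsin(np.clip(u[2], -1.0, 1.0))
    return u, yaw, pitch, depth
```

The real system maps this direction through the navigation module's kinematic chain; the sketch only shows the geometry that the inverse kinematics must realize.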


Author(s):  
Y. Tanaka ◽  
H. Tsubota ◽  
Y. Takeda ◽  
T. Tsuji
Keyword(s):  

2018 ◽  
Vol 22 ◽  
pp. 519-526
Author(s):  
Catalin Constantin Moldovan ◽  
Ionel Staretu
Keyword(s):  
