Adaptive Learning of Hand Movement in Human Demonstration for Robot Action

2017 ◽  
Vol 29 (5) ◽  
pp. 919-927 ◽  
Author(s):  
Ngoc Hung Pham ◽  
Takashi Yoshimi

This paper describes a process for adaptive learning of hand movements from human demonstration for robot manipulation actions using the Dynamic Movement Primitives (DMPs) framework. The process includes 1) tracking the hand movement in the human demonstration, 2) segmenting the hand movement, and 3) adaptive learning with the DMPs framework. We implement an extended DMPs model with a modified formulation for the hand movement data observed from the human demonstration, including the hand's 3D position, orientation, and finger distance. We evaluate the movements generated by the DMPs model, which are either reproduced without changes or adapted to a change of the movement's goal. The adapted movement data are used to control a robot arm, via the spatial position and orientation of its end-effector, with a parallel gripper.
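
For readers unfamiliar with the underlying formulation, the following is a minimal sketch of the standard one-dimensional discrete DMP that the paper extends to hand position, orientation, and finger distance. The gains, basis-function settings, and class layout below are illustrative assumptions, not the authors' values.

```python
# Minimal sketch of a standard discrete DMP (illustrative parameters only).
import numpy as np

class DMP1D:
    def __init__(self, n_basis=20, alpha_z=25.0, alpha_x=4.0):
        self.alpha_z, self.beta_z = alpha_z, alpha_z / 4.0   # critically damped
        self.alpha_x = alpha_x                                # canonical decay rate
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centres
        self.h = n_basis / self.c                             # heuristic basis widths
        self.w = np.zeros(n_basis)                            # weights fit from a demo

    def forcing(self, x, y0, g):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return x * (g - y0) * (psi @ self.w) / (psi.sum() + 1e-10)

    def rollout(self, y0, g, tau=1.0, dt=0.01, T=1.0):
        y, z, x, traj = y0, 0.0, 1.0, []
        for _ in range(int(T / dt)):
            f = self.forcing(x, y0, g)
            z += (self.alpha_z * (self.beta_z * (g - y) - z) + f) / tau * dt
            y += (z / tau) * dt
            x += (-self.alpha_x * x / tau) * dt   # canonical system phase
            traj.append(y)
        return np.array(traj)
```

Changing the goal `g` at rollout time is what lets the learned movement adapt to a new target, which is the property the paper exploits for the adapted robot-arm trajectories.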

Author(s):  
Ghananeel Rotithor ◽  
Ashwin P. Dani

Abstract Combining perception-based feedback control with learning-based open-loop motion generation for the robot's end-effector is an attractive solution for many robotic manufacturing tasks. For instance, when performing a peg-in-hole or insertion task in which the hole or the recipient part is not visible to the eye-in-hand camera, an open-loop, learning-based motion-primitive method can be used to generate the end-effector path. Once the recipient part is in the field of view (FOV), visual servo control can take over the motion of the robot. Inspired by such applications, this paper presents a control scheme that switches between Dynamic Movement Primitives (DMPs) and Image-Based Visual Servo (IBVS) control, combining end-effector control with perception-based feedback control. A simulation in which the controller switches between DMP and IBVS verifies the performance of the proposed control methodology.
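
The switching logic itself is simple to express. The sketch below is a hedged illustration of the idea described above, not the paper's implementation; `dmp`, `camera`, and `robot` are hypothetical stand-ins for the DMP generator, the eye-in-hand vision pipeline, and the manipulator interface.

```python
# Hedged sketch: run the open-loop DMP until the target enters the FOV,
# then hand control to image-based visual servoing (IBVS).
def hybrid_controller(robot, dmp, camera, dt=0.01, lam=0.5):
    mode = "DMP"
    while not robot.task_done():
        if mode == "DMP" and camera.target_in_fov():
            mode = "IBVS"                       # switch once features are visible
        if mode == "DMP":
            v = dmp.step(dt)                    # open-loop motion-primitive velocity
        else:
            # classic IBVS law: v = -lambda * pinv(L) * (s - s*)
            e = camera.feature_error()          # current minus desired features
            v = -lam * camera.interaction_matrix_pinv() @ e
        robot.apply_end_effector_velocity(v)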


2018 ◽  
Vol 11 (6) ◽  
Author(s):  
Damla Topalli ◽  
Nergiz Ercil Cagiltay

Endoscopic surgery requires specific skills, such as eye-hand coordination, to be developed. Current education programs face difficulties in providing appropriate skill-improvement and assessment methods in this field. This study aims to propose objective metrics for hand-movement skills and to assess eye-hand coordination. An experimental study is conducted with 15 surgical residents to test the newly proposed measures. Two computer-based, two-handed endoscopic surgery practice scenarios are developed in a simulation environment to gather the participants' eye-gaze data with an eye tracker, together with the related hand-movement data through haptic interfaces. Additionally, the participants' eye-hand coordination skills are analyzed. The results indicate higher correlations between the intermediates' eye and hand movements compared with the novices'. An increase in the intermediates' visual concentration leads to smoother hand movements, whereas the novices' hand movements tend to remain at a standstill. After the first round of practice, all participants' eye-hand coordination skills improve on the specific task targeted in this study. These results suggest that the proposed metrics can provide additional insight into trainees' eye-hand coordination skills and help instructional system designers better address training requirements.
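
The abstract does not spell out the metrics, but the kind of measures it describes (gaze-hand correlation, hand-movement smoothness) can be sketched as follows. This is an illustrative assumption of one plausible computation over synchronized, equal-length gaze and haptic trajectories, not the study's actual definitions.

```python
# Illustrative eye-hand coordination measures (assumed definitions).
import numpy as np

def eye_hand_correlation(gaze_xy, hand_xy):
    """Mean Pearson correlation of gaze vs. hand across the x and y axes."""
    corrs = [np.corrcoef(gaze_xy[:, i], hand_xy[:, i])[0, 1] for i in range(2)]
    return float(np.mean(corrs))

def hand_smoothness(hand_xy, dt):
    """Negative mean squared jerk: values closer to zero mean smoother motion."""
    jerk = np.diff(hand_xy, n=3, axis=0) / dt ** 3
    return -float(np.mean(jerk ** 2))
```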


Author(s):  
Weiyong Si ◽  
Ning Wang ◽  
Chenguang Yang

Abstract In this paper, composite dynamic movement primitives (DMPs) based on radial basis function neural networks (RBFNNs) are investigated for robot skill learning from human demonstrations. The composite DMPs encode position and orientation manipulation skills simultaneously for human-to-robot skill transfer. Since robot manipulators are expected to perform tasks in unstructured and uncertain environments, they require the adaptive ability to adjust their behaviour to new situations and environments. Because DMPs can adapt to uncertainties and perturbations and support spatial and temporal scaling, they have been successfully employed for various tasks, such as trajectory planning and obstacle avoidance. However, existing skill models mainly focus on modelling position or orientation separately, whereas in practice tasks commonly constrain position and orientation simultaneously. Besides, the generalisation of DMPs-based skill models still struggles with dynamic tasks, e.g., reaching a moving target and obstacle avoidance. In this paper, we propose a composite DMPs-based framework that represents position and orientation simultaneously for robot skill acquisition, and a neural network technique is used to train the skill model. The effectiveness of the proposed approach is validated by simulations and experiments.
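
The RBFNN component reduces to fitting the DMP forcing term with radial basis features, which is a linear least-squares problem once the basis is fixed. The sketch below shows that step under assumed settings; it is in the spirit of the composite DMPs above rather than the authors' code.

```python
# Sketch: fit an RBF network to demonstrated forcing-term values (assumptions).
import numpy as np

def rbf_features(x, centers, widths):
    """Normalized Gaussian basis activations for phase values x (shape (N,))."""
    psi = np.exp(-widths * (x[:, None] - centers[None, :]) ** 2)
    return psi / psi.sum(axis=1, keepdims=True)

def train_forcing_rbfnn(x_demo, f_demo, n_basis=30):
    """x_demo: (N,) phase samples; f_demo: (N,) or (N, dof) target forcing."""
    centers = np.linspace(x_demo.min(), x_demo.max(), n_basis)
    widths = np.full(n_basis, n_basis / (x_demo.max() - x_demo.min()) ** 2)
    Phi = rbf_features(x_demo, centers, widths)
    w, *_ = np.linalg.lstsq(Phi, f_demo, rcond=None)  # one weight vector per DoF
    return centers, widths, w
```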


2011 ◽  
Vol 59 (11) ◽  
pp. 910-922 ◽  
Author(s):  
Minija Tamosiunaite ◽  
Bojan Nemec ◽  
Aleš Ude ◽  
Florentin Wörgötter

2021 ◽  
Author(s):  
Zhiwei Liao ◽  
Fei Zhao ◽  
Gedong Jiang ◽  
Xuesong Mei

Abstract Dynamic Movement Primitives (DMPs) have been widely studied as a robust and efficient framework for robot learning from demonstration. The classical DMPs framework mainly focuses on movement learning in Cartesian or joint space and cannot properly represent end-effector orientation. In this paper, we present an Extended DMPs framework (EDMPs) in both Cartesian space and Riemannian manifolds for learning and generalizing quaternion-based orientations. A Gaussian Mixture Model and Gaussian Mixture Regression are adopted in the initialization phase of the EDMPs to handle multiple demonstrations and obtain their mean and covariance. Additionally, evaluation indicators, including reachability and similarity, are defined to characterize the learning and generalization abilities of the EDMPs. Finally, the quaternion-based orientations are successfully transferred from human to robot, and a real-world experiment is conducted to verify the effectiveness of the proposed method. The experimental results show that the presented approach can learn and generalize multi-space parameters from multiple demonstrations.
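
Orientation DMPs on the quaternion manifold hinge on the logarithmic and exponential maps between unit quaternions and the tangent space. The following is a generic sketch of those maps, assuming (w, x, y, z) unit-quaternion arrays; it illustrates the manifold machinery the EDMPs build on, not the authors' implementation.

```python
# Generic quaternion log/exp maps used by manifold-aware orientation DMPs.
import numpy as np

def quat_log(q):
    """S^3 -> R^3 tangent vector; q = (w, x, y, z), unit norm."""
    w, v = q[0], q[1:]
    nv = np.linalg.norm(v)
    return np.zeros(3) if nv < 1e-12 else np.arccos(np.clip(w, -1, 1)) * v / nv

def quat_exp(r):
    """R^3 -> S^3; inverse of quat_log on the principal branch."""
    nr = np.linalg.norm(r)
    if nr < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate([[np.cos(nr)], np.sin(nr) * r / nr])
```

In this setting the orientation error 2·log(g ⊗ q̄) plays the role of the goal distance (g − y) in the position DMP, so the same second-order attractor dynamics can run in the tangent space.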


2021 ◽  
Vol 11 (23) ◽  
pp. 11184
Author(s):  
Ang Li ◽  
Zhenze Liu ◽  
Wenrui Wang ◽  
Mingchao Zhu ◽  
Yanhui Li ◽  
...  

Dynamic movement primitives (DMPs) are a robust framework for movement generation from demonstrations. The framework can be extended with a perturbing term to achieve obstacle avoidance without sacrificing stability; the additional term is usually constructed from potential functions. Although different potentials have been adopted to improve obstacle-avoidance performance, the profiles of the potentials are rarely incorporated into a reinforcement learning (RL) framework. In this contribution, we present an RL-based method that learns not only the profiles of the potentials but also the shape parameters of a motion. The algorithm employed is PI2 (Policy Improvement with Path Integrals), a model-free, sampling-based learning method. Using PI2, the potential profiles and the DMP parameters are learned simultaneously, so obstacle avoidance can be optimized while completing specified tasks. We validate the presented method in simulations and in experiments with a redundant robot arm.
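
The core PI2 update is compact: perturb the parameter vector, roll out, and average the perturbations weighted by the exponentiated cost. Below is a minimal sketch of that step over a joint parameter vector (DMP shape weights concatenated with potential-profile parameters); `rollout_cost`, the noise level, and the temperature are assumed stand-ins for the paper's task and obstacle-avoidance cost.

```python
# Minimal PI2 parameter update (illustrative; rollout_cost is hypothetical).
import numpy as np

def pi2_update(theta, rollout_cost, n_samples=32, sigma=0.05, h=10.0):
    eps = sigma * np.random.randn(n_samples, theta.size)     # exploration noise
    costs = np.array([rollout_cost(theta + e) for e in eps]) # one rollout per sample
    s = np.exp(-h * (costs - costs.min()) / (costs.max() - costs.min() + 1e-10))
    P = s / s.sum()                                          # path-integral weights
    return theta + P @ eps                                   # cost-weighted average
```

Iterating this update on the concatenated vector is what lets the shape of the motion and the obstacle-avoidance potential be optimized simultaneously.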


2021 ◽  
Vol 7 (2) ◽  
pp. 15
Author(s):  
Tomohiro Shimizu ◽  
Ryo Hachiuma ◽  
Hiroki Kajita ◽  
Yoshifumi Takatsume ◽  
Hideo Saito

Detecting surgical tools is an essential task for the analysis and evaluation of surgical videos. However, in open surgery such as plastic surgery, detection is difficult because there are surgical tools with similar shapes, such as scissors and needle holders. Unlike in endoscopic surgery, the tips of the tools are often hidden in the operating field and are not captured clearly due to low camera resolution, whereas the movements of the tools and hands can be captured. Because the different uses of each tool require different hand movements, hand-movement data can be used to distinguish the two types of tools. We combined three modules, for localization, selection, and classification, to detect the two tools. In the localization module, we employed Faster R-CNN to detect surgical tools and target hands, and in the classification module, we extracted hand-movement information by combining ResNet-18 and an LSTM to classify the two tools. We created a dataset in which seven different types of open surgery were recorded, and we provide annotations for surgical tool detection. Our experiments show that our approach successfully detected the two different tools and outperformed two baseline methods.
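
A classification module of the kind described, ResNet-18 frame features fed through an LSTM, can be sketched as follows in PyTorch. Layer sizes and the overall wiring are our assumptions; this illustrates the architecture family, not the authors' exact network.

```python
# Sketch: per-frame ResNet-18 features -> LSTM -> two-way tool classifier.
import torch
import torch.nn as nn
import torchvision.models as models

class ToolClassifier(nn.Module):
    def __init__(self, hidden=128, n_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()            # expose 512-d frame features
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                  # clips: (B, T, 3, H, W)
        B, T = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(B, T, -1)
        _, (h, _) = self.lstm(feats)           # summarize motion over time
        return self.head(h[-1])                # logits: scissors vs. needle holder
```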


2020 ◽  
Vol 132 (5) ◽  
pp. 1358-1366
Author(s):  
Chao-Hung Kuo ◽  
Timothy M. Blakely ◽  
Jeremiah D. Wander ◽  
Devapratim Sarma ◽  
Jing Wu ◽  
...  

OBJECTIVE The activation of the sensorimotor cortex as measured by electrocorticographic (ECoG) signals has been correlated with contralateral hand movements in humans, as precisely as the level of individual digits. However, the relationship between individual and multiple synergistic finger movements and the neural signal as detected by ECoG has not been fully explored. The authors used intraoperative high-resolution micro-ECoG (µECoG) on the sensorimotor cortex to link neural signals to finger movements across several context-specific motor tasks.

METHODS Three neurosurgical patients with cortical lesions over eloquent regions participated. During awake craniotomy, a sensorimotor cortex area of hand movement was localized by high-frequency responses measured by an 8 × 8 µECoG grid with 3-mm interelectrode spacing. Patients performed a flexion movement of the thumb or index finger, or a pinch movement of both, based on a visual cue. High-gamma (HG; 70–230 Hz) filtered µECoG was used to identify dominant electrodes associated with thumb and index movement. Hand movements were recorded by a dataglove simultaneously with the µECoG recording.

RESULTS In all 3 patients, the electrodes controlling thumb and index finger movements were identifiable approximately 3–6 mm apart by the HG-filtered µECoG signal. For the HG power of cortical activation measured with µECoG, the thumb and index signals in the pinch movement were similar to those observed during thumb-only and index-only movement, respectively (all p > 0.05). Index finger movements, measured by the dataglove joint angles, were similar in both the index-only and pinch movements (p > 0.05). However, despite similar activation across conditions, markedly decreased thumb movement was observed in pinch relative to independent thumb-only movement (all p < 0.05).

CONCLUSIONS HG-filtered µECoG signals effectively identify dominant regions associated with thumb and index finger movement. For pinch, the µECoG signal comprises a combination of the signals from the individual thumb and index movements. However, while the relationship between the index finger joint angle and the HG-filtered signal remains consistent between conditions, there is no fixed relationship for thumb movement. Although the HG-filtered µECoG signal is similar in both the thumb-only and pinch conditions, the actual thumb movement is markedly smaller in the pinch condition than in the thumb-only condition. This implies a nonlinear relationship between the cortical signal and the motor output for some, but importantly not all, movement types. This analysis provides insight into the tuning of the motor cortex toward specific types of motor behaviors.
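
The high-gamma extraction at the heart of this analysis is a standard band-pass-and-envelope computation. The sketch below shows one common way to do it with SciPy; the filter order, zero-phase filtering, and Hilbert-envelope choice are our assumptions, not necessarily the authors' pipeline.

```python
# Illustrative high-gamma (70-230 Hz) power extraction for µECoG channels.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_power(ecog, fs, band=(70.0, 230.0), order=4):
    """ecog: (n_channels, n_samples); fs must exceed 2 * band[1] (Nyquist)."""
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    hg = filtfilt(b, a, ecog, axis=-1)          # zero-phase band-pass
    return np.abs(hilbert(hg, axis=-1)) ** 2    # instantaneous HG power envelope
```

Averaging this power per electrode and per cue condition is what identifies the dominant thumb and index electrodes described in the results.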

