Human-Hand Posture Classification for Robotic Teleoperation Using Wearable Sensor

Author(s): Pratyush Pratiim Devnath, Ananda Sankar Kundu, Oishee Mazumder, Subhasis Bhaumik
Actuators, 2022, Vol 11 (1), pp. 24
Author(s): Guan-Yang Liu, Yi Wang, Chao Huang, Chen Guan, Dong-Tao Ma, ...

The goal of haptic feedback in robotic teleoperation is to let users accurately feel the interaction force measured at the slave side and thereby understand what is happening in the slave environment. Feedback force accuracy, i.e., the error between the feedback force actually felt by the user at the master side and the interaction force measured at the slave side, is the key performance indicator for haptic display in robotic teleoperation. In this paper, we evaluate haptic feedback accuracy in robotic teleoperation experimentally. A dedicated interface, iHandle, and two haptic devices designed for robotic teleoperation, iGrasp-T and iGrasp-R, are developed for the evaluation. The iHandle integrates a high-performance force sensor and a micro attitude and heading reference system, which can be used to characterize human upper-limb motor abilities such as posture maintenance and force application. When a user is asked to grasp the iHandle and hold a fixed position and posture, the fluctuation of hand posture is measured to be between 2 and 8 degrees. Based on these results, human hand tremor, sensed by the haptic device as input noise, is found to be a major source of noise in the output force when a spring-damper model is used to render the feedback force. Haptic rendering algorithms should therefore be independent of hand motion information, so that tremor is not injected into the haptic control loop. Moreover, the iHandle can be fixed at the end effector of the haptic devices iGrasp-T or iGrasp-R to measure the output force/torque delivered from the device to the user. Experimental results show that the output force error of haptic device iGrasp-T is approximately 0.92 N, and that using the force sensor in the iHandle for compensation reduces this error to 0.1 N. Using a force sensor as the feedback link to form a closed-loop feedback force control system is an effective way to improve feedback force accuracy and guarantee high-fidelity force display at the master side in robotic teleoperation.
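The tremor-to-noise mechanism can be made concrete with a small simulation. The sketch below is illustrative only: the stiffness, damping, tremor amplitude, and integral gain are assumed values, not parameters from the paper. It renders force open-loop with a spring-damper model, showing how differentiating a trembling hand position amplifies high-frequency force noise, and then contrasts this with a closed-loop scheme that uses a force sensor (in the spirit of the iHandle) as the feedback link.

```python
import numpy as np

rng = np.random.default_rng(0)
K, B, dt = 500.0, 5.0, 0.001           # assumed stiffness (N/m), damping (N*s/m), 1 kHz loop

# --- Open-loop spring-damper rendering: F = K*x + B*dx/dt ---
# Hand position = nominal 1 cm penetration + small tremor.
x = 0.01 + 0.0005 * rng.standard_normal(2000)
v = np.diff(x) / dt                     # differentiating tremor amplifies it by 1/dt
f_open = K * x[1:] + B * v
print(f"open-loop force noise (std): {f_open.std():.2f} N")

# --- Closed-loop rendering: force sensor as the feedback link ---
# Integrate the error between the slave-side measured force and the
# force actually delivered to the hand; hand motion never enters the loop.
f_slave = 5.0                           # interaction force measured at the slave (N)
f_cmd, ki = 0.0, 0.2                    # commanded force and assumed integral gain
for _ in range(2000):
    f_hand = f_cmd + 0.01 * rng.standard_normal()   # force-sensor reading at the hand
    f_cmd += ki * (f_slave - f_hand)
print(f"closed-loop delivered force: {f_cmd:.2f} N (target 5.00 N)")
```

Because the damping term multiplies hand velocity, sub-millimetre tremor divided by a millisecond time step can dominate the rendered force; the closed-loop version tracks the slave-side force regardless of hand motion, which is the argument the abstract makes.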


Author(s): Patricio Rivera, Edwin Valarezo, Tae-Seong Kim

Recognition of hand activities of daily living (hand-ADL) is useful in human–computer interaction, lifelogging, and healthcare applications. However, developing a reliable human activity recognition (HAR) system for hand-ADL with only a single wearable sensor remains a challenge, because hand movements are typically transient and sporadic. Deep learning approaches that reduce noise and extract relevant features directly from raw data are becoming increasingly promising for such HAR systems. In this work, we present an ARMA-based deep autoencoder and a deep recurrent neural network (RNN) using Gated Recurrent Units (GRU) for recognizing hand-ADL from the signals of a single wearable IMU. The ARMA-based autoencoder denoises the raw time-series signals of hand activities so that a better representation of each activity can be obtained. Our deep RNN-GRU then recognizes seven hand-ADL from the output of the autoencoder: Open Door, Close Door, Open Refrigerator, Close Refrigerator, Open Drawer, Close Drawer, and Drink from Cup. The proposed RNN-GRU with autoencoder achieves a mean accuracy of 84.94% and an F1-score of 83.05%, outperforming conventional classifiers such as RNN-LSTM, BRNN-LSTM, CNN, and Hybrid-RNNs by 4–10% in both accuracy and F1-score. The experimental results also show that the autoencoder improves both the accuracy and F1-score of each classifier: by 12.8% for RNN-LSTM, 4.37% for BRNN-LSTM, 15.45% for CNN, 14.6% for Hybrid-RNN, and 12.4% for the proposed RNN-GRU.
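The two-stage pipeline the abstract describes, denoise first and then classify the sequence, can be sketched as follows. This is a minimal PyTorch stand-in, not the authors' implementation: a generic denoising autoencoder substitutes for their ARMA-based design, and the layer sizes, channel count, and window length are all assumptions.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Stand-in for the ARMA-based deep autoencoder: maps a noisy IMU
    window to a denoised reconstruction. Layer sizes are hypothetical."""
    def __init__(self, n_channels=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_channels, 32), nn.ReLU(),
                                     nn.Linear(32, 16), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                                     nn.Linear(32, n_channels))
    def forward(self, x):               # x: (batch, time, channels)
        return self.decoder(self.encoder(x))

class GRUClassifier(nn.Module):
    """Deep RNN-GRU over the denoised sequence; the final hidden state
    feeds a linear head over the seven hand-ADL classes."""
    def __init__(self, n_channels=6, hidden=64, n_classes=7):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)
    def forward(self, x):
        _, h = self.gru(x)              # h: (num_layers, batch, hidden)
        return self.head(h[-1])

# Usage: denoise a 128-sample window from a single IMU, then classify.
ae, clf = DenoisingAutoencoder(), GRUClassifier()
window = torch.randn(1, 128, 6)         # (batch, time, accel+gyro channels)
logits = clf(ae(window))
print(logits.shape)                     # torch.Size([1, 7])
```

In training, the autoencoder would be fit with a reconstruction loss and the GRU with cross-entropy over the seven classes, either separately or end to end; the paper's reported gains come from feeding the classifier the denoised rather than the raw signal.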


2017, Vol 65, pp. 1-10
Author(s): Weizhi Nai, Yue Liu, David Rempel, Yongtian Wang
