Tactile Grasp Stability Classification Based on Graph Convolutional Networks

Author(s): Tingting Mi, Dashun Que, Senlin Fang, Zhenning Zhou, Chaoxiang Ye, et al.
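The listing gives only the title and authors for this entry. As a rough illustration of the technique the title names, the sketch below implements a minimal graph convolutional classifier over a taxel (tactile-element) graph in PyTorch; the 4x4 grid adjacency, the 3-dimensional per-taxel features, the layer widths, and the binary stable/unstable readout are all illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

def normalize_adjacency(adj):
    """Symmetric normalization with self-loops: A_hat = D^-1/2 (A + I) D^-1/2."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class GCNLayer(nn.Module):
    """One Kipf-Welling graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):
        return torch.relu(self.linear(a_hat @ h))

class GraspStabilityGCN(nn.Module):
    """Per-taxel features -> two GCN layers -> mean pooling -> stable/slip logits."""
    def __init__(self, in_dim=3, hidden=64, n_classes=2):
        super().__init__()
        self.gcn1 = GCNLayer(in_dim, hidden)
        self.gcn2 = GCNLayer(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, h, a_hat):
        h = self.gcn2(self.gcn1(h, a_hat), a_hat)
        return self.head(h.mean(dim=0))  # graph-level readout

# Hypothetical usage: a 4x4 tactile array wired as a 4-connected grid graph.
n = 16
adj = torch.zeros(n, n)
for i in range(4):
    for j in range(4):
        k = 4 * i + j
        if j < 3: adj[k, k + 1] = adj[k + 1, k] = 1  # horizontal neighbor
        if i < 3: adj[k, k + 4] = adj[k + 4, k] = 1  # vertical neighbor
logits = GraspStabilityGCN()(torch.randn(n, 3), normalize_adjacency(adj))
```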
2000, Vol. 84(6), pp. 2984-2997
Author(s): Per Jenmalm, Seth Dahlstedt, Roland S. Johansson

Most objects that we manipulate have curved surfaces. We have analyzed how subjects, during a prototypical manipulatory task, use visual and tactile sensory information to adapt fingertip actions to changes in object curvature. Subjects grasped an elongated object at one end using a precision grip and lifted it while instructed to keep it level. The principal load of the grasp was tangential torque due to the location of the center of mass of the object in relation to the horizontal grip axis joining the centers of the opposing grasp surfaces. The curvature strongly influenced the grip forces required to prevent rotational slips. Likewise, the curvature influenced the rotational yield of the grasp that developed under the tangential torque load due to the viscoelastic properties of the fingertip pulps. Subjects scaled the grip forces parametrically with object curvature to maintain grasp stability. Moreover, in a curvature-dependent manner, subjects twisted the grasp around the grip axis by a radial flexion of the wrist to keep the desired object orientation despite the rotational yield. To adapt these fingertip actions to object curvature, subjects could use both vision and tactile sensibility integrated with predictive control. During combined blindfolding and digital anesthesia, however, the motor output failed to predict the consequences of the prevailing curvature. Subjects used vision to identify the curvature for efficient feedforward retrieval of grip force requirements before executing the motor commands. Digital anesthesia caused little impairment of grip force control when subjects had vision available, but the adaptation of the twist became delayed. Visual cues about the form of the grasp surface obtained before contact were used to scale the grip force, whereas the scaling of the twist depended on visual cues related to object movement. Thus, subjects apparently relied on different visuomotor mechanisms for adaptation of grip force and grasp kinematics. In contrast, blindfolded subjects used tactile cues about the prevailing curvature obtained after contact with the object for feedforward adaptation of both grip force and twist. We conclude that humans use both vision and tactile sensibility for feedforward parametric adaptation of grip forces and grasp kinematics to object curvature. Normal control of the twist action, however, requires digital afferent input, and different visuomotor mechanisms support the control of the grasp twist and the grip force. This differential use of vision may have a bearing on the two-stream model of human visual processing.
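The load analysis in this abstract follows from simple statics: the tangential torque about the grip axis is tau = m * g * d, where d is the horizontal offset of the object's center of mass from the grip axis. The snippet below works through a toy version of the rotational-slip condition; the Coulomb-style torque limit mu * F * r_eff, and the idea that the effective contact radius r_eff shrinks on more curved grasp surfaces, are illustrative assumptions standing in for the authors' actual contact model.

```python
G = 9.81  # gravitational acceleration, m/s^2

def tangential_torque(mass_kg, com_offset_m):
    """Torque about the horizontal grip axis: tau = m * g * d."""
    return mass_kg * G * com_offset_m

def min_grip_force(torque_nm, mu, r_eff_m):
    """Toy slip condition: the grasp resists torque up to roughly
    mu * F * r_eff, so the minimum grip force is F = tau / (mu * r_eff)."""
    return torque_nm / (mu * r_eff_m)

tau = tangential_torque(0.3, 0.05)   # 300 g object, CoM 5 cm from the grip axis
for r_eff in (0.008, 0.005, 0.003):  # flatter -> more curved grasp surface
    print(f"r_eff = {r_eff * 1e3:.0f} mm -> F_min = {min_grip_force(tau, 0.8, r_eff):.1f} N")
```

Under this toy model, halving the effective contact radius doubles the grip force needed to prevent rotational slip, which mirrors the abstract's observation that subjects scale grip force parametrically with curvature.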


Author(s): Hao Chen, Yue Xu, Feiran Huang, Zengde Deng, Wenbing Huang, et al.
2021, Vol. 11(15), pp. 6975

Author(s): Tao Zhang, Lun He, Xudong Li, Guoqing Feng

Lipreading aims to recognize the sentences being spoken by a talking face. In recent years, lipreading methods have achieved high accuracy on large datasets and made breakthrough progress. However, lipreading is still far from solved: existing methods tend to have high error rates on in-the-wild data and suffer from vanishing training gradients and slow convergence. To overcome these problems, we propose an efficient end-to-end sentence-level lipreading model that uses an encoder based on a 3D convolutional network, ResNet50, and a Temporal Convolutional Network (TCN), with a CTC objective function as the decoder. More importantly, the proposed architecture incorporates the TCN as a feature learner to decode features. This partly eliminates the vanishing-gradient and performance shortcomings of RNNs (LSTM, GRU), yielding a notable performance improvement as well as faster convergence. Experiments show that training and convergence are 50% faster than the state-of-the-art method and that accuracy improves by 2.4% on the GRID dataset.
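A minimal sketch of the pipeline the abstract describes (a 3D convolutional front end, per-frame ResNet50 features, a TCN, and CTC logits), assuming PyTorch and torchvision; the channel counts, kernel sizes, dilation schedule, and vocabulary size are assumptions, and the two dilated Conv1d layers merely stand in for the paper's full TCN stack.

```python
import torch
import torch.nn as nn
import torchvision

class LipreadingNet(nn.Module):
    """3D-conv front end -> per-frame ResNet50 features -> TCN -> CTC logits."""
    def __init__(self, vocab_size=28, hidden=256):
        super().__init__()
        # Spatiotemporal front end over (B, 3, T, H, W) video; preserves T.
        self.front3d = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.BatchNorm3d(64), nn.ReLU(),
            nn.MaxPool3d((1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
        )
        resnet = torchvision.models.resnet50(weights=None)
        resnet.conv1 = nn.Conv2d(64, 64, kernel_size=7, stride=2, padding=3, bias=False)
        resnet.fc = nn.Identity()  # keep the 2048-d per-frame embedding
        self.resnet = resnet
        # Dilated temporal convolutions standing in for the paper's TCN.
        self.tcn = nn.Sequential(
            nn.Conv1d(2048, hidden, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 3, padding=2, dilation=2), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, vocab_size + 1)  # +1 for the CTC blank

    def forward(self, video):                  # video: (B, 3, T, H, W)
        x = self.front3d(video)                # (B, 64, T, H', W')
        b, c, t, h, w = x.shape
        x = x.transpose(1, 2).reshape(b * t, c, h, w)
        x = self.resnet(x).reshape(b, t, -1)   # (B, T, 2048)
        x = self.tcn(x.transpose(1, 2))        # (B, hidden, T)
        return self.head(x.transpose(1, 2))    # (B, T, vocab+1)
```

For training, one would apply log_softmax to the output, transpose it to (T, B, vocab+1), and feed it to torch.nn.CTCLoss with blank=vocab_size, matching the extra logit reserved above.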

