Improvement Method of Tactile Stimuli Information Presentation by Neural Networks

2008, Vol 128 (12), pp. 1861-1862
Author(s): Young-il Park, Tota Mizuno, Masafumi Uchida
Sensors, 2020, Vol 21 (1), pp. 113
Author(s): Ghazal Rouhafzay, Ana-Maria Cretu, Pierre Payeur

Transfer learning, i.e., leveraging a pre-trained network and fine-tuning it to perform new tasks, has been successfully applied in a variety of machine intelligence fields, including computer vision, natural language processing, and audio/speech recognition. Drawing inspiration from neuroscience research suggesting that visual and tactile stimuli activate similar neural networks in the human brain, in this work we explore the idea of transferring learning from vision to touch in the context of 3D object recognition. In particular, deep convolutional neural networks (CNNs) pre-trained on visual images are adapted and evaluated for the classification of tactile data sets. To do so, we ran experiments with five different pre-trained CNN architectures on five different datasets acquired with different tactile sensing technologies, including BathTip, GelSight, a force-sensing resistor (FSR) array, a high-resolution virtual FSR sensor, and the tactile sensors on the Barrett robotic hand. The results confirm the transferability of learning from vision to touch for interpreting 3D models. Owing to its higher resolution, tactile data from optical tactile sensors achieved higher classification rates based on visual features than data from technologies relying on pressure measurements. A further analysis of the weight updates in the convolutional layers is performed to measure the similarity between visual and tactile features for each tactile sensing technology. Comparing the weight updates across convolutional layers suggests that a CNN pre-trained on visual data can be efficiently reused to classify tactile data by updating only a few of its convolutional layers. Accordingly, we propose a hybrid architecture performing both visual and tactile 3D object recognition with a MobileNetV2 backbone. MobileNetV2 is chosen for its small size and hence its suitability for mobile devices, so that a single network can classify both visual and tactile data. The proposed architecture achieves an accuracy of 100% on visual data and 77.63% on tactile data.
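A minimal PyTorch/torchvision sketch of the kind of vision-to-touch transfer described above: an ImageNet-pre-trained MobileNetV2 has most of its convolutional layers frozen, its last few blocks unfrozen, and its classifier head replaced for tactile object classes. This is an illustrative reconstruction, not the authors' code; the class count, the number of unfrozen blocks, the data loader, and the hyperparameters are hypothetical placeholders, and the tactile data are assumed to be rendered as 3-channel image-like tensors.

import torch
import torch.nn as nn
from torchvision import models

NUM_TACTILE_CLASSES = 10  # hypothetical number of 3D object classes

# Load MobileNetV2 pre-trained on ImageNet (visual features).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)

# Freeze all convolutional weights learned from visual data ...
for param in model.parameters():
    param.requires_grad = False

# ... then unfreeze only the last few convolutional blocks, mirroring the
# observation that updating a few layers suffices to adapt to touch.
for param in model.features[-3:].parameters():
    param.requires_grad = True

# Replace the classifier head for the tactile object classes.
model.classifier[1] = nn.Linear(model.last_channel, NUM_TACTILE_CLASSES)

# Standard fine-tuning skeleton; tactile_loader is assumed to yield
# 3-channel tensors (e.g. tactile maps resized to 224x224) and labels.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune(tactile_loader, epochs=5):
    model.train()
    for _ in range(epochs):
        for images, labels in tactile_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()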


2002, Vol 13 (4), pp. 350-355
Author(s): Angelo Maravita, Charles Spence, Claire Sergent, Jon Driver

In mirror reflections, visual stimuli in near peripersonal space (e.g., an object in the hand) can project the retinal image of far, extrapersonal stimuli “beyond” the mirror. We studied the interaction of such visual reflections with tactile stimuli in a cross-modal congruency task. We found that visual distractors produce stronger interference on tactile judgments when placed close to the stimulated hand, but observed indirectly as distant mirror reflections, than when directly observed in equivalently distant far space, even when in contact with a dummy hand or someone else's hand in the far location. The stronger visual-tactile interference for the mirror condition implies that near stimuli seen as distant reflections in a mirror view of one's own hands can activate neural networks coding peripersonal space, because these visual stimuli are coded as having a true source near to the body.


1999, Vol 22 (8), pp. 723-728
Author(s): Artymiak, Bukowski, Feliks, Narberhaus, Zenner

1995, Vol 40 (11), pp. 1110-1110
Author(s): Stephen James Thomas

1974
Author(s): Robert E. Fenton, Richard D. Gilson, Ronald W. Ventola
