Tri-Modal Tactile Display and Its Application Into Tactile Perception of Visualized Surfaces

2020 ◽  
Vol 13 (4) ◽  
pp. 733-744 ◽  
Author(s):  
Guohong Liu ◽  
Chen Zhang ◽  
Xiaoying Sun


Author(s):  
Kylie Gomes ◽  
Scott Betza ◽  
Sara Lu Riggs

Objective: To evaluate the effects of movement, cue complexity, and the on-body location of tactile displays on tactile change detection.

Background: Tactile displays have been demonstrated as a means to address data overload by offloading the visual and auditory modalities. However, change blindness (the failure to detect changes in a stimulus when the changes coincide with another event or a disruption in stimulus continuity) has been shown to affect the tactile modality and may be exacerbated during movement. The complexity of tactile cues and the location of tactile displays on the body may also affect the detection of changes in tactile patterns. These limitations to tactile perception need to be examined.

Method: Twenty-four participants performed a tactile change detection task while sitting, standing, and walking. Tactile cues of low, medium, and high complexity were presented to the arm or back.

Results: Movement adversely affected tactile change detection: hit rates were highest while sitting, followed by standing, then walking. Cue complexity also affected detection: low complexity cues yielded higher detection rates than medium and high complexity cues. The arms exhibited better change detection performance than the back.

Conclusion: The design of tactile displays should account for the effects of movement. Cue complexity should be minimized, and the location of a tactile display should be chosen with body movement in mind to support tactile perception.

Application: The findings provide design guidelines for tactile displays in data-rich, complex domains.
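The hit-rate comparison across the sitting, standing, and walking conditions reduces to a per-condition proportion of detected changes; a minimal sketch in Python (the trial data below are illustrative, not the study's):

```python
from collections import defaultdict

def hit_rates(trials):
    """Compute hit rate (detected changes / presented changes) per condition.

    trials: iterable of (condition, detected) pairs, where detected is a bool.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for condition, detected in trials:
        totals[condition] += 1
        hits[condition] += int(detected)
    return {c: hits[c] / totals[c] for c in totals}

# Illustrative trials only -- not the study's data.
trials = [("sitting", True), ("sitting", True), ("sitting", False),
          ("standing", True), ("standing", False),
          ("walking", False), ("walking", True), ("walking", False)]
rates = hit_rates(trials)
```

The same tally could be split further by cue complexity and display location to reproduce the full condition matrix.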


2005 ◽  
Vol 38 (1) ◽  
pp. 260-265
Author(s):  
Myoung-Jong Yoon ◽  
Kee-Ho Yu ◽  
Tae-Kyu Kwon ◽  
Nam-Gyun Kim

Author(s):  
Zoltán Szabó ◽  
Eniko T. Enikov

With the emergence of augmented- and virtual-reality-based information delivery technologies, the gap between the communication devices available to visually impaired people and those available to sighted people is widening. The current study describes a communication tool that provides a reading platform for visually impaired people by means of a haptic display. This paper presents the development and human-subject evaluation of a virtual tactile display based on an array of electromagnetic microactuators. The array comprises 4 × 5 micro voice-coil actuators (tactors) providing vibrotactile stimulation to the user's fingertip. The size and performance of the actuators are evaluated against the thresholds of human tactile perception, and a generic tactor of 2.65 mm (diameter) × 4 mm (height) is shown to be suitable for practical applications in dynamic tactile displays. The maximum force of the actuator was 30 mN, generated at a current of 200 mA. At a stroke of 4.5 mm, the force dropped to 10 mN; the peak force occurred at a displacement of 1.5 mm. A total of 10 alphanumeric symbols were displayed to users by dynamically changing the location of the vibrating point in a predefined sequence, creating the tactile perception of a continuous curve. Users were asked to sketch the perceived symbols. Each subject carried out three experiments. The first exposed all subjects to ten different characters. Data from these tests suggest that users perceive most shapes accurately; however, jump discontinuities in the presentation of the curves lower recognition efficiency, most likely due to the loss of a solid reference point. Characters containing two or more discontinuous lines, such as 'X', were more difficult to recognize than those described by a single line, such as 'P' or 'Z'.
Analysis of the average character recognition rate from 10 volunteers showed that any presented character was identified correctly in 7 out of 10 tests. The second test reused characters from the first experiment; users improved their recognition performance through repeated exposure and learning. A final set of experiments showed that recognition of groups of characters forming words is the least efficient and requires further refinement. Recommendations for improving the recognition rate are also included.
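The dynamic rendering described above (moving a single vibrating point through the 4 × 5 tactor grid so the user perceives a continuous curve) can be sketched as a sequence of one-hot activation frames. The grid coordinates and the single-stroke path for 'Z' below are illustrative assumptions, not the paper's actual drive sequence:

```python
def render_symbol(path, rows=4, cols=5):
    """Yield one activation frame per step: exactly one tactor vibrates,
    and the vibrating point moves along the predefined path so the user
    perceives a continuous curve rather than isolated taps."""
    for r, c in path:
        if not (0 <= r < rows and 0 <= c < cols):
            raise ValueError(f"tactor ({r}, {c}) outside {rows}x{cols} array")
        frame = [[0] * cols for _ in range(rows)]
        frame[r][c] = 1  # only this tactor is driven in this frame
        yield frame

# An illustrative single-stroke 'Z' on the 4x5 array: top bar, diagonal,
# bottom bar -- no jump discontinuities, which the study found easier
# to recognize than multi-stroke characters like 'X'.
z_path = [(0, 0), (0, 2), (0, 4), (1, 3), (2, 1), (3, 0), (3, 2), (3, 4)]
frames = list(render_symbol(z_path))
```

A multi-stroke character would need a gap in the path (a jump of the vibrating point), which is exactly the discontinuity the abstract identifies as hurting recognition.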


2015 ◽  
Vol 2015.7 (0) ◽  
pp. _30am2-PN--_30am2-PN-
Author(s):  
Toshiyuki Wada ◽  
Kenjiro Takemura ◽  
Takashi Maeno ◽  

2020 ◽  
Author(s):  
Xiaoying Sun ◽  
Chen Zhang ◽  
Guohong Liu

Abstract: At present, the tactile perception of 3D geometric bumps (such as sinusoidal, Gaussian, and triangular bumps) on touchscreens is mainly realized by mapping the local gradients of rendered virtual surfaces to lateral electrostatic friction while keeping the normal feedback force constant. A recent study has shown that the recognition rate of 3D visual objects rendered with electrovibration is 27% lower than with force-feedback devices. Based on a custom-designed tactile display coupling electrovibration and mechanical vibration stimuli, this paper proposes a novel tactile rendering algorithm for 3D geometric bumps that simultaneously generates the lateral and normal perceptual dimensions. Specifically, a mapping is first established in which the electrostatic friction is proportional to the gradient of the 3D geometric bump. Then, based on the angle between the lateral friction force and the normal feedback force, a rendering model of the normal feedback force using mechanical vibration is determined. Objective evaluations with 12 participants showed that, compared with previous electrovibration-only approaches, the proposed algorithm significantly improves recognition rates of 3D bumps on touchscreens.
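The first stage of the algorithm (gradient-proportional electrostatic friction plus a slope-angle-derived normal channel) can be sketched for a 1-D sinusoidal bump. The gains and the specific normal-channel formula (`k_n * |sin(theta)|`) are illustrative assumptions, not the paper's model:

```python
import math

def sinusoidal_bump(amplitude, wavelength):
    """Height profile h(x) of a sinusoidal bump and its analytic gradient."""
    h = lambda x: amplitude * math.sin(2 * math.pi * x / wavelength)
    dh = lambda x: (2 * math.pi * amplitude / wavelength) * \
        math.cos(2 * math.pi * x / wavelength)
    return h, dh

def render_sample(dh, x, k_f=1.0, k_n=1.0):
    """One rendering sample at finger position x.

    Electrostatic friction is set proportional to the local gradient (the
    mapping the paper starts from); the normal-vibration command is then
    derived from the slope angle between the lateral and normal force
    components. k_f, k_n and the sin(theta) law are hypothetical gains.
    """
    g = dh(x)                       # local gradient of the virtual bump
    f_lateral = k_f * g             # electrovibration friction command
    theta = math.atan(g)            # slope angle of the virtual surface
    a_normal = k_n * abs(math.sin(theta))  # mechanical-vibration amplitude
    return f_lateral, a_normal

h, dh = sinusoidal_bump(1.0, 4.0)
f0, a0 = render_sample(dh, 0.0)   # steepest point: strong friction cue
f1, a1 = render_sample(dh, 1.0)   # bump crest: gradient ~ 0, both cues vanish
```

At the crest of the bump both channels go to zero, matching the intuition that a flat local surface produces neither lateral drag nor normal excitation.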


2007 ◽  
Author(s):  
Tony Ro ◽  
Johanan Hsu ◽  
Nafi Yasar ◽  
L. Caitlin Ellmore ◽  
Michael Beauchamp
Author(s):  
Sandra Regina Marchi ◽  
Maria Lucia Okimoto ◽  
Alessandro Marques ◽  
Ramón Sigifredo Cortés Paredes ◽  
Rafael Lima Vieira

2020 ◽  
Vol 11 ◽  
Author(s):  
Chao Huang ◽  
Qizhuo Wang ◽  
Mingfu Zhao ◽  
Chunyan Chen ◽  
Sinuo Pan ◽  
...  

Minimally invasive surgery (MIS) has become the preferred surgical approach owing to its advantages over conventional open surgery. A major limitation is the lack of tactile perception, which impairs surgeons' ability to distinguish tissue and perform maneuvers. Many studies have reported industrial robots that perceive various kinds of tactile information, yet only force data are widely used to restore part of the surgeon's sense of touch in MIS. In recent years, inspired by image classification technologies in computer vision, tactile data have been represented as images, where each tactile element is treated as an image pixel. Processing raw data, or features extracted from tactile images, with artificial intelligence (AI) methods, including clustering, support vector machines (SVMs), and deep learning, has proven effective in industrial robotic tactile perception tasks. This holds great promise for utilizing more tactile information in MIS. This review aims to identify potential tactile perception methods for MIS by surveying the literature on tactile sensing in MIS and on industrial robotic tactile perception technologies, especially AI methods applied to tactile images.
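The tactile-image idea above (one tactile element per pixel, then a standard classifier) can be sketched in a few lines. A nearest-centroid classifier stands in here for the SVM/deep models the review discusses, and the 2 × 2 sensor, class labels, and readings are all illustrative:

```python
def flatten(frame):
    """Treat an m x n tactile frame as an image: one taxel = one pixel."""
    return [v for row in frame for v in row]

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labelled_frames):
    """labelled_frames: dict label -> list of frames. Returns label -> centroid."""
    return {lab: centroid([flatten(f) for f in frames])
            for lab, frames in labelled_frames.items()}

def classify(model, frame):
    """Assign the label whose class centroid is nearest in pixel space."""
    x = flatten(frame)
    return min(model, key=lambda lab: sum((a - b) ** 2
                                          for a, b in zip(x, model[lab])))

# Illustrative 2x2 tactile frames: "soft" contacts press lightly, "hard" firmly.
soft = [[[0.1, 0.2], [0.1, 0.1]], [[0.2, 0.1], [0.1, 0.2]]]
hard = [[[0.9, 0.8], [0.9, 0.9]], [[0.8, 0.9], [0.9, 0.8]]]
model = train({"soft": soft, "hard": hard})
```

Swapping the centroid rule for an SVM or a small convolutional network, as the surveyed work does, changes only the classifier while keeping the same image-style representation of the sensor array.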

