Creating multi-touch haptic feedback on an electrostatic tactile display

Author(s):  
Gholamreza Ilkhani ◽  
Evren Samur


Sensors ◽
2020 ◽  
Vol 20 (17) ◽  
pp. 4780
Author(s):  
Oliver Ozioko ◽  
William Navaraj ◽  
Marion Hersh ◽  
Ravinder Dahiya

This paper presents a dual-function wearable device (Tacsac) with capacitive tactile sensing and integrated tactile feedback capability to enable communication among deafblind people. Tacsac has a skin contactor which enhances localized vibrotactile stimulation of the skin as a means of feedback to the user. It comprises two main modules, the touch-sensing module and the vibrotactile module, stacked and integrated as a single device. The vibrotactile module is an electromagnetic actuator that employs a flexible coil and a permanent magnet assembled in soft poly(dimethylsiloxane) (PDMS), while the touch-sensing module is a planar capacitive metal-insulator-metal (MIM) structure. The flexible coil was fabricated on a 50 µm polyimide (PI) sheet using the Lithographie, Galvanoformung, Abformung (LIGA) micromoulding technique. The Tacsac device has been tested for independent sensing and actuation as well as for a dual sensing-actuation mode. The measured vibration profiles of the actuator showed a synchronous response to external stimuli over a wide range of frequencies (10 Hz to 200 Hz) within the perceivable tactile frequency thresholds of the human hand. The resonance vibration frequency of the actuator is in the range of 60–70 Hz, with an observed maximum off-plane displacement of 0.377 mm at a coil current of 180 mA. The capacitive touch-sensing layer responded to touch with minimal noise both when the actuator vibration was ON and when it was OFF. A mobile application was also developed to demonstrate the use of Tacsac for communication between a deafblind person wearing the device and a mobile phone user who is not deafblind. This advances existing tactile displays by providing efficient two-way communication through a single device for both localized haptic feedback and touch sensing.
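As a rough illustration of the actuation principle described above (a current-carrying flexible coil in the field of a permanent magnet), the following Python sketch estimates the Lorentz force on the coil and generates a sinusoidal drive current within the 10–200 Hz band. The flux density, turn count, and conductor length are assumed values for illustration; only the 180 mA drive current and the frequency range come from the abstract.

```python
import numpy as np

# Illustrative parameters (assumed; only the 180 mA drive current and
# the 10-200 Hz band come from the abstract above).
B_FIELD_T = 0.2          # assumed flux density of the permanent magnet at the coil
COIL_TURNS = 30          # assumed number of turns in the flexible planar coil
TURN_LENGTH_M = 0.015    # assumed mean conductor length per turn

def lorentz_force(current_a: float) -> float:
    """Rough F = B * I * L estimate of the off-plane force on the coil."""
    return B_FIELD_T * current_a * COIL_TURNS * TURN_LENGTH_M

def drive_signal(freq_hz: float, amp_a: float = 0.18, duration_s: float = 0.1,
                 fs: int = 10_000) -> np.ndarray:
    """Sinusoidal coil current for vibrotactile stimulation at freq_hz."""
    if not 10 <= freq_hz <= 200:
        raise ValueError("stay within the perceivable 10-200 Hz band")
    t = np.arange(0, duration_s, 1 / fs)
    return amp_a * np.sin(2 * np.pi * freq_hz * t)

print(f"Peak force at 180 mA: {lorentz_force(0.18) * 1e3:.2f} mN")
print(f"Samples in a 65 Hz burst: {drive_signal(65).size}")
```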


2012 ◽  
Vol 21 (4) ◽  
pp. 435-451 ◽  
Author(s):  
Laura Santos-Carreras ◽  
Kaspar Leuenberger ◽  
Evren Samur ◽  
Roger Gassert ◽  
Hannes Bleuler

Robotic surgery provides many benefits such as reduced invasiveness and increased dexterity, but at the cost of no direct contact between surgeon and patient. This physical separation prevents surgeons from performing direct haptic exploration of tissues and organs, imposing exclusive reliance on visual cues. Current technology is not yet able to both measure and reproduce a realistic and complete sense of touch (interaction force, temperature, roughness, etc.). In this paper, we put forward a multimodal feedback concept that integrates different kinds of visual and tactile cues with force feedback, which can potentially improve both the surgeon's performance and the patient's safety. We present a cost-effective tactile display simulating a pulsating artery that has been integrated into a haptic workstation to combine tactile and force-feedback information. Furthermore, we investigate the effect of different feedback types, including tactile and/or visual cues, on the performance of subjects carrying out two typical palpation tasks: (1) exploring a tissue to find a hidden artery and (2) identifying the orientation of a hidden artery. The results show that adding tactile feedback significantly reduces task completion time. Moreover, for high difficulty levels, subjects perform better with the feedback condition combining tactile and visual cues. Indeed, the majority of subjects preferred this combined feedback because the redundancy reassured them in their actions. Based on this work, we infer that multimodal haptic feedback improves subjects' performance and confidence during exploratory procedures.
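To make the rendering idea concrete, here is a minimal sketch of a normalized pulsating-artery pressure profile that a tactile display could play back. The raised-cosine pulse shape, the 72 bpm heart rate, and the 30% duty cycle are assumptions for illustration, not parameters reported by the authors.

```python
import numpy as np

def artery_pulse(duration_s: float = 2.0, heart_rate_bpm: float = 72.0,
                 fs: int = 1_000) -> np.ndarray:
    """Simplified pulsating-artery pressure profile (normalized 0..1).

    One raised-cosine pulse per heartbeat; the pulse shape and the 30%
    duty cycle are assumptions for illustration only.
    """
    t = np.arange(0, duration_s, 1 / fs)
    period = 60.0 / heart_rate_bpm
    phase = (t % period) / period          # 0..1 within each beat
    pulse = np.where(phase < 0.3,
                     0.5 * (1 - np.cos(2 * np.pi * phase / 0.3)),
                     0.0)
    return pulse

signal = artery_pulse()
print(f"{signal.size} samples, peak amplitude {signal.max():.2f}")
```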


2008 ◽  
Vol 2008 ◽  
pp. 1-11 ◽  
Author(s):  
Ki-Uk Kyung ◽  
Jun-Young Lee ◽  
Junseok Park

This paper presents a haptic stylus interface with a built-in compact tactile display module and an impact module, together with empirical studies on Braille, button, and texture display. We describe preliminary evaluations verifying the tactile display's performance, indicating that it can satisfactorily represent Braille numbers for both sighted and blind users. To demonstrate the haptic feedback capability of the stylus, an experiment providing impact feedback mimicking the click of a button was conducted. Since the developed device is small enough to be attached to a force-feedback device, its applicability to a combined force and tactile feedback display in a pen-held haptic device is also investigated. The handle of a pen-held haptic interface was replaced by the pen-like interface to add tactile feedback capability to the device. Since the system provides a combination of force, tactile, and impact feedback, three haptic representation methods for texture display were compared on surfaces with three texture groups that differ in direction, groove width, and shape. In addition, we evaluate its capacity to support touch-screen operations by providing tactile sensations when a user rubs an image displayed on a monitor.
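As an illustration of the Braille display task mentioned above, the sketch below maps a digit string to 6-dot Braille cells and converts each cell into a 3x2 pin matrix. The dot patterns are standard Braille; the pin-matrix representation is a hypothetical stand-in for the stylus's tactile display driver, not the authors' implementation.

```python
# Minimal sketch: digits -> 6-dot Braille cells -> 3x2 pin matrices.
# Dot numbering: 1-3 left column (top to bottom), 4-6 right column.

BRAILLE_DIGITS = {
    "1": {1}, "2": {1, 2}, "3": {1, 4}, "4": {1, 4, 5}, "5": {1, 5},
    "6": {1, 2, 4}, "7": {1, 2, 4, 5}, "8": {1, 2, 5}, "9": {2, 4}, "0": {2, 4, 5},
}
NUMBER_SIGN = {3, 4, 5, 6}  # prefix cell marking the following cells as digits

def cell_to_matrix(dots: set[int]) -> list[list[int]]:
    """Convert a dot set into a 3x2 pin matrix (1 = raised pin)."""
    layout = [[1, 4], [2, 5], [3, 6]]
    return [[int(d in dots) for d in row] for row in layout]

def render_number(text: str) -> list[list[list[int]]]:
    """Pin matrices for a numeric string, prefixed with the number sign."""
    cells = [NUMBER_SIGN] + [BRAILLE_DIGITS[ch] for ch in text]
    return [cell_to_matrix(c) for c in cells]

for matrix in render_number("42"):
    print(matrix)
```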


2018 ◽  
Author(s):  
Hellen van Rees ◽
Angelika Mader ◽  
Merlijn Smits ◽  
Geke Ludden ◽  
...  

Author(s):  
E. Willuth ◽  
S. F. Hardon ◽  
F. Lang ◽  
C. M. Haney ◽  
E. A. Felinska ◽  
...  

Background: Robotic-assisted surgery (RAS) potentially reduces workload and shortens the surgical learning curve compared to conventional laparoscopy (CL). The present study aimed to compare robotic-assisted cholecystectomy (RAC) to laparoscopic cholecystectomy (LC) in the initial learning phase for novices.
Methods: In a randomized crossover study, medical students (n = 40) in their clinical years performed both LC and RAC on a cadaveric porcine model. After standardized instructions and basic skill training, group 1 started with RAC and then performed LC, while group 2 started with LC and then performed RAC. The primary endpoint was surgical performance measured with the Objective Structured Assessment of Technical Skills (OSATS) score; secondary endpoints included operating time, complications (liver damage, gallbladder perforations, vessel damage), force applied to tissue, and subjective workload assessment.
Results: Surgical performance was better for RAC than for LC for the total OSATS score (RAC = 77.4 ± 7.9 vs. LC = 73.8 ± 9.4; p = 0.025), the global OSATS score (RAC = 27.2 ± 1.0 vs. LC = 26.5 ± 1.6; p = 0.012), and the task-specific OSATS score (RAC = 50.5 ± 7.5 vs. LC = 47.1 ± 8.5; p = 0.037). There were fewer complications with RAC than with LC (10 (25.6%) vs. 26 (65.0%); p = 0.006) but no difference in operating times (RAC = 77.0 ± 15.3 vs. LC = 75.5 ± 15.3 min; p = 0.517). Force applied to tissue was similar. Students found RAC less physically demanding and less frustrating than LC.
Conclusions: Novices performed their first cholecystectomies with better performance and fewer complications with RAS than with CL, while operating time showed no difference. Students perceived less subjective workload for RAS than for CL. Contrary to our expectations, the lack of haptic feedback on the robotic system did not lead to higher force application during RAC than LC and did not increase tissue damage. These results show potential advantages of RAS over CL for surgical novices performing their first RAC and LC on an ex vivo cadaveric porcine model.
Registration number: researchregistry6029
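For readers who want to see how such a paired comparison can be reproduced, the sketch below runs a paired t-test on synthetic OSATS scores. The data are generated from the reported means and standard deviations purely for illustration; the paired t-test is an assumption suited to a crossover design and is not necessarily the test the authors used.

```python
# Synthetic illustration only; the real study data are not reproduced here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40                                   # number of participants in the study
osats_rac = rng.normal(77.4, 7.9, n)     # synthetic RAC total OSATS scores
osats_lc = rng.normal(73.8, 9.4, n)      # synthetic LC total OSATS scores (same subjects)

t_stat, p_value = stats.ttest_rel(osats_rac, osats_lc)  # paired comparison
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```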


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Maximilian Neidhardt ◽  
Nils Gessert ◽  
Tobias Gosau ◽  
Julia Kemmling ◽  
Susanne Feldhaus ◽  
...  

Minimally invasive robotic surgery offers benefits such as reduced physical trauma, faster recovery, and less pain for the patient. For these procedures, visual and haptic feedback to the surgeon is crucial when operating surgical tools with a robot without direct line of sight. External force sensors are biased by friction at the tool shaft and thereby cannot estimate forces between the tool tip and tissue. As an alternative, vision-based force estimation has been proposed, in which interaction forces are learned directly from deformation observed by an external imaging system. Recently, an approach based on optical coherence tomography and deep learning has shown promising results. However, most experiments are performed on ex vivo tissue. In this work, we demonstrate that models trained on dead tissue do not perform well on in vivo data. We performed multiple experiments on a human tumor xenograft mouse model, both on in vivo, perfused tissue and on dead tissue. We compared two deep learning models in different training scenarios. Training on perfused, in vivo data improved model performance by 24% for in vivo force estimation.
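As a generic illustration of vision-based force estimation, the sketch below defines a small convolutional network that regresses a scalar force from a deformation image. The architecture, input size, and single-channel input are assumptions for illustration only and do not reproduce the OCT-based models compared in the paper.

```python
import torch
import torch.nn as nn

class ForceRegressor(nn.Module):
    """Toy CNN mapping a deformation image to a scalar force estimate."""
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pooling to a 32-d descriptor
        )
        self.head = nn.Linear(32, 1)        # predicted force (e.g., in newtons)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = ForceRegressor()
dummy = torch.randn(4, 1, 64, 64)           # batch of synthetic deformation images
print(model(dummy).shape)                    # -> torch.Size([4, 1])
```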

