Investigating the Effect of Sensory Concurrency on Learning Haptic Spatiotemporal Signals

Author(s):  
Iain Carson ◽  
Aaron Quigley ◽  
Loraine Clarke ◽  
Uta Hinrichs

A new generation of multimodal interfaces and interactions is emerging. Drawing on the principles of Sensory Substitution and Augmentation Devices (SSADs), these new interfaces offer the potential for rich, immersive human-computer interactions, but are difficult to design well and take time to master, creating significant barriers to wider adoption. Following a review of the literature surrounding existing SSADs, their metrics for success, and their growing influence on interface design in Human-Computer Interaction, we present a medium-term (4-day) study comparing the effectiveness of various combinations of visual and haptic feedback (sensory concurrencies) in preparing users to perform a virtual maze navigation task using haptic feedback alone. Participants navigated 12 mazes in each of 3 separate sessions under a specific combination of visual and haptic feedback, before performing the same task using haptic feedback alone. Visual sensory deprivation was shown to be inferior to visual and haptic concurrency in enabling haptic signal comprehension, while a new hybridized condition combining reduced visual feedback with the haptic signal was shown to be superior. Potential explanations for the effectiveness of the hybrid mechanism are explored, and the scope and implications of its generalization to new sensory interfaces are presented.

2021 ◽  
Author(s):  
Nordine Sebkhi ◽  
Md Nazmus Sahadat ◽  
Erica Walling ◽  
Michelle Hoefnagel ◽  
Chris Fulcher ◽ 
...  

The multimodal Tongue Drive System (mTDS) is an assistive technology for people with tetraplegia that provides an alternative method of interacting with a computer by combining tongue control, head gesture, and speech. This multimodality is designed to facilitate the completion of complex computer tasks (e.g., drag-and-drop) that cannot be easily performed by existing uni-modal assistive technologies. Previous studies with able-bodied participants showed promising performance of the mTDS on complex tasks when compared to other input methods such as keyboard and mouse. In this three-session pilot study, the primary objective is to show the feasibility of using the mTDS to facilitate human-computer interactions by asking fourteen participants with tetraplegia to complete five computer access tasks with increasing levels of complexity: maze navigation, center-out tapping, playing bubble shooter and peg solitaire, and sending an email. Speed and accuracy are quantified by key metrics that generally increase from the first to the third session, indicating a potential learning phase that could result in improved performance over time.


2006 ◽  
Vol 5 (2) ◽  
pp. 37-44 ◽  
Author(s):  
Paul Richard ◽  
Damien Chamaret ◽  
François-Xavier Inglese ◽  
Philippe Lucidarme ◽  
Jean-Louis Ferrier

This paper presents a human-scale virtual environment (VE) with haptic feedback, along with two experiments performed in the context of product design. The user interacts with a virtual mock-up using a large-scale bimanual string-based haptic interface called SPIDAR (Space Interface Device for Artificial Reality). An original self-calibration method is proposed. A vibro-tactile glove was developed and integrated with the SPIDAR to provide tactile cues to the operator. The purpose of the first experiment was: (1) to examine the effect of tactile feedback in a task involving reach-and-touch of different parts of a digital mock-up, and (2) to investigate the use of sensory substitution in such tasks. The second experiment aimed to investigate the effect of visual and auditory feedback in a car-light maintenance task. Results of the first experiment indicate that users could easily and quickly access and finely touch the different parts of the digital mock-up when sensory feedback (whether visual, auditory, or tactile) was present. Results of the second experiment show that visual and auditory feedback improve average placement accuracy by about 54% and 60%, respectively, compared to the open-loop case.


Author(s):  
So Young Kim ◽  
Neeraja Subrahmaniyan ◽  
James D. Brooks

The use of remote-control locomotives has become prevalent in most major rail yards in North America. Despite their increased use, they are limited by the functionality and current design of the operator control unit. Human factors research has identified interface design issues with the controller, emphasizing the need to rethink a new generation of remote-control units that can accommodate the growing needs of operational functionality through effective interface design. Towards that goal, we present the preliminary findings of an exploratory study comparing the functional effectiveness and usability of two types of remote-control modalities – a traditional gaming controller and a multi-touch tablet – for driving a locomotive. Initial findings indicate that the game controller modality is preferred over multi-touch, with low variation among participants. However, the preferred control mode (i.e., vehicle power or speed command input) differed between the two modalities. These findings are the first of their kind in identifying design considerations for future remote locomotive operation and in comparing the use of traditional gaming and multi-touch controllers.


1995 ◽  
Vol 5 (2) ◽  
pp. 67-73 ◽  
Author(s):  
Frances F. Jacobson

Examines the characteristics of bibliographic information retrieval systems, particularly online public access systems, in terms of the difficulties children have in using them. The specialized focus of library and information science, the highly abstract nature of bibliographic representation, and the evolving cognitive development of children are all contributing factors to these difficulties. Describes recent research and development in interface design, followed by implications for the design of Internet navigators. The new generation of Internet browsers can give students the ability not only to search for information, but also to create and disseminate information using the same medium. Such capacity adds a significant dimension and new meaning to the concept of information retrieval. Concludes that thoughtful and developmentally appropriate interface design is critical to the success of children's use of this powerful new resource.


2004 ◽  
Vol 13 (1) ◽  
pp. 16-21 ◽  
Author(s):  
Bernd Petzold ◽  
Michael F. Zaeh ◽  
Berthold Faerber ◽  
Barbara Deml ◽  
Hans Egermeier ◽  
...  

Telepresent tasks involve removal of the human operator from an immediate working area and relocation to a remote environment that offers the operator all necessary control features. In this remote location, the operator must be provided with adequate feedback information such that the task at hand can be effectively executed. This research explores the effectiveness of various feedback methods. More specifically, graphical feedback in the form of video streamed images is compared against rendered 3D models, the overall effectiveness of haptic feedback is analyzed, and the influences of sensory augmentation and sensory substitution are examined. This study involved 48 participants, each of whom executed a simple clockwork assembly task under various feedback mechanisms. The results support the use of 3D models as opposed to live video streams for graphical presentation, utilization of haptic feedback (which was found to significantly enhance operation effectiveness), and the use of sensory augmentation and substitution under specific circumstances.


2007 ◽  
Vol 16 (5) ◽  
pp. 459-470 ◽  
Author(s):  
Hermann Mayer ◽  
Istvan Nagy ◽  
Alois Knoll ◽  
Eva U Braun ◽  
Robert Bauernschmitt ◽  
...  

The implementation of telemanipulator systems for cardiac surgery enabled heart surgeons to perform delicate minimally invasive procedures with high precision under stereoscopic view. At present, commercially available systems do not provide force-feedback or Cartesian control for the operating surgeon. The lack of haptic feedback may cause damage to tissue and breaks of suture material. In addition, minimally invasive procedures are very tiring for the surgeon because of the need to visually compensate for the missing force feedback. While a lack of Cartesian control of the end effectors is acceptable for surgeons (because every movement is visually supervised), it prevents research on partial automation. To improve this situation, we have built an experimental telemanipulator for endoscopic surgery that provides both force-feedback (to improve the feeling of immersion) and Cartesian control as a prerequisite for automation. In this article, we focus on the inclusion of force feedback and its evaluation. We completed our first bimanual system in early 2003 (EndoPAR, Endoscopic Partial Autonomous Robot). Each robot arm consists of a standard robot and a surgical instrument, hence providing eight DOF that enable free manipulation via trocar kinematics. Based on the experience with this system, we introduced an improved version in early 2005. The new ARAMIS system (Autonomous Robot Assisted Minimally Invasive Surgery) has four multi-purpose robotic arms mounted on a gantry above the working space. Again, the arms are controlled by two force-feedback devices, and 3D vision is provided. In addition, all surgical instruments have been equipped with strain-gauge force sensors that can measure forces along all translational directions of the instrument's shaft. Force-feedback of this system was evaluated in a scenario of robotic heart surgery, which offers an impression very similar to standard open procedures, with high immersion. It enables the surgeon to palpate arteriosclerosis, to tie surgical knots with real suture material, and to feel the rupture of suture material. Therefore, the hypothesis that haptic feedback in the form of sensory substitution facilitates performance of surgical tasks was evaluated on the experimental platform described in the article (the EndoPAR version). In addition, a further hypothesis was explored: the high fatigue of surgeons during and after robotic operations may be caused by visual compensation for the lack of force-feedback (Thompson, J., Ottensmeier, M., & Sheridan, T. (1999). Human factors in telesurgery. Telemedicine Journal, 5(2), 129–137).


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1619 ◽ 
Author(s):  
Otilia Zvorișteanu ◽  
Simona Caraiman ◽  
Robert-Gabriel Lupu ◽  
Nicolae Alexandru Botezatu ◽  
Adrian Burlacu

For most visually impaired people, simple tasks such as understanding the environment or moving safely around it represent huge challenges. The Sound of Vision system was designed as a sensory substitution device, based on computer vision techniques, that encodes any environment in a naturalistic representation through audio and haptic feedback. This paper presents a study on the usability of the system for visually impaired people in relevant environments. The aim of the study is to assess how well the system supports the perception and mobility of visually impaired participants in real-life environments and circumstances. The testing scenarios were devised to allow assessment of the added value of the Sound of Vision system compared to traditional assistive instruments, such as the white cane. Various data were collected during the tests to allow for a better evaluation of performance: system configuration, completion times, electro-dermal activity, video footage, and user feedback. With minimal training, the system could be successfully used in outdoor environments to perform various perception and mobility tasks. Participants and the evaluation results confirmed the benefits of the Sound of Vision device over the white cane: early feedback about static and dynamic objects, and feedback about elevated objects, walls, negative obstacles (e.g., holes in the ground), and signs.


