Goldeye: Enhanced Spatial Awareness for the Visually Impaired using Mixed Reality and Vibrotactile Feedback

2021 ◽  
Author(s):  
Jun Yao Francis Lee ◽  
Narayanan Rajeev ◽  
Anand Bhojan


1988 ◽  
Vol 82 (5) ◽  
pp. 188-192 ◽  
Author(s):  
D.L. Chin

The purpose of the study presented here was to investigate the effects of instruction in dance movement on the spatial awareness of visually impaired elementary students. Sixteen visually impaired students were randomly assigned to two groups. Eight students participated in physical education and received no dance instruction. Eight students received dance instruction in addition to physical education. The Hill Performance Test of Selected Positional Concepts was administered before and after the treatment period. An analysis of variance revealed significant main effects. A Scheffé analysis revealed a significant difference between the pretest and posttest scores of the group that received dance instruction.


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6275
Author(s):  
Santiago Real ◽  
Alvaro Araujo

Herein, we describe the Virtually Enhanced Senses (VES) system, a novel and highly configurable wireless sensor-actuator network conceived as a development and test-bench platform for navigation systems adapted for blind and visually impaired people. It allows users to be immersed in “walkable,” purely virtual or mixed environments with simulated sensors, and allows navigation system designs to be validated prior to prototype development. The haptic, acoustic, and proprioceptive feedback supports state-of-the-art sensory substitution devices (SSDs). In this regard, three SSDs were integrated into VES as examples, including the well-known “The vOICe”. Additionally, the data throughput, latency, and packet loss of the wireless communication can be controlled to observe their impact on the spatial knowledge provided and on the resulting mobility and orientation performance. Finally, the system was validated by testing a combination of two previous visual-acoustic and visual-haptic sensory substitution schemes with 23 normally sighted subjects. The recorded data include the output of a “gaze-tracking” utility adapted for SSDs.


2014 ◽  
Vol 8 (2) ◽  
pp. 77-94 ◽  
Author(s):  
Juan D. Gomez ◽  
Guido Bologna ◽  
Thierry Pun

Purpose – The purpose of this paper is to overcome the limitations of sensory substitution devices (SSDs) in representing the high-level or conceptual information involved in vision, limitations that arise mainly from the biological sensory mismatch between sight and the substituting senses, and thus to provide the visually impaired with a more practical and functional SSD. Design/methodology/approach – Unlike other approaches, this SSD extends beyond a sensing prototype by integrating computer vision methods to produce reliable knowledge about the physical world (at the lowest cost to the user). Importantly, though, the authors do not abandon the typical encoding of low-level features into sound. The paper simply argues that any visual perception that can be achieved through hearing needs to be reinforced or enhanced by techniques that lie beyond mere visual-to-audio mapping (e.g. computer vision, image processing). Findings – Experiments reported in this paper reveal that See ColOr is learnable and functional, and provides easy interaction. In moderate time, participants were able to grasp visual information about the world, from which they could derive spatial awareness, the ability to find someone, the location of daily objects, and the skill to walk safely while avoiding obstacles. The encouraging results open a door toward autonomous mobility for the blind. Originality/value – The “extended” approach is used to introduce and justify the novelty of the system, as well as the experimental studies presented on computer-vision extensions of SSDs. Also, this is the first paper reporting on a completed, integrated, and functional system.


2007 ◽  
Vol 13 (1) ◽  
pp. 51-58 ◽  
Author(s):  
Dimitrios Tzovaras ◽  
Konstantinos Moustakas ◽  
Georgios Nikolakis ◽  
Michael G. Strintzis

2015 ◽  
Vol 9 (2) ◽  
pp. 71-85
Author(s):  
Catherine Todd ◽  
Swati Mallya ◽  
Sara Majeed ◽  
Jude Rojas ◽  
Katy Naylor

Purpose – VirtuNav is a haptic- and audio-enabled virtual reality simulator that enables persons with visual impairment to explore a 3D computer model of a real-life indoor location, such as a room or building. The purpose of this paper is to aid in pre-planning and spatial awareness, so that a user can become more familiar with the environment prior to experiencing it in reality. Design/methodology/approach – The system offers two unique interfaces: a free-roam interface where the user can navigate, and an edit mode where the administrator can manage test users and maps and retrieve test data. Findings – System testing reveals that spatial awareness and memory mapping improve with user iterations within VirtuNav. Research limitations/implications – VirtuNav is a research tool for investigating the user familiarity developed through repeated exposure to the simulator, to determine the extent to which haptic and/or sound cues improve a visually impaired user’s ability to navigate a room or building with or without occlusion. Social implications – The application may prove useful for greater real-world engagement: building confidence in real-world experiences and enabling persons with sight impairment to more comfortably and readily explore and interact with environments formerly unfamiliar or inaccessible to them. Originality/value – VirtuNav is developed as a practical application offering several unique features, including map design, semi-automatic 3D map reconstruction, and object classification from 2D map data. Visual and haptic rendering of real-time 3D map navigation is provided, as well as automated administrative functions for shortest-path determination, actual-path comparison, and performance-indicator assessment: exploration time taken and collision data.


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2794
Author(s):  
Mohammadreza Mirzaei ◽  
Peter Kán ◽  
Hannes Kaufmann

Sound source localization is important for spatial awareness and immersive Virtual Reality (VR) experiences. Deaf and Hard-of-Hearing (DHH) persons have limitations in completing sound-related VR tasks efficiently because they perceive audio information differently. This paper presents and evaluates a special haptic VR suit that helps DHH persons efficiently complete sound-related VR tasks. Our proposed VR suit receives sound information from the VR environment wirelessly and indicates the direction of the sound source to the DHH user by using vibrotactile feedback. Our study suggests that using different setups of the VR suit can significantly improve VR task completion times compared to not using a VR suit. Additionally, the results of mounting haptic devices on different positions of users’ bodies indicate that DHH users can complete a VR task significantly faster when two vibro-motors are mounted on their arms and ears compared to their thighs. Our quantitative and qualitative analysis demonstrates that DHH persons prefer using the system without the VR suit and prefer mounting vibro-motors in their ears. In an additional study, we did not find a significant difference in task completion time when using four vibro-motors with the VR suit compared to using only two vibro-motors in users’ ears without the VR suit.


2020 ◽  
Vol 10 (2) ◽  
pp. 523
Author(s):  
Santiago Real ◽  
Alvaro Araujo

In this paper, the Virtually Enhanced Senses (VES) system is described. It is an ARCore-based mixed-reality system meant to assist the navigation of blind and visually impaired people. VES operates in indoor and outdoor environments without any previous in-situ installation. It provides users with specific, runtime-configurable stimuli according to their pose, i.e., position and orientation, and the information about the environment recorded in a virtual replica. It implements three output data modalities: wall-tracking assistance, an acoustic compass, and a novel sensory substitution algorithm, Geometry-based Virtual Acoustic Space (GbVAS). The multimodal output of this algorithm takes advantage of natural human perceptual encoding of spatial data. Preliminary experiments with GbVAS were conducted with sixteen subjects in three different scenarios, demonstrating basic orientation and mobility skills after six minutes of training.
