Reaching to sounds in virtual reality: A multisensory-motor approach to re-learn sound localisation

2020 ◽  
Author(s):  
Chiara Valzolgher ◽  
Grègoire Verdelet ◽  
Romeo Salemme ◽  
Luigi Lombardi ◽  
Valerie Gaveau ◽  
...  

ABSTRACT
When localising sounds in space, the brain relies on internal models that specify the correspondence between the auditory input reaching the ears and the initial head position, on the one hand, and coordinates in external space, on the other. These models can be updated throughout life, setting the basis for re-learning spatial hearing abilities in adulthood. This is particularly important for individuals who experience long-term auditory alterations (e.g., hearing loss, hearing aids, cochlear implants), as well as for individuals who have to adapt to novel auditory cues when listening in virtual auditory environments. Until now, several methodological constraints have limited our understanding of the mechanisms involved in spatial hearing re-learning. In particular, the potential role of active listening and head movements has remained largely overlooked. Here, we overcome these limitations by using a novel methodology, based on virtual reality and real-time kinematic tracking, to study the role of active multisensory-motor interactions with sounds in the updating of sound-space correspondences. Participants were immersed in a virtual reality scenario showing 17 speakers at ear level, and a free-field real sound could be generated from each visible speaker. Two separate groups of participants localised the sound source either by reaching to it or by naming it, under binaural or monaural listening. Participants were free to move their head during the task and received audio-visual feedback on their performance. Results showed that both groups compensated rapidly for the short-term auditory alteration caused by monaural listening, improving sound localisation performance across trials. Crucially, compared to naming, reaching to the sounds induced faster and larger sound localisation improvements. Furthermore, more accurate sound localisation was accompanied by progressively wider head movements, and these two measures were significantly correlated only in the Reaching group. In conclusion, reaching to sounds in an immersive visual VR context proved most effective for updating altered spatial hearing. Head movements played an important role in this fast updating, pointing to the importance of active listening when implementing training protocols for improving spatial hearing.

HIGHLIGHTS
- We studied spatial hearing re-learning using virtual reality and kinematic tracking
- Audio-visual feedback combined with active listening improved monaural sound localisation
- Reaching to sounds improved performance more than naming sounds
- Monaural listening triggered compensatory head-movement behaviour
- Head-movement behaviour correlated with re-learning only when reaching to sounds
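The reported link between head movements and localisation gains can be made concrete with a small analysis sketch. The Python snippet below is purely illustrative and is not the authors' analysis pipeline: the error and head-movement measures (absolute azimuth error, peak-to-peak yaw excursion), the variable names, and the example values are assumptions of ours, standing in for the per-trial data a kinematic tracker would log.

```python
import numpy as np
from scipy.stats import pearsonr

def localisation_error(target_az_deg, response_az_deg):
    """Absolute azimuth error per trial, wrapped to [0, 180] degrees."""
    diff = np.abs(np.asarray(response_az_deg, dtype=float)
                  - np.asarray(target_az_deg, dtype=float)) % 360.0
    return np.minimum(diff, 360.0 - diff)

def head_rotation_extent(yaw_traces_deg):
    """Head-movement amplitude per trial: peak-to-peak yaw excursion during the trial."""
    return np.array([np.ptp(trace) for trace in yaw_traces_deg])

# Hypothetical per-trial data (degrees); real values would come from tracker logs.
rng = np.random.default_rng(1)
targets = np.array([30, -45, 75, 0, -90, 60])
responses = np.array([22, -30, 68, 12, -65, 55])
yaw_traces = [rng.uniform(-30, 30, size=200) for _ in targets]

errors = localisation_error(targets, responses)
extents = head_rotation_extent(yaw_traces)
r, p = pearsonr(extents, errors)  # a negative r would mean wider head movements, smaller errors
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```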

2020 ◽  
Author(s):  
V. Gaveau ◽  
A. Coudert ◽  
R. Salemme ◽  
E. Koun ◽  
C. Desoche ◽  
...  

Abstract
In everyday life, localizing a sound source in the free field entails more than the sole extraction of monaural and binaural auditory cues to define its location in three dimensions (azimuth, elevation, and distance). In spatial hearing, we also take into account all the available visual information (e.g., cues to sound position, cues to the structure of the environment), and we resolve perceptual ambiguities through active listening behavior, exploring the auditory environment with head and/or body movements. Here we introduce a novel approach to sound localization in 3D named SPHERE (European patent no. WO2017203028A1), which exploits a commercially available virtual reality head-mounted display with real-time kinematic tracking to combine all of these elements: controlled positioning of a real sound source, recording of participants' responses in 3D, controlled visual stimulation, and active listening behavior. We show that SPHERE allows accurate sampling of the 3D spatial hearing abilities of normal-hearing adults, and that it can detect and quantify the contribution of active listening. Specifically, comparing static versus free head motion during sound emission, we found improved sound localization accuracy and precision. By combining visual virtual reality, real-time kinematic tracking, and real-sound delivery, we have achieved a novel approach to the study of spatial hearing, with the potential to capture real-life behaviors under laboratory conditions. Furthermore, our new approach paves the way for clinical and industrial applications that leverage the active listening and multisensory stimulation intrinsic to SPHERE, for the purposes of rehabilitation and product assessment.
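Recording 3D responses and sound positions, as SPHERE does, implies converting tracked world coordinates into head-centred azimuth, elevation, and distance before computing localization errors. The sketch below illustrates one such conversion; it is not the SPHERE implementation, and the coordinate convention (x forward, y left, z up), function names, and sample values are assumptions made for illustration only.

```python
import numpy as np

def to_head_frame(point_world, head_pos, head_rot):
    """Express a world-frame point in head-centred coordinates.
    head_rot is a 3x3 rotation matrix mapping head axes to world axes."""
    return head_rot.T @ (np.asarray(point_world) - np.asarray(head_pos))

def spherical(p):
    """Azimuth (deg, + left), elevation (deg, + up), and distance (m) of a head-centred point.
    Assumed convention: x forward, y left, z up."""
    x, y, z = p
    dist = np.linalg.norm(p)
    az = np.degrees(np.arctan2(y, x))
    el = np.degrees(np.arcsin(z / dist))
    return az, el, dist

# Hypothetical single trial: target speaker position vs. participant's 3D response.
head_pos = np.array([0.0, 0.0, 1.2])
head_rot = np.eye(3)                      # head facing straight ahead
target = np.array([1.0, 0.3, 1.3])
response = np.array([1.0, 0.1, 1.25])

az_t, el_t, d_t = spherical(to_head_frame(target, head_pos, head_rot))
az_r, el_r, d_r = spherical(to_head_frame(response, head_pos, head_rot))
print(f"azimuth error {az_r - az_t:.1f} deg, elevation error {el_r - el_t:.1f} deg, "
      f"distance error {d_r - d_t:.2f} m")
```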


1999 ◽  
Vol 58 (3) ◽  
pp. 170-179 ◽  
Author(s):  
Barbara S. Muller ◽  
Pierre Bovet

Twelve blindfolded subjects localized two different pure tones, played in random order from eight sound sources in the horizontal plane. Subjects either could or could not use information supplied by their pinnae (external ears) and by their head movements. We found that both the pinnae and head movements had a marked influence on auditory localization performance with this type of sound. The effects of pinnae and head movements appeared to be additive: the absence of either factor produced the same loss of localization accuracy and much the same error pattern. Head-movement analysis showed that subjects turned their face toward the emitting sound source, except for sources located exactly in front or exactly behind, which were identified by turning the head to both sides. Head-movement amplitude increased smoothly as the sound source moved from the anterior to the posterior quadrant.
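One simple way to quantify the described pattern is to relate the final head yaw on each trial to the angular position of each of the eight sources. The sketch below is illustrative only: the source spacing follows the eight-loudspeaker layout described above, but the yaw values and the two derived measures (facing error, movement amplitude) are hypothetical and not taken from the study.

```python
import numpy as np

# Eight sources spaced 45 degrees apart in the horizontal plane (0 = straight ahead).
source_az = np.arange(0, 360, 45)

def wrap180(a):
    """Wrap angles to the range [-180, 180) degrees."""
    return (np.asarray(a, dtype=float) + 180.0) % 360.0 - 180.0

# Hypothetical final head yaws for one subject, one trial per source (degrees).
final_yaw = np.array([2, 40, 85, 110, 150, -120, -80, -38])

facing_error = wrap180(final_yaw - source_az)   # how far the face ends from the source
amplitude = np.abs(wrap180(final_yaw))          # head-movement amplitude from straight ahead
for az, err, amp in zip(source_az, facing_error, amplitude):
    print(f"source {az:3d} deg: facing error {err:6.1f} deg, amplitude {amp:5.1f} deg")
```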


2020 ◽  
Vol 24 ◽  
pp. 233121652094839 ◽  
Author(s):  
Virginia Best ◽  
Robert Baumgartner ◽  
Mathieu Lavandier ◽  
Piotr Majdak ◽  
Norbert Kopčo

Sound externalization, or the perception that a sound source is outside of the head, is an intriguing phenomenon that has long interested psychoacousticians. While previous reviews are available, the past few decades have produced a substantial amount of new data. In this review, we aim to synthesize those data and to summarize advances in our understanding of the phenomenon. We also discuss issues related to the definition and measurement of sound externalization and describe quantitative approaches that have been taken to predict the outcomes of externalization experiments. Last, sound externalization is of practical importance for many kinds of hearing technologies. Here, we touch on two examples, discussing the role of sound externalization in augmented/virtual reality systems and bringing attention to the somewhat overlooked issue of sound externalization in wearers of hearing aids.


2020 ◽  
Vol 25 (3) ◽  
pp. 643-661
Author(s):  
Belén Agulló ◽  
Anna Matamala

Immersive content has become a popular medium for storytelling. This type of content is typically accessed via a head-mounted display, which places the viewer at the center of the action with the freedom to look around and explore the scene. The criteria for subtitle position in immersive media still need to be defined. Guiding mechanisms are necessary for circumstances in which the speakers are not visible and viewers, lacking an audio cue, require visual information to guide them through the virtual scene. The aim of this reception study is to compare different subtitling strategies: always-visible versus fixed-position subtitles, and arrows versus a radar as guiding mechanisms. To do this, feedback on preferences, immersion (using the IPQ presence questionnaire), and head movements was gathered from 40 participants (20 hearing and 20 hard of hearing). Results show that always-visible subtitles with arrows are the preferred option. Always-visible subtitles and arrows also achieved higher IPQ scores than fixed-position subtitles and the radar. Head-movement patterns show that participants move more freely when subtitles are always visible than when they are in a fixed position, suggesting that the experience is more realistic with always-visible subtitles because viewers do not feel constrained by the subtitle implementation.
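The claim that viewers "move more freely" under one subtitle condition can be operationalized as a larger spread of head yaw across the viewing session. The snippet below is a toy sketch of that comparison and is not the study's analysis: the condition labels, the spread measure (standard deviation of yaw), the statistical test, and all numbers are assumptions introduced for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def yaw_spread(yaw_deg):
    """Proxy for visual exploration: standard deviation of head yaw over a session."""
    return float(np.std(yaw_deg))

# Hypothetical per-participant yaw traces (degrees) for each subtitle condition.
rng = np.random.default_rng(0)
always_visible = [rng.normal(0, 35, 2000) for _ in range(20)]
fixed_position = [rng.normal(0, 15, 2000) for _ in range(20)]

spread_av = [yaw_spread(t) for t in always_visible]
spread_fp = [yaw_spread(t) for t in fixed_position]
u, p = mannwhitneyu(spread_av, spread_fp, alternative="greater")
print(f"median spread: always-visible {np.median(spread_av):.1f} deg, "
      f"fixed {np.median(spread_fp):.1f} deg (U={u:.0f}, p={p:.3g})")
```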


2017 ◽  
Author(s):  
Nicholas A. Del Grosso ◽  
Justin J. Graboski ◽  
Weiwei Chen ◽  
Eduardo Blanco-Hernández ◽  
Anton Sirota

ABSTRACT
Spatial navigation, active sensing, and most cognitive functions rely on a tight link between motor output and sensory input. Virtual reality (VR) systems simulate the sensorimotor loop, allowing flexible manipulation of enriched sensory input. Conventional rodent VR systems provide 3D visual cues linked to restrained locomotion on a treadmill, leading to a mismatch between visual and most other sensory inputs, sensory-motor conflicts, as well as restricted naturalistic behavior. To rectify these limitations, we developed a VR system (ratCAVE) that provides realistic and low-latency visual feedback directly to head movements of completely unrestrained rodents. Immersed in this VR system, rats displayed naturalistic behavior by spontaneously interacting with and hugging virtual walls, exploring virtual objects, and avoiding virtual cliffs. We further illustrate the effect of ratCAVE-VR manipulation on hippocampal place fields. The newly developed methodology enables a wide range of experiments involving flexible manipulation of visual feedback in freely moving, behaving animals.
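At its core, a head-tracked VR system of this kind runs a tight loop that reads the animal's tracked head pose each frame and re-renders the scene from that pose, keeping motion-to-photon latency low. The skeleton below illustrates that loop; it is not the ratCAVE API, and the stand-in functions (read_head_pose, render), the simple yaw-only view matrix, and the frame-rate target are assumptions made for illustration.

```python
import time
import numpy as np

def read_head_pose():
    """Stand-in for a motion-capture query: returns head position (m) and yaw (rad)."""
    t = time.perf_counter()
    return np.array([0.1 * np.sin(t), 0.1 * np.cos(t), 0.05]), 0.2 * np.sin(t)

def view_matrix(position, yaw):
    """Build a simple world-to-head view matrix from the tracked pose (yaw about z only)."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    view = np.eye(4)
    view[:3, :3] = rot.T
    view[:3, 3] = -rot.T @ position
    return view

def render(view):
    """Stand-in for drawing the virtual scene from the head's viewpoint."""
    pass

# Frame loop: the delay from pose read to render determines how closed the loop feels.
for _ in range(5):
    pos, yaw = read_head_pose()
    render(view_matrix(pos, yaw))
    time.sleep(1 / 240)  # target a high frame rate to keep motion-to-photon latency low
```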


2004 ◽  
Vol 63 (3) ◽  
pp. 143-149 ◽  
Author(s):  
Fred W. Mast ◽  
Charles M. Oman

The role of top-down processing in the horizontal-vertical line-length illusion was examined by means of an ambiguous room with dual visual verticals. In one of the test conditions, subjects were cued to one of the two verticals and were instructed to cognitively reassign the apparent vertical to the cued orientation. Once they had mentally adjusted their perception, two lines in a plus-sign configuration appeared, and the subjects had to judge which line was longer. The results showed that a line appeared longer when it was aligned with the direction of the vertical currently perceived by the subject. This study provides a demonstration that top-down processing influences lower-level visual processing mechanisms. In another test condition, in which the subjects had all perceptual cues available, the influence was even stronger.


2013 ◽  
Author(s):  
Susanne Mayr ◽  
Gunnar Regenbrecht ◽  
Kathrin Lange ◽  
Albertgeorg Lang ◽  
Axel Buchner

2013 ◽  
Author(s):  
Agoston Torok ◽  
Daniel Mestre ◽  
Ferenc Honbolygo ◽  
Pierre Mallet ◽  
Jean-Marie Pergandi ◽  
...  
