Context-dependent trading of binaural spatial cues in virtual reality

2019 ◽  
Vol 145 (3) ◽  
pp. 1871-1871
Author(s):  
Travis M. Moore ◽  
G. Christopher Stecker


2021 ◽  
Vol 2 ◽  
Author(s):  
Thirsa Huisman ◽  
Axel Ahrens ◽  
Ewen MacDonald

To reproduce realistic audio-visual scenarios in the laboratory, Ambisonics is often used to reproduce a sound field over loudspeakers while virtual reality (VR) glasses present visual information. Both technologies have been shown to be suitable for research. However, combining them might affect the spatial cues for auditory localization and thus the localization percept. Here, we investigated how VR glasses affect the localization of virtual sound sources on the horizontal plane produced using 1st-, 3rd-, 5th-, or 11th-order Ambisonics, with and without visual information. Results showed that the localization error was larger with 1st-order Ambisonics than with the higher orders, while the differences across the higher orders were small. The physical presence of the VR glasses without visual information increased the perceived lateralization of the auditory stimuli by about 2° on average, especially in the right hemisphere. Presenting visual information about the environment and potential sound sources reduced this shift induced by the head-mounted display (HMD), but could not fully compensate for it. While localization performance itself was affected by the Ambisonics order, there was no interaction between the Ambisonics order and the effect of the HMD. Thus, the presence of VR glasses can alter acoustic localization when using Ambisonics sound reproduction, but visual information can compensate for most of the effects. As such, most use cases for VR will be unaffected by these shifts in the perceived location of the auditory stimuli.
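The Ambisonics-order effect described above can be illustrated with a minimal sketch. Assuming a uniform circular loudspeaker array and a basic sampling decoder for horizontal-only (2D) Ambisonics, the panning gains for a plane-wave source follow a Dirichlet-like kernel: higher orders concentrate energy around the true source direction, which is consistent with the smaller localization error reported for 3rd order and above. The function name, array layout, and order choices here are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

def ambisonic_gains(src_az, spk_az, order):
    """Loudspeaker gains for a plane-wave source using horizontal-only
    Ambisonics with a basic sampling decoder on a uniform circular array.
    src_az: source azimuth in radians; spk_az: speaker azimuths in radians."""
    diff = spk_az - src_az
    n = len(spk_az)
    # Zeroth-order (omnidirectional) component, shared across speakers.
    g = np.ones_like(spk_az) / n
    # Each additional order adds a cosine harmonic of the angle difference,
    # narrowing the panning lobe around the true direction.
    for m in range(1, order + 1):
        g += 2.0 * np.cos(m * diff) / n
    return g

# 16 loudspeakers spaced uniformly on a circle.
spk = np.arange(16) * 2 * np.pi / 16
g1 = ambisonic_gains(np.deg2rad(30), spk, order=1)    # broad lobe
g11 = ambisonic_gains(np.deg2rad(30), spk, order=11)  # narrow lobe
```

With this decoder both orders preserve the total (pressure) gain, but the 11th-order gains are sharply peaked at the loudspeaker nearest 30°, whereas the 1st-order gains spread energy widely across the array, consistent with poorer localization at low order.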


2010 ◽  
Vol 14 (2) ◽  
pp. 269-277 ◽  
Author(s):  
Katherine Herborn ◽  
Lucille Alexander ◽  
Kathryn E. Arnold

Author(s):  
Yeon Soon Shin ◽  
Rolando Masís-Obando ◽  
Neggin Keshavarzian ◽  
Riya Davé ◽  
Kenneth A. Norman

The context-dependent memory effect, in which memory for an item is better when the retrieval context matches the original learning context, has proved to be difficult to reproduce in a laboratory setting. In an effort to identify a set of features that generate a robust context-dependent memory effect, we developed a paradigm in virtual reality using two semantically distinct virtual contexts: underwater and Mars environments, each with a separate body of knowledge (schema) associated with it. We show that items are better recalled when retrieved in the same context as the study context; we also show that the size of the effect is larger for items deemed context-relevant at encoding, suggesting that context-dependent memory effects may depend on items being integrated into an active schema.


Author(s):  
Tycho T. de Back ◽  
Angelica M. Tinga ◽  
Phong Nguyen ◽  
Max M. Louwerse

How can the learning of complex subjects be made engaging, motivating, and effective? The use of immersive virtual reality offers exciting, yet largely unexplored, solutions to this problem. Taking neuroanatomy as an example of a visually and spatially complex subject, the present study investigated whether academic learning using a state-of-the-art Cave Automatic Virtual Environment (CAVE) yielded higher learning gains than conventional textbooks. The study leveraged a combination of CAVE benefits, including collaborative learning, rich spatial information, embodied interaction, and gamification. Results indicated significantly higher learning gains, with large effect sizes, after collaborative learning in the CAVE compared to a textbook condition. Furthermore, low spatial ability learners benefitted most from the strong spatial cues provided by immersive virtual reality, effectively raising their performance to that of high spatial ability learners. The present study serves as a concrete example of the effective design and implementation of virtual reality in CAVE settings, demonstrating learning gains and thus opening opportunities for more pervasive use of immersive technologies in education. In addition, the study illustrates how immersive learning may provide novel scaffolds to increase performance in those who need it most.


Author(s):  
Jason A. Parker ◽  
Alexandra D. Kaplan ◽  
William G. Volante ◽  
Julian Abich ◽  
Valerie K. Sims

A virtual reality (VR) training system's effectiveness is determined by how well the knowledge and skills gained in the virtual environment transfer to real-world performance. The purpose of this study was to examine the efficacy of virtual reality training by comparing semantic memorization in congruent (e.g., memorization task in VR and recognition task in VR) versus incongruent environments (e.g., memorization task in VR and recognition task in the real world). In the present study, we partially replicated Godden and Baddeley's 1980 study on context-dependent recognition memory by using a photorealistic virtual reality environment in place of the underwater scuba environment. Results revealed that participants who learned semantic information in the virtual environment performed well on the memory recognition task in the physical, real-world environment (and vice versa). These findings replicate and extend Godden and Baddeley's original results and provide evidence for the use of VR training to support semantic-based knowledge transfer.


2021 ◽  
Vol 11 (23) ◽  
pp. 11510
Author(s):  
Hannah Park ◽  
Nafiseh Faghihi ◽  
Manish Dixit ◽  
Jyotsna Vaid ◽  
Ann McNamara

Emerging technologies offer the potential to expand the domain of the future workforce to extreme environments, such as outer space and alien terrains. To understand how humans navigate in environments that lack familiar spatial cues, this study examined spatial perception in three types of environments, simulated using virtual reality. We examined participants' ability to estimate the size and distance of stimuli under conditions of minimal, moderate, or maximum visual cues, corresponding to an environment simulating outer space, an alien terrain, or a typical cityscape, respectively. The findings show underestimation of distance in both the maximum and the minimum visual-cue environments but a tendency toward overestimation of distance in the moderate environment. We further observed that depth estimation was substantially better in the minimum environment than in the other two environments. However, estimation of height was more accurate in the environment with maximum cues (cityscape) than in the environment with minimum cues (outer space). More generally, our results suggest that familiar visual cues facilitated better estimation of size and distance than unfamiliar cues. In fact, the presence of unfamiliar, and perhaps misleading, visual cues (characterizing the alien terrain environment) was more disruptive for distance and size perception than a total absence of visual cues. The findings have implications for training workers to better adapt to extreme environments.

