Comparison of sound localization performance between virtual and real three-dimensional immersive sound field

2009 ◽  
Vol 30 (3) ◽  
pp. 216-219 ◽  
Author(s):  
Dae-Gee Kang ◽  
Yukio Iwaya ◽  
Ryota Miyauchi ◽  
Yôiti Suzuki


1984 ◽ 
Vol 52 (5) ◽  
pp. 819-847 ◽  
Author(s):  
W. M. Jenkins ◽  
M. M. Merzenich

Small lesions designed to completely destroy the cortical zone of representation of a restricted band of frequency were introduced within the primary auditory cortex (AI) in adult cats. Physiological mapping was used to guide placement of lesions. Sound-localization performance was evaluated prior to and after induction of these lesions in a seven-choice free-sound-field apparatus. All tested cats had profound contralateral hemifield deficits for the localization of brief tones at frequencies roughly corresponding to those whose representations were destroyed by the lesion. Sound-localization performance was normal at all other test frequencies. In a single adult cat, a massive lesion destroyed nearly all auditory cortex unilaterally, with only the representation of a narrow band of frequency within AI spared by the lesion. This cat had normal abilities for azimuthal sound localization across that frequency band but a profound contralateral deficit for the azimuthal localization of brief sounds at all other frequencies. Recorded sound-localization deficits were permanent. Localization of long-duration tones was not affected by a unilateral AI lesion. These studies indicate that, at least in cats, AI is necessary for normal binaural sound-localization behavior; among auditory cortical fields, AI is sufficient for normal binaural sound-localization behavior; sound-location representation is organized by frequency channel in the auditory forebrain; and AI in each hemisphere contributes to only contralateral free-sound-field location representation.


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3674 ◽  
Author(s):  
Wei Lu ◽  
Yu Lan ◽  
Rongzhen Guo ◽  
Qicheng Zhang ◽  
Shichang Li ◽  
...  

A spiral sound wave transducer comprising longitudinal vibrating elements has been proposed. The transducer was made from eight uniformly, radially distributed longitudinal vibrating elements, which could effectively generate low-frequency underwater acoustic spiral waves. We discuss the production theory of spiral sound waves, which can be synthesized from two orthogonal acoustic dipoles with a phase difference of 90 degrees. The excitation voltage distribution of the transducer for emitting a spiral sound wave and the measurement method for the transducer are given. A three-dimensional finite element model (FEM) of the transducer was established to simulate the vibration modes and the acoustic characteristics of the transducer. Further, we fabricated a spiral sound wave transducer based on our design and simulations. It was found that the resonance frequency of the transducer was 10.8 kHz and that the transmitting voltage response at resonance was 140.5 dB. The underwater sound field measurements demonstrate that our designed transducer based on the longitudinal elements could successfully generate spiral sound waves.
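The dipole-synthesis idea above can be sketched numerically. The eight-element layout and the 90° phase difference between orthogonal dipoles come from the abstract; the unit drive amplitudes are illustrative, not the transducer's actual excitation voltages.

```python
import numpy as np

# Eight uniformly spaced radial elements; theta[i] is element i's azimuth.
n_elements = 8
theta = 2 * np.pi * np.arange(n_elements) / n_elements

# Dipole 1 weights the elements by cos(theta); dipole 2 by sin(theta),
# driven 90 degrees later in time. Summing the two gives a complex drive
# cos(theta) + 1j*sin(theta) = exp(1j*theta): every element is driven at
# equal amplitude with a phase equal to its own azimuth, which is the
# first-order spiral (vortex) excitation.
drive = np.cos(theta) + 1j * np.sin(theta)

phases_deg = np.degrees(np.angle(drive)) % 360
print(phases_deg)  # [  0.  45.  90. 135. 180. 225. 270. 315.]
```

In other words, the 90° temporal offset between the two spatial dipole patterns converts them into a single rotating phase ramp around the ring of elements.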


Acta Acustica ◽  
2020 ◽  
Vol 5 ◽  
pp. 3
Author(s):  
Aida Hejazi Nooghabi ◽  
Quentin Grimal ◽  
Anthony Herrel ◽  
Michael Reinwald ◽  
Lapo Boschi

We implement a new algorithm to model acoustic wave propagation through and around a dolphin skull, using the k-Wave software package [1]. The equation of motion is integrated numerically in a complex three-dimensional structure via a pseudospectral scheme which, importantly, accounts for lateral heterogeneities in the mechanical properties of bone. Modeling wave propagation in the skull of dolphins contributes to our understanding of how their sound localization and echolocation mechanisms work. Dolphins are known to be highly effective at localizing sound sources; in particular, they have been shown to be equally sensitive to changes in the elevation and azimuth of the sound source, while other studied species, e.g. humans, are much more sensitive to the latter than to the former. A laboratory experiment conducted by our team on a dry skull [2] has shown that sound reverberated in bones could possibly play an important role in enhancing localization accuracy, and it has been speculated that the dolphin sound localization system could somehow rely on the analysis of this information. We employ our new numerical model to simulate the response of the same skull used by [2] to sound sources at a wide and dense set of locations on the vertical plane. This work is the first step towards the implementation of a new tool for modeling source (echo)location in dolphins; in future work, this will allow us to effectively explore a wide variety of emitted signals and anatomical features.
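The pseudospectral scheme the abstract refers to computes spatial derivatives in the Fourier domain rather than with finite differences. The following is a minimal one-dimensional illustration of that idea, not the k-Wave API or the authors' three-dimensional heterogeneous model.

```python
import numpy as np

def spectral_derivative(u, dx):
    """Differentiate a periodic signal spectrally: d/dx <-> multiply by i*k."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# On a periodic grid the derivative of sin(x) is recovered as cos(x)
# to near machine precision, which is why pseudospectral methods need
# far fewer grid points per wavelength than finite differences.
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
du = spectral_derivative(np.sin(x), x[1] - x[0])
```

In a full wave solver, such spectral derivatives are applied to the pressure and velocity fields at each time step, with the material properties (here, the heterogeneous bone) entering as spatially varying coefficients.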


Author(s):  
E. Fanina

A set of experimental studies was carried out to determine the acoustic characteristics of three-dimensional panels of fixed thickness, made of a carbon-based composite material and installed in the opening between two reverberation chambers. Sound insulation indices are determined when the panels are excited by a diffuse sound field over a wide frequency range. The reverberation time in model chambers with different partition configurations is calculated, and the optimal configuration of the partition with pyramidal cells for reducing the reverberation time in the rooms is determined. The use of graphite in the form of a thin membrane applied to various surfaces can significantly reduce sound pressure levels in a room and increase the airborne sound insulation indices. Beyond thin membranes, graphite can also be used as an additive in composite materials for sound insulation purposes. It is shown that the characteristics of such panels are quite universal. The measured acoustic characteristics of the composite panels are compared with those of traditional materials. The composition belongs to fire-retardant efficiency group I and can be recommended for use as a fire-retardant material. The developed acoustic material is an effective absorber that addresses problems in architectural acoustics and echo suppression in construction and architecture. Like metamaterials, natural and artificial graphites solve these problems with small volumes and masses using simple and inexpensive technologies.
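Reverberation-time calculations of the kind mentioned above are commonly based on the classical Sabine estimate. This is a generic sketch of that formula; the chamber volume and absorption coefficients below are illustrative, not values from the study.

```python
def rt60_sabine(volume_m3, surface_areas_m2, absorption_coeffs):
    """Sabine estimate: RT60 = 0.161 * V / A, where A is the total
    equivalent absorption area (sum of surface area times its
    absorption coefficient)."""
    A = sum(s * a for s, a in zip(surface_areas_m2, absorption_coeffs))
    return 0.161 * volume_m3 / A

# Example: a 50 m^3 chamber with one strongly absorbing panel (10 m^2,
# alpha = 0.8) and otherwise hard walls (70 m^2, alpha = 0.05).
rt = rt60_sabine(50.0, [10.0, 70.0], [0.8, 0.05])
print(round(rt, 2))  # 0.7 (seconds)
```

Raising the panel's absorption coefficient directly lowers RT60, which is why adding an absorbing partition shortens the reverberation time of the chamber.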


2021 ◽  
Vol 2 ◽  
Author(s):  
Thirsa Huisman ◽  
Axel Ahrens ◽  
Ewen MacDonald

To reproduce realistic audio-visual scenarios in the laboratory, Ambisonics is often used to reproduce a sound field over loudspeakers, and virtual reality (VR) glasses, i.e., a head-mounted display (HMD), are used to present visual information. Both technologies have been shown to be suitable for research. However, combining Ambisonics and VR glasses might affect the spatial cues for auditory localization and thus the localization percept. Here, we investigated how VR glasses affect the localization of virtual sound sources on the horizontal plane produced using 1st-, 3rd-, 5th-, or 11th-order Ambisonics, with and without visual information. Results showed that the localization error is larger with 1st-order Ambisonics than with the higher orders, while the differences across the higher orders were small. The physical presence of the VR glasses without visual information increased the perceived lateralization of the auditory stimuli by, on average, about 2°, especially in the right hemisphere. Presenting visual information about the environment and potential sound sources reduced this HMD-induced shift but could not fully compensate for it. While localization performance itself was affected by the Ambisonics order, there was no interaction between the Ambisonics order and the effect of the HMD. Thus, the presence of VR glasses can alter acoustic localization when using Ambisonics sound reproduction, but visual information can compensate for most of the effects. As such, most use cases for VR will be unaffected by these shifts in the perceived location of the auditory stimuli.
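For readers unfamiliar with Ambisonics orders, the 1st-order condition above corresponds to encoding each source into just four spherical-harmonic channels (three on the horizontal plane). A minimal sketch of horizontal first-order B-format encoding follows; normalization conventions (SN3D vs. N3D, W scaling) vary between systems, and the W = s/√2 scaling here is one common choice, not necessarily the study's.

```python
import numpy as np

def encode_foa_horizontal(signal, azimuth_rad):
    """Encode a mono signal at a horizontal azimuth into first-order
    B-format: W (omni), X (front-back dipole), Y (left-right dipole)."""
    w = signal / np.sqrt(2.0)          # omnidirectional channel
    x = signal * np.cos(azimuth_rad)   # cosine (front-back) dipole
    y = signal * np.sin(azimuth_rad)   # sine (left-right) dipole
    return w, x, y

# Source at 90 degrees (hard left): all energy goes to the Y dipole.
s = np.ones(4)  # dummy signal
w, x, y = encode_foa_horizontal(s, np.pi / 2)
```

Higher orders add further harmonics (cos 2θ, sin 2θ, ...), sharpening the spatial resolution, which is consistent with the larger localization errors observed at 1st order.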


Author(s):  
Snandan Sharma ◽  
Waldo Nogueira ◽  
A. John van Opstal ◽  
Josef Chalupper ◽  
Lucas H. M. Mens ◽  
...  

Purpose Speech understanding in noise and horizontal sound localization are poor in most cochlear implant (CI) users with a hearing aid (bimodal stimulation). This study investigated the effect of static and less-extreme adaptive frequency compression in hearing aids on spatial hearing. By means of frequency compression, we aimed to restore high-frequency audibility, and thus improve sound localization and spatial speech recognition. Method Sound-detection thresholds, sound localization, and spatial speech recognition were measured in eight bimodal CI users, with and without frequency compression. We tested two compression algorithms: a static algorithm, which compressed frequencies beyond the compression knee point (160 or 480 Hz), and an adaptive algorithm, which aimed to compress only consonants, leaving vowels unaffected (adaptive knee-point frequencies from 736 to 2946 Hz). Results Compression yielded a strong audibility benefit (high-frequency thresholds improved by 40 and 24 dB for static and adaptive compression, respectively), but no meaningful improvement in localization performance (errors remained > 30°) or spatial speech recognition across all participants. Localization biases without compression (toward the hearing-aid and implant side for low- and high-frequency sounds, respectively) disappeared or reversed with compression. The audibility benefits provided to each bimodal user partially explained any individual improvements in localization performance; shifts in bias; and, for six out of eight participants, benefits in spatial speech recognition. Conclusions We speculate that limiting factors such as a persistent hearing asymmetry and a mismatch in spectral overlap prevent compression in bimodal users from improving sound localization. Therefore, the benefit in spatial release from masking with compression is likely due to a shift of attention to the ear with the better signal-to-noise ratio, facilitated by compression, rather than to improved spatial selectivity. Supplemental Material https://doi.org/10.23641/asha.16869485
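The static algorithm described above maps input frequencies above a knee point onto a narrower output range. A generic piecewise-linear sketch of that mapping follows; the 480 Hz knee point is one of the study's two settings, but the 2:1 compression ratio is an illustrative assumption, not a parameter reported in the abstract.

```python
def compress_frequency(f_hz, knee_hz=480.0, ratio=2.0):
    """Static frequency compression: frequencies at or below the knee
    point pass unchanged; frequencies above it are compressed toward
    the knee by a fixed ratio."""
    if f_hz <= knee_hz:
        return f_hz
    return knee_hz + (f_hz - knee_hz) / ratio

print(compress_frequency(300.0))   # 300.0 (below the knee: unchanged)
print(compress_frequency(4000.0))  # 2240.0 (remapped into the audible range)
```

The intent is that high-frequency energy that falls in the user's dead region is relocated to frequencies where residual hearing remains, restoring audibility at the cost of spectral distortion.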


2014 ◽  
Vol 25 (09) ◽  
pp. 791-803 ◽  
Author(s):  
Evelyne Carette ◽  
Tim Van den Bogaert ◽  
Mark Laureyns ◽  
Jan Wouters

Background: Several studies have demonstrated negative effects of directional microphone configurations on left-right and front-back (FB) sound localization. New processing schemes, such as frequency-dependent directionality and front focus with wireless ear-to-ear communication in recent commercial hearing aids, may preserve the binaural cues necessary for left-right localization and may introduce useful spectral cues necessary for FB disambiguation. Purpose: In this study, two hearing aids with different processing schemes, both designed to preserve the ability to localize sounds in the horizontal plane (left-right and FB), were compared. Research Design: We compared horizontal (left-right and FB) sound localization performance of hearing aid users fitted with two types of behind-the-ear (BTE) devices. The first type of BTE device had four different programs that provided (1) no directionality, (2–3) symmetric frequency-dependent directionality, and (4) an asymmetric configuration. The second pair of BTE devices was evaluated in its omnidirectional setting, which automatically activates a soft forward-oriented directional scheme that mimics the pinna effect; wireless communication between the hearing aids was also present in this configuration (5). A broadband stimulus was used as the target signal. The directional hearing abilities of the listeners were also evaluated without hearing aids as a reference. Study Sample: A total of 12 listeners with moderate to severe hearing loss participated in this study. All were experienced hearing-aid users. As a reference, 11 listeners with normal hearing participated. Data Collection and Analysis: The participants were positioned in a 13-speaker array (left-right, –90°/+90°) or 7-speaker array (FB, 0–180°) and were asked to report the number of the loudspeaker located closest to where the sound was perceived. 
The root mean square error was calculated for the left-right experiment, and the percentage of FB errors was used as the FB performance measure. Results were analyzed with repeated-measures analysis of variance. Results: For the left-right localization task, no significant differences were found between the unaided condition and either partial directional scheme or the omnidirectional scheme. The soft forward-oriented system and the asymmetric system did show a detrimental effect compared with the unaided condition. On average, localization was worst in the asymmetric condition. Analysis of the results of the FB experiment showed good performance, similar to unaided, with both partial directional systems and the asymmetric configuration. Significantly worse performance was found with the omnidirectional and the soft forward-oriented omnidirectional BTE systems compared with the other hearing-aid systems. Conclusions: Bilaterally fitted partial directional systems preserve (part of) the binaural cues necessary for left-right localization and introduce, preserve, or enhance useful spectral cues that allow FB disambiguation. Omnidirectional systems, although good for left-right localization, do not provide the user with enough spectral information for optimal FB localization performance.
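The two performance measures described above are straightforward to compute. The sketch below shows both; the target/response data are made up for demonstration, and the front-back error criterion (target and response on opposite sides of 90° on the 0–180° arc) is a plausible reading of the setup, not a stated definition from the study.

```python
import numpy as np

def rms_error(target_deg, response_deg):
    """Root mean square localization error for the left-right task."""
    d = np.asarray(response_deg) - np.asarray(target_deg)
    return np.sqrt(np.mean(d ** 2))

def fb_error_percent(target_deg, response_deg):
    """Percentage of front-back confusions: response on the opposite
    side of 90 degrees from the target on the 0-180 degree arc."""
    t = np.asarray(target_deg)
    r = np.asarray(response_deg)
    return 100.0 * np.mean((t < 90) != (r < 90))

# Hypothetical trials: three left-right responses, four FB responses.
print(rms_error([-60, 0, 60], [-45, 0, 75]))                      # ~12.25
print(fb_error_percent([30, 150, 60, 120], [150, 150, 60, 30]))   # 50.0
```

RMS error penalizes large azimuth misses quadratically, while the FB measure counts only hemifield reversals, which is why the two tasks need separate metrics.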

