Accurate Sound Localization in Reverberant Environments Is Mediated by Robust Encoding of Spatial Cues in the Auditory Midbrain

Neuron, 2009, Vol 62 (1), pp. 123-134
Author(s): Sasha Devore, Antje Ihlefeld, Kenneth Hancock, Barbara Shinn-Cunningham, Bertrand Delgutte
2000, Vol 83 (4), pp. 2300-2314
Author(s): U. Koch, B. Grothe

To date, most physiological studies of binaural auditory processing have addressed the topic almost exclusively in the context of sound localization. However, there is strong psychophysical evidence that binaural processing serves more than sound localization alone. This raises the question of how binaural processing of spatial cues interacts with cues important for feature detection. The temporal structure of a sound is one such feature important for sound recognition. As a first approach, we investigated the influence of binaural cues on temporal processing in the mammalian auditory system. Here, we present evidence that binaural cues, namely interaural intensity differences (IIDs), have profound effects on the filter properties for stimulus periodicity of auditory midbrain neurons in the echolocating big brown bat, Eptesicus fuscus. Our data indicate that these effects are partially due to changes in the strength and timing of binaural inhibitory inputs. We measured filter characteristics for the periodicity (modulation frequency) of sinusoidally frequency-modulated (SFM) sounds under different binaural conditions. As criteria, we used the 50% cutoff frequencies of modulation transfer functions based on discharge rate as well as on the synchronization of discharges to the sound envelope. The binaural conditions were contralateral stimulation only, equal stimulation at both ears (IID = 0 dB), and stimulation more intense at the ipsilateral ear (IID = −20 or −30 dB). In 32% of neurons, the range of modulation frequencies the neurons responded to changed considerably between monaural and binaural (IID = 0 dB) stimulation. Moreover, in ∼50% of neurons the range of modulation frequencies was narrower when the ipsilateral ear was favored (IID = −20 dB) than with equal stimulation at both ears (IID = 0 dB). In ∼10% of neurons, synchronization differed across binaural conditions.
Blockade of GABAergic or glycinergic inputs to the recorded cells revealed that inhibitory inputs were at least partially responsible for the observed changes in SFM filtering; in 25% of the neurons, drug application abolished those changes. Experiments using electronically introduced interaural time differences showed that the strength of ipsilaterally evoked inhibition increased with increasing modulation frequency in one third of the cells tested. Thus, glycinergic and GABAergic inhibition is at least one source of the observed interdependence between the temporal structure of a sound and spatial cues.
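The rate-based criterion described above — the modulation frequency at which a neuron's discharge rate falls to 50% of its maximum — can be illustrated with a minimal Python sketch. The function name, the interpolation scheme, and the toy band-pass tuning values below are assumptions for illustration, not the authors' analysis code.

```python
import numpy as np

def rate_mtf_cutoff(mod_freqs, rates, criterion=0.5):
    """Estimate the 50% high-frequency cutoff of a rate-based
    modulation transfer function (MTF) by linear interpolation
    along its high-frequency flank."""
    mod_freqs = np.asarray(mod_freqs, dtype=float)  # Hz, ascending
    rates = np.asarray(rates, dtype=float)          # mean spikes/s
    threshold = criterion * rates.max()
    peak = rates.argmax()
    # Walk down the high-frequency flank and interpolate the
    # frequency at which the rate crosses the criterion level.
    for i in range(peak, len(rates) - 1):
        if rates[i] >= threshold >= rates[i + 1]:
            frac = (rates[i] - threshold) / (rates[i] - rates[i + 1])
            return mod_freqs[i] + frac * (mod_freqs[i + 1] - mod_freqs[i])
    return None  # no cutoff within the tested range

# Toy band-pass MTF peaking at 100 Hz
freqs = [10, 30, 100, 300, 1000]
rates = [5, 40, 80, 30, 2]
print(rate_mtf_cutoff(freqs, rates))  # ≈ 260 Hz
```

Comparing such cutoffs across binaural conditions (e.g., contralateral only vs. IID = 0 dB) is one way to quantify the filter shifts reported above.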


2015, Vol 114 (5), pp. 2991-3001
Author(s): Andrew D. Brown, Heath G. Jones, Alan Kan, Tanvi Thakkar, G. Christopher Stecker, et al.

Normal-hearing human listeners and a variety of animal species localize sound sources accurately in reverberant environments by responding to the directional cues carried by the first-arriving sound rather than to the spurious cues carried by later-arriving reflections, which are not perceived discretely. This phenomenon is known as the precedence effect (PE) in sound localization. Despite decades of study, the biological basis of the PE remains unclear. Although the PE was once widely attributed to central processes such as synaptic inhibition in the auditory midbrain, a more recent hypothesis holds that it may arise essentially as a by-product of normal cochlear function. Here we evaluated the PE in a unique human patient population with demonstrated sensitivity to binaural information but without functional cochleae. Users of bilateral cochlear implants (CIs) were tested in a psychophysical task that assessed the number and location(s) of auditory images perceived for simulated source-echo (lead-lag) stimuli. A parallel experiment was conducted in a group of normal-hearing (NH) listeners. Key findings were as follows: 1) Subjects in both groups exhibited lead-lag fusion. 2) Fusion was marginally weaker in CI users than in NH listeners but could be augmented by systematically attenuating the amplitude of the lag stimulus to coarsely simulate the adaptation observed in acoustically stimulated auditory nerve fibers. 3) Dominance of the lead in localization varied substantially among both NH and CI subjects but was evident in both groups. Taken together, the data show that aspects of the PE can be elicited in CI users, who lack functional cochleae, indicating that neural mechanisms are sufficient to produce the PE.
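Lead-lag stimuli of the kind described above pair a "source" click with a delayed, optionally attenuated "echo" click. A minimal sketch of such a stimulus generator follows; the function name, parameter names, and default values are illustrative assumptions, not the study's actual stimulus parameters.

```python
import numpy as np

def lead_lag_pair(fs=44100, lag_delay_ms=4.0, lag_atten_db=0.0,
                  click_len=8, dur_ms=20.0):
    """Build a monaural lead-lag click pair: a 'lead' click followed
    by a delayed, optionally attenuated 'lag' click (simulated echo).
    Attenuating the lag coarsely mimics auditory-nerve adaptation."""
    n = int(fs * dur_ms / 1000)
    x = np.zeros(n)
    click = np.ones(click_len)
    lag_start = int(fs * lag_delay_ms / 1000)
    gain = 10 ** (-lag_atten_db / 20)  # dB attenuation -> linear gain
    x[:click_len] += click                               # lead click
    x[lag_start:lag_start + click_len] += gain * click   # lag click
    return x

# Example: 4-ms echo delay, echo 6 dB quieter than the source
stim = lead_lag_pair(lag_delay_ms=4.0, lag_atten_db=6.0)
```

In a binaural experiment, lead and lag would additionally carry different interaural cues so that listeners can report where the fused image (or images) appears.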


2021, Vol 2 (2), pp. 146-163
Author(s): Sebastian Schneider, Paul Wilhelm Dierkes

Locating a vocalizing animal can be useful in many fields of bioacoustics and behavioral research and is usually done in the wild, over large areas. In zoos, however, applying this method is particularly difficult, because the animals occupy a relatively small area and because reverberation and background noise complicate the analysis. Nevertheless, by localizing and analyzing animal sounds, valuable information on physiological state, sex, subspecies, reproductive state, social status, and animal welfare can be gathered. We therefore developed sound localization software that estimates the position of a vocalizing animal precisely enough to assign each vocalization to the corresponding individual, even under difficult conditions. In this study, the accuracy and reliability of the software were tested under various conditions. Different vocalizations were played back through a loudspeaker and recorded with several microphones to verify the accuracy. In addition, tests were carried out under real conditions in the giant otter enclosure at Dortmund Zoo, Germany. The results show that the software can estimate the position of a sound source with high accuracy (median deviation: 0.234 m). This software could therefore contribute to basic research by determining positions and thereby differentiating individuals, and could be relevant in long-term applications for monitoring animal welfare in zoos.
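The abstract does not specify the localization algorithm, but multi-microphone localization is commonly built on time-difference-of-arrival (TDOA) estimates between microphone pairs, obtained from the peak of a cross-correlation. The sketch below shows that standard building block only, under that assumption; the function name and toy signals are invented for illustration.

```python
import numpy as np

def tdoa_samples(x1, x2):
    """Estimate the time difference of arrival (in samples) between
    two equal-length microphone signals via the peak of their
    cross-correlation. Positive result: sound reaches mic 1 first."""
    corr = np.correlate(x2, x1, mode="full")
    return corr.argmax() - (len(x1) - 1)

# Toy example: the same waveform arrives 7 samples later at mic 2
rng = np.random.default_rng(0)
sig = rng.standard_normal(64)
m1 = np.concatenate([sig, np.zeros(16)])
m2 = np.concatenate([np.zeros(7), sig, np.zeros(9)])
print(tdoa_samples(m1, m2))  # → 7
```

Converting sample delays to distance differences (delay / fs × speed of sound) for several microphone pairs yields hyperbolic constraints whose intersection gives the source position; reverberation and noise are what make this step hard in an enclosure.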


2019
Author(s): Daniel P. Kumpik, Connor Campbell, Jan W.H. Schnupp, Andrew J. King

Abstract: Sound localization requires the integration in the brain of auditory spatial cues generated by interactions with the external ears, head, and body. Perceptual learning studies have shown that the relative weighting of these cues can change in a context-dependent fashion if their relative reliability is altered. One factor that may influence this process is vision, which tends to dominate localization judgments when both modalities are present and induces a recalibration of auditory space if they become misaligned. It is not known, however, whether vision can alter the weighting of individual auditory localization cues. Using non-individualized head-related transfer functions, we measured changes in subjects’ sound localization biases and binaural localization cue weights after ~55 minutes of training on an audiovisual spatial oddball task. Four different configurations of spatial congruence between visual cues and the two auditory cues (interaural time differences (ITDs) and frequency-dependent interaural level differences (interaural level spectra, ILS)) were used. When visual cues were spatially congruent with both auditory spatial cues, we observed an improvement in sound localization, as shown by a reduction in the variance of subjects’ localization biases, accompanied by an up-weighting of the more salient ILS cue. However, if the position of either one of the auditory cues was randomized during training, no overall improvement in sound localization occurred. Nevertheless, the spatial gain of whichever cue was matched with vision increased, with different effects on the gain of the randomized cue depending on whether ITDs or ILS were matched with vision. As a result, we observed a similar up-weighting of ILS when this cue alone was matched with vision, but no overall change in binaural cue weighting when ITDs corresponded to the visual cues and ILS were randomized.
Consistently misaligning both cues with vision produced the ventriloquism aftereffect, i.e., a corresponding shift in auditory localization bias, without affecting either the variability of the subjects’ sound localization judgments or the overall binaural cue weighting. These data show that visual contextual information can invoke a reweighting of auditory localization cues, although concomitant improvements in sound localization are likely only when training uses fully congruent audiovisual information.
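Reliability-driven cue reweighting of the kind discussed above is often modeled as inverse-variance (maximum-likelihood) cue combination: the less variable cue receives the larger weight. The sketch below shows that standard model with made-up numbers; it is not the authors' analysis, and the function and variable names are assumptions.

```python
def reweighted_estimate(itd_est, itd_var, ils_est, ils_var):
    """Combine two cue-based location estimates (e.g., from ITDs and
    ILS) by inverse-variance weighting. Returns the combined estimate
    and the weight assigned to each cue."""
    w_itd = (1 / itd_var) / (1 / itd_var + 1 / ils_var)  # reliability share
    w_ils = 1 - w_itd
    return w_itd * itd_est + w_ils * ils_est, w_itd, w_ils

# Toy numbers: ILS is four times more reliable (variance 1 vs. 4 deg^2),
# so the combined estimate sits much closer to the ILS location.
est, w_itd, w_ils = reweighted_estimate(10.0, 4.0, 20.0, 1.0)
```

Randomizing one cue during training corresponds, in this framework, to inflating its variance, which predicts the up-weighting of the vision-matched cue reported above.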


2013, Vol 21 (6), pp. 1028-1033
Author(s): Huixian Mei, Zhijin Zhou, Qicai Chen

2010, Vol 103 (3), pp. 1209-1225
Author(s): Fernando R. Nodal, Oliver Kacelnik, Victoria M. Bajo, Jennifer K. Bizley, David R. Moore, et al.

The role of auditory cortex in sound localization and its recalibration by experience was explored by measuring the accuracy with which ferrets turned toward and approached the source of broadband sounds in the horizontal plane. In one group, large bilateral lesions were made of the middle ectosylvian gyrus, where the primary auditory cortical fields are located, and part of the anterior and/or posterior ectosylvian gyrus, which contain higher-level fields. In the second group, the lesions were intended to be confined to primary auditory cortex (A1). The ability of the animals to localize noise bursts of different duration and level was measured before and after the lesions were made. A1 lesions produced a modest disruption of approach-to-target responses to short-duration stimuli (<500 ms) on both sides of space, whereas head orienting accuracy was unaffected. More extensive lesions produced much greater auditory localization deficits, again primarily for shorter sounds. In these ferrets, the accuracy of both the approach-to-target behavior and the orienting responses was impaired, and they could do little more than correctly lateralize the stimuli. Although both groups of ferrets were still able to localize long-duration sounds accurately, they were, in contrast to ferrets with an intact auditory cortex, unable to relearn to localize these stimuli after altering the spatial cues available by reversibly plugging one ear. These results indicate that both primary and nonprimary cortical areas are necessary for normal sound localization, although only higher auditory areas seem to contribute to accurate head orienting behavior. They also show that the auditory cortex, and A1 in particular, plays an essential role in training-induced plasticity in adult ferrets, and that this is the case for both head orienting responses and approach-to-target behavior.

