Diverse processing underlying frequency integration in midbrain neurons of barn owls

2021 ◽  
Vol 17 (11) ◽  
pp. e1009569
Author(s):  
Julia C. Gorman ◽  
Oliver L. Tufte ◽  
Anna V. R. Miller ◽  
William M. DeBello ◽  
José L. Peña ◽  
...  

Emergent response properties of sensory neurons depend on circuit connectivity and somatodendritic processing. Neurons of the barn owl’s external nucleus of the inferior colliculus (ICx) display emergence of spatial selectivity. These neurons use interaural time difference (ITD) as a cue for the horizontal direction of sound sources. ITD is detected by upstream brainstem neurons with narrow frequency tuning, resulting in spatially ambiguous responses. This spatial ambiguity is resolved by ICx neurons integrating inputs over frequency, a processing step relevant for sound localization across species. Previous models have predicted that ICx neurons function as point neurons that linearly integrate inputs across frequency. However, the complex dendritic trees and spines of ICx neurons raise the question of whether this prediction is accurate. Data from in vivo intracellular recordings of ICx neurons were used to address this question. Results revealed diverse frequency integration properties: some ICx neurons showed responses consistent with the point-neuron hypothesis and others with nonlinear dendritic integration. Modeling showed that varied connectivity patterns and forms of dendritic processing may underlie the observed frequency integration in ICx neurons. These results corroborate the ability of neurons with complex dendritic trees to implement diverse linear and nonlinear integration of synaptic inputs, which is relevant for adaptive coding and learning and supports a fundamental mechanism in sound localization.
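The contrast between the two integration modes can be sketched with a toy model (hypothetical rates, weights, and a per-branch sigmoid nonlinearity; not the study's biophysical model): a point neuron sums its frequency-channel inputs linearly, whereas a dendritic nonlinearity breaks that linearity.

```python
import numpy as np

def point_neuron(inputs, weights):
    """Point-neuron hypothesis: the response is a weighted linear
    sum of inputs across frequency channels."""
    return float(np.dot(weights, inputs))

def dendritic_neuron(inputs, weights, threshold=0.5, slope=0.1):
    """Nonlinear alternative: each branch passes its input through a
    saturating sigmoid before the somatic sum (hypothetical form)."""
    branch_out = 1.0 / (1.0 + np.exp(-(np.asarray(inputs) - threshold) / slope))
    return float(np.dot(weights, branch_out))

# Firing rates of narrowly tuned upstream ITD detectors, one per channel.
rates = np.array([0.2, 0.9, 0.4])
w = np.ones(3)

linear = point_neuron(rates, w)         # scales linearly with the input
nonlinear = dendritic_neuron(rates, w)  # does not: doubling the rates
                                        # less than doubles the output
```

Doubling `rates` exactly doubles `linear` but not `nonlinear`; deviations from such scaling are the kind of signature that intracellular responses can be tested against.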

2006 ◽  
Vol 95 (6) ◽  
pp. 3571-3584 ◽  
Author(s):  
Matthew W. Spitzer ◽  
Terry T. Takahashi

We examined the accuracy and precision with which the barn owl (Tyto alba) turns its head toward sound sources under conditions that evoke the precedence effect (PE) in humans. Stimuli consisted of 25-ms noise bursts emitted from two sources, separated horizontally by 40°, and temporally by 3–50 ms. At delays from 3 to 10 ms, head turns were always directed at the leading source, and were nearly as accurate and precise as turns toward single sources, indicating that the leading source dominates perception. This lead dominance is particularly remarkable, first, because on some trials, the lagging source was significantly higher in amplitude than the lead, arising from the directionality of the owl's ears, and second, because the temporal overlap of the two sounds can degrade the binaural cues with which the owl localizes sounds. With increasing delays, the influence of the lagging source became apparent as the head saccades became increasingly biased toward the lagging source. Furthermore, on some of the trials at delays ≥20 ms, the owl turned its head, first, in the direction of one source, and then the other, suggesting that it was able to resolve two separately localizable sources. At all delays <50 ms, response latencies were longer for paired sources than for single sources. With the possible exception of response latency, these findings demonstrate that the owl exhibits precedence phenomena in sound localization similar to those in humans and cats, and provide a basis for comparison with neurophysiological data.


2013 ◽  
Vol 109 (4) ◽  
pp. 924-931 ◽  
Author(s):  
Caitlin S. Baxter ◽  
Brian S. Nelson ◽  
Terry T. Takahashi

Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi, Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.
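The two response rules the model combines — the in-field sound is louder than the competing one, and the averaged amplitude of both is rising — can be sketched as follows (illustrative envelopes, with a circular shift standing in for the echo delay; a minimal sketch, not the published implementation):

```python
import numpy as np

def map_drive(env_in, env_out):
    """Fraction of time steps at which a space-map neuron would be
    driven under the two rules described above: (1) the envelope of
    the sound in its receptive field exceeds the competing envelope,
    and (2) the average of the two envelopes is rising."""
    mean_env = 0.5 * (env_in + env_out)
    rising = np.diff(mean_env) > 0
    louder = env_in[1:] > env_out[1:]
    return float(np.mean(rising & louder))

t = np.linspace(0.0, 1.0, 200)
lead = np.abs(np.sin(2 * np.pi * 3 * t))  # leading source envelope
lag = np.roll(lead, 5) * 0.8              # delayed, attenuated echo
                                          # (circular shift as a crude delay)

drive_lead = map_drive(lead, lag)         # drive of the lead's map site
drive_lag = map_drive(lag, lead)          # drive of the echo's map site
```

With similar envelopes, the leading source's peaks satisfy both rules far more often, so its map representation dominates — consistent with the owls localizing the lead at short delays.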


2020 ◽  
Vol 123 (5) ◽  
pp. 1791-1807 ◽  
Author(s):  
Ryan Dorkoski ◽  
Kenneth E. Hancock ◽  
Gareth A. Whaley ◽  
Timothy R. Wohl ◽  
Noelle C. Stroud ◽  
...  

A “division of labor” has previously been assumed in which the directions of low- and high-frequency sound sources are thought to be encoded by neurons preferentially sensitive to low and high frequencies, respectively. Contrary to this, we found that auditory midbrain neurons encode the directions of both low- and high-frequency sounds regardless of their preferred frequencies. Neural responses were shaped by different sound localization cues depending on the stimulus spectrum—even within the same neuron.


2017 ◽  
Vol 118 (3) ◽  
pp. 1871-1887
Author(s):  
Philipp Tellers ◽  
Jessica Lehmann ◽  
Hartmut Führ ◽  
Hermann Wagner

Birds and mammals use the interaural time difference (ITD) for azimuthal sound localization. While barn owls can use the ITD of the stimulus carrier frequency over nearly their entire hearing range, mammals have to utilize the ITD of the stimulus envelope to extend the upper frequency limit of ITD-based sound localization. ITD is computed and processed in a dedicated neural circuit that consists of two pathways. In the barn owl, ITD representation is more complex in the forebrain than in the midbrain pathway because of the combination of two inputs that represent different ITDs. We speculated that one of the two inputs includes an envelope contribution. To estimate the envelope contribution, we recorded ITD response functions for correlated and anticorrelated noise stimuli in the barn owl’s auditory arcopallium. Our findings indicate that barn owls, like mammals, represent both carrier and envelope ITDs of overlapping frequency ranges, supporting the hypothesis that carrier and envelope ITD-based localization are complementary beyond a mere extension of the upper frequency limit. NEW & NOTEWORTHY The results presented in this study show for the first time that the barn owl is able to extract and represent the interaural time difference (ITD) information conveyed by the envelope of a broadband acoustic signal. Like mammals, the barn owl extracts the ITD of the envelope and the carrier of a signal from the same frequency range. These results are of general interest, since they reinforce a trend found in neural signal processing across different species.
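The logic of the correlated/anticorrelated comparison is that inverting one ear's carrier flips the carrier correlation while leaving the envelope untouched, so any ITD tuning that survives anticorrelation points to envelope processing. A minimal numpy sketch (FFT-based analytic-signal envelope; illustrative, not the recording analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Broadband noise carrier and its anticorrelated (sign-inverted) version.
noise = rng.standard_normal(4096)
anticorrelated = -noise

def envelope(x):
    """Magnitude of the analytic signal, built via the FFT
    (a minimal Hilbert-transform sketch; n is even here)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

# Inverting the carrier flips its correlation to -1 but leaves the
# envelope identical, so an envelope-ITD detector sees the same signal.
carrier_corr = float(np.dot(noise, anticorrelated) / np.dot(noise, noise))
env_match = float(np.max(np.abs(envelope(noise) - envelope(anticorrelated))))
```

Because the envelope is invariant under carrier inversion, responses to anticorrelated noise isolate the envelope contribution to the ITD response functions.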


eLife ◽  
2014 ◽  
Vol 3 ◽  
Author(s):  
Fanny Cazettes ◽  
Brian J Fischer ◽  
Jose L Pena

The robust representation of the environment from unreliable sensory cues is vital for the efficient function of the brain. However, how neural processing captures the most reliable cues is unknown. The interaural time difference (ITD) is the primary cue to localize sound in horizontal space. ITD is encoded in the firing rate of neurons that detect interaural phase difference (IPD). Due to the filtering effect of the head, IPD for a given location varies depending on the environmental context. We found that, in barn owls, at each location there is a frequency range where the head filtering yields the most reliable IPDs across contexts. Remarkably, the frequency tuning of space-specific neurons in the owl's midbrain varies with their preferred sound location, matching the range that carries the most reliable IPD. Thus, frequency tuning in the owl's space-specific neurons reflects a higher-order feature of the code that captures cue reliability.
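Cue reliability of this kind can be quantified with circular statistics: the vector strength of the IPDs one location produces across contexts, computed per frequency band. A toy sketch with hypothetical IPD samples (in the study the spread comes from measured head filtering, not invented numbers):

```python
import numpy as np

def vector_strength(phases):
    """Circular concentration of a set of phases (radians):
    1 = identical phases (perfectly reliable IPD),
    0 = uniformly scattered phases (uninformative IPD)."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

# Hypothetical IPDs for one source location measured across different
# acoustic contexts, binned by frequency (Hz).
ipd_by_freq = {
    2000.0: [0.5, 1.5, 2.5],   # broad scatter: unreliable cue
    5000.0: [1.0, 1.1, 0.9],   # tight cluster: reliable cue
    8000.0: [0.2, 2.2, 4.2],   # broad scatter: unreliable cue
}

# The most reliable frequency band for this location.
best_freq = max(ipd_by_freq, key=lambda f: vector_strength(ipd_by_freq[f]))
```

Under the study's finding, a space-specific neuron preferring this location would be tuned near `best_freq`, the band whose IPD is most consistent across contexts.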


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Aaron Benson Wong ◽  
J Gerard G Borst

The dorsal (DCIC) and lateral cortices (LCIC) of the inferior colliculus are major targets of the auditory and non-auditory cortical areas, suggesting a role in complex multimodal information processing. However, relatively little is known about their functional organization. We utilized in vivo two-photon Ca2+ imaging in awake mice expressing GCaMP6s in GABAergic or non-GABAergic neurons in the IC to investigate their spatial organization. We found different classes of temporal responses, which we confirmed with simultaneous juxtacellular electrophysiology. Both GABAergic and non-GABAergic neurons showed spatial microheterogeneity in their temporal responses. In contrast, a robust, double rostromedial-caudolateral gradient of frequency tuning was conserved between the two groups, and even among the subclasses. This, together with the existence of a subset of neurons sensitive to spontaneous movements, provides functional evidence for redefining the border between DCIC and LCIC.


Acta Acustica ◽  
2020 ◽  
Vol 5 ◽  
pp. 3
Author(s):  
Aida Hejazi Nooghabi ◽  
Quentin Grimal ◽  
Anthony Herrel ◽  
Michael Reinwald ◽  
Lapo Boschi

We implement a new algorithm to model acoustic wave propagation through and around a dolphin skull, using the k-Wave software package [1]. The equation of motion is integrated numerically in a complex three-dimensional structure via a pseudospectral scheme which, importantly, accounts for lateral heterogeneities in the mechanical properties of bone. Modeling wave propagation in the skull of dolphins contributes to our understanding of how their sound localization and echolocation mechanisms work. Dolphins are known to be highly effective at localizing sound sources; in particular, they have been shown to be equally sensitive to changes in the elevation and azimuth of the sound source, while other studied species, e.g. humans, are much more sensitive to the latter than to the former. A laboratory experiment conducted by our team on a dry skull [2] has shown that sound reverberating in bone could play an important role in enhancing localization accuracy, and it has been speculated that the dolphin sound localization system could somehow rely on the analysis of this information. We employ our new numerical model to simulate the response of the same skull used by [2] to sound sources at a wide and dense set of locations on the vertical plane. This work is the first step towards the implementation of a new tool for modeling source (echo)location in dolphins; in future work, this will allow us to effectively explore a wide variety of emitted signals and anatomical features.
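The core of a pseudospectral scheme — spectral spatial derivatives combined with explicit time stepping over a heterogeneous sound-speed map — can be illustrated in one dimension (toy grid, speeds, and source; a generic sketch, not the k-Wave implementation, which solves coupled first-order equations with a k-space correction):

```python
import numpy as np

# 1-D second-order wave equation u_tt = c(x)^2 u_xx, leapfrog in time,
# with FFT-based spatial derivatives and a fast "bone" slab embedded in
# slower "tissue". All numbers are illustrative only.
n, dx, dt, steps = 256, 1e-3, 1e-7, 400
x = np.arange(n) * dx
c = np.where((x > 0.1) & (x < 0.15), 3000.0, 1500.0)  # m/s

k = 2 * np.pi * np.fft.fftfreq(n, d=dx)               # spatial wavenumbers

def ddx(f):
    """Spectral first derivative: exact for band-limited fields."""
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

u = np.exp(-((x - 0.05) ** 2) / (2 * 0.005 ** 2))     # Gaussian pulse
u_prev = u.copy()                                     # zero initial velocity
for _ in range(steps):
    lap = ddx(ddx(u))                                 # u_xx, pointwise c^2
    u_next = 2 * u - u_prev + (c * dt) ** 2 * lap
    u_prev, u = u, u_next
```

The pulse splits, propagates, and partially reflects at the tissue/bone interface; the heterogeneous `c(x)` enters only as a pointwise factor, which is what makes laterally varying bone properties cheap to include in spectral schemes.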


2021 ◽  
Author(s):  
Hsin-Wei Lu ◽  
Philip H Smith ◽  
Philip Joris

Octopus cells are remarkable projection neurons of the mammalian cochlear nucleus, with extremely fast membranes and wide frequency tuning. They are considered prime examples of coincidence detectors but are poorly characterized in vivo. We discover that octopus cells are selective for frequency sweep direction, a feature that is absent in their auditory nerve inputs. In vivo intracellular recordings reveal that direction selectivity does not derive from cross-channel coincidence detection but hinges on the amplitudes and activation sequence of auditory nerve inputs tuned to clusters of hotspot frequencies. A simple biophysical model of octopus cells excited with real nerve spike trains recreates direction selectivity through the interaction of intrinsic membrane conductances with the activation sequence of clustered inputs. We conclude that octopus cells are sequence detectors, sensitive to temporal patterns across cochlear frequency channels. The detection of sequences rather than coincidences is a much simpler but powerful operation to extract temporal information.
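The proposed computation — sensitivity to the activation order of inputs tuned to different frequencies, rather than to their mere coincidence — can be sketched with a toy model (hypothetical cochlear delays, EPSP time constant, and sweep timing; not the paper's biophysical model):

```python
import numpy as np

# Three auditory-nerve inputs tuned to low/mid/high frequencies, with
# hypothetical cochlear delays that are longest for low frequencies.
cochlear_delay = np.array([1.5, 1.0, 0.5])   # ms, low -> high CF
dt = 0.01                                     # ms
t = np.arange(0.0, 10.0, dt)

def soma_peak(activation_times_ms, tau=0.3):
    """Peak of the summed exponential EPSPs for a fast (short-tau)
    membrane; a large peak stands in for spiking."""
    v = np.zeros_like(t)
    for t0 in activation_times_ms:
        v += np.where(t >= t0, np.exp(-(t - t0) / tau), 0.0)
    return float(v.max())

# A sweep activates the channels 0.5 ms apart, in direction-dependent order.
sweep_step = 0.5
up = cochlear_delay + sweep_step * np.array([0, 1, 2])    # low CF first
down = cochlear_delay + sweep_step * np.array([2, 1, 0])  # high CF first

peak_up, peak_down = soma_peak(up), soma_peak(down)
```

The upward sweep's channel order cancels the hypothetical cochlear delays, so the EPSPs coincide at the soma and the peak is large; the downward order smears them out — sequence sensitivity emerging from input timing and a fast membrane.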

