Reversible inactivation of ferret auditory cortex impairs spatial and non-spatial hearing

2021
Author(s): Stephen Michael Town, Katherine C. Wood, Katarina C. Poole, Jennifer Kim Bizley

A central question in auditory neuroscience is to what extent brain regions are functionally specialized for processing specific sound features such as sound location and identity. In auditory cortex, correlations between neural activity and sounds support both the specialization of distinct cortical subfields and the encoding of multiple sound features within individual cortical areas. However, few studies have tested the causal contribution of auditory cortex to hearing in multiple contexts. Here we tested the role of auditory cortex in both spatial and non-spatial hearing. We reversibly inactivated the border between the middle and posterior ectosylvian gyri using cooling (n = 2) or optogenetics (n = 1) as ferrets discriminated vowel sounds in clean and noisy conditions. Animals with cooling loops were then retrained to localize noise bursts from multiple locations and retested with cooling. In both ferrets, cooling impaired sound localization and vowel discrimination in noise, but not discrimination in clean conditions. We also tested the effects of cooling on vowel discrimination in noise when vowel and noise were colocated or spatially separated. Here, cooling exaggerated the deficits in discriminating vowels in colocated noise, resulting in a larger performance benefit from spatial separation of the sounds and thus stronger spatial release from masking during cortical inactivation. Together, our results show that auditory cortex contributes to both spatial and non-spatial hearing, consistent with single-unit recordings in the same brain region. The deficits we observed did not reflect general impairments in hearing, but rather arose in more realistic behaviors that require the use of information about both sound location and identity.

F1000Research, 2018, Vol. 7, p. 1555
Author(s): Andrew J. King, Sundeep Teki, Ben D. B. Willmore

Our ability to make sense of the auditory world results from neural processing that begins in the ear, goes through multiple subcortical areas, and continues in the cortex. The specific contribution of the auditory cortex to this chain of processing is far from understood. Although many of the properties of neurons in the auditory cortex resemble those of subcortical neurons, they show somewhat more complex selectivity for sound features, which is likely to be important for the analysis of natural sounds, such as speech, in real-life listening conditions. Furthermore, recent work has shown that auditory cortical processing is highly context-dependent, integrates auditory inputs with other sensory and motor signals, depends on experience, and is shaped by cognitive demands, such as attention. Thus, in addition to being the locus for more complex sound selectivity, the auditory cortex is increasingly understood to be an integral part of the network of brain regions responsible for prediction, auditory perceptual decision-making, and learning. In this review, we focus on three key areas that are contributing to this understanding: the sound features that are preferentially represented by cortical neurons, the spatial organization of those preferences, and the cognitive roles of the auditory cortex.


2021, Vol. 150 (4), p. A304
Author(s): Yonghee Oh, Hannah Schoenfeld, Allison O. Layne, Sarah E. Bridges

2002, Vol. 88 (1), pp. 540-543
Author(s): John J. Foxe, Glenn R. Wylie, Antigona Martinez, Charles E. Schroeder, Daniel C. Javitt, ...

Using high-field (3 Tesla) functional magnetic resonance imaging (fMRI), we demonstrate that auditory and somatosensory inputs converge in a subregion of human auditory cortex along the superior temporal gyrus. Further, simultaneous stimulation in both sensory modalities resulted in activity exceeding that predicted by summing the responses to the unisensory inputs, thereby showing multisensory integration in this convergence region. Recently, intracranial recordings in macaque monkeys have shown similar auditory-somatosensory convergence in a subregion of auditory cortex directly caudomedial to primary auditory cortex (area CM). The multisensory region identified in the present investigation may be the human homologue of CM. Our finding of auditory-somatosensory convergence in early auditory cortices contributes to mounting evidence for multisensory integration early in the cortical processing hierarchy, in brain regions that were previously assumed to be unisensory.
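The integration criterion described above is simple to state: a region is flagged as multisensory when its response to combined stimulation exceeds the sum of the two unisensory responses. The following is a minimal sketch of that superadditivity test; the function name and the example signal values are illustrative, not taken from the study.

```python
# Minimal sketch of the superadditivity criterion for multisensory
# integration: a region is considered integrative when its response to
# combined stimulation exceeds the sum of the unisensory responses.
# Variable names and example values are illustrative, not from the study.

def superadditivity_index(auditory: float, somatosensory: float,
                          bimodal: float) -> float:
    """Fractional enhancement of the bimodal response over the
    sum of the two unisensory responses."""
    unisensory_sum = auditory + somatosensory
    return (bimodal - unisensory_sum) / unisensory_sum

# Hypothetical BOLD signal changes (% relative to baseline):
a, s, both = 0.8, 0.5, 1.6
idx = superadditivity_index(a, s, both)
print(f"enhancement index = {idx:.2f}")  # > 0 indicates superadditivity
```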


2020, Vol. 31 (4), pp. 271-276
Author(s): Grant King, Nicole E. Corbin, Lori J. Leibold, Emily Buss

Background: Speech recognition in complex multisource environments is challenging, particularly for listeners with hearing loss. One source of difficulty is the reduced ability of listeners with hearing loss to benefit from spatial separation of the target and masker, an effect called spatial release from masking (SRM). Despite the prevalence of complex multisource environments in everyday life, SRM is not routinely evaluated in the audiology clinic.

Purpose: The purpose of this study was to demonstrate the feasibility of assessing SRM in adults using widely available tests of speech-in-speech recognition that can be conducted using standard clinical equipment.

Research Design: Participants were 22 young adults with normal hearing. The task was masked sentence recognition, using each of five clinically available corpora with speech maskers. The target always sounded like it originated from directly in front of the listener, and the masker either sounded like it originated from the front (colocated with the target) or from the side (separated from the target). In the real spatial manipulation conditions, source location was manipulated by routing the target and masker either to a single speaker or to two speakers: one directly in front of the participant, and one mounted in an adjacent corner, 90° to the right. In the perceived spatial separation conditions, the target and masker were presented from both speakers with delays that made them sound as if they were either colocated or separated.

Results: With real spatial manipulations, the mean SRM ranged from 7.1 to 11.4 dB, depending on the speech corpus. With perceived spatial manipulations, the mean SRM ranged from 1.8 to 3.1 dB. Whereas real separation improves the signal-to-noise ratio in the ear contralateral to the masker, SRM in the perceived spatial separation conditions is based solely on interaural timing cues.

Conclusions: The finding of robust SRM with widely available speech corpora supports the feasibility of measuring this important aspect of hearing in the audiology clinic. The finding of a small but significant SRM in the perceived spatial separation conditions suggests that modified materials could be used to evaluate the use of interaural timing cues specifically.
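The SRM values reported above are differences between two thresholds: performance with the masker colocated with the target versus separated from it. A minimal sketch of that scoring follows; the function name and the speech reception thresholds are hypothetical, not data from this study.

```python
# Minimal sketch of how spatial release from masking (SRM) is scored:
# the difference between the speech reception threshold (SRT, dB SNR)
# measured with a colocated masker and the SRT with a separated masker.
# The thresholds below are illustrative, not data from this study.

def spatial_release(srt_colocated_db: float, srt_separated_db: float) -> float:
    """Positive values mean spatial separation made the task easier."""
    return srt_colocated_db - srt_separated_db

# Hypothetical SRTs for one listener and one speech corpus:
srm_real = spatial_release(srt_colocated_db=-2.0, srt_separated_db=-11.0)
srm_perceived = spatial_release(srt_colocated_db=-2.0, srt_separated_db=-4.5)
print(f"real separation SRM:      {srm_real:.1f} dB")       # 9.0 dB
print(f"perceived separation SRM: {srm_perceived:.1f} dB")  # 2.5 dB
```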


2019, Vol. 121 (4), pp. 1501-1512
Author(s): Stephen Gareth Hörpel, Uwe Firzlaff

Bats use a large repertoire of calls for social communication. In the bat Phyllostomus discolor, social communication calls are often characterized by sinusoidal amplitude and frequency modulations with modulation frequencies in the range of 100–130 Hz. However, peaks in mammalian auditory cortical modulation transfer functions are typically limited to modulation frequencies below 100 Hz. We investigated the coding of sinusoidally amplitude-modulated sounds in auditory cortical neurons in P. discolor by constructing rate and temporal modulation transfer functions. Neuronal responses to playbacks of various communication calls were additionally recorded and compared with the neurons’ responses to sinusoidally amplitude-modulated sounds. Cortical neurons in the posterior dorsal field of the auditory cortex were tuned to unusually high modulation frequencies: rate modulation transfer functions often peaked around 130 Hz (median: 87 Hz), and the median of the highest modulation frequency that evoked significant phase-locking was also 130 Hz. Both values are much higher than reported from the auditory cortex of other mammals, with more than 51% of the units preferring modulation frequencies exceeding 100 Hz. Conspicuously, the fast modulations preferred by the neurons match the fast amplitude and frequency modulations of prosocial, and mostly of aggressive, communication calls in P. discolor. We suggest that the preference for fast amplitude modulations in the P. discolor dorsal auditory cortex serves to reliably encode the fast modulations seen in their communication calls.

NEW & NOTEWORTHY: Neural processing of temporal sound features is crucial for the analysis of communication calls. In bats, these calls are often characterized by fast temporal envelope modulations. Because auditory cortex neurons typically encode only low modulation frequencies, it is unclear how species-specific vocalizations are cortically processed. We show that auditory cortex neurons in the bat Phyllostomus discolor encode fast temporal envelope modulations. This property improves response specificity to communication calls and thus might support species-specific communication.
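Phase-locking to amplitude modulation of the kind reported above is commonly quantified with vector strength, which projects each spike onto the phase of the modulation cycle and measures how tightly spikes cluster. The sketch below is one standard formulation, not the authors' analysis code; the spike times and modulation frequency are illustrative.

```python
# A sketch of one standard way to quantify phase-locking to a
# sinusoidally amplitude-modulated (SAM) stimulus: vector strength,
# with the Rayleigh statistic as a significance test. Spike times and
# the modulation frequency below are illustrative, not recorded data.

import numpy as np

def vector_strength(spike_times_s: np.ndarray, mod_freq_hz: float) -> tuple:
    """Return (vector strength, Rayleigh statistic 2*N*VS^2)."""
    phases = 2 * np.pi * mod_freq_hz * spike_times_s  # phase of each spike
    vs = np.abs(np.mean(np.exp(1j * phases)))         # 1 = perfect locking
    rayleigh = 2 * len(spike_times_s) * vs**2         # > 13.8 ~ p < 0.001
    return vs, rayleigh

# Hypothetical spikes loosely locked to a 130-Hz modulation:
rng = np.random.default_rng(0)
t = rng.uniform(0, 0.5, 200)
t = (np.round(t * 130) + rng.normal(0, 0.1, 200)) / 130  # jittered cycles
vs, stat = vector_strength(t, 130.0)
print(f"VS = {vs:.2f}, Rayleigh = {stat:.1f}")
```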


1998, Vol. 80 (5), pp. 2743-2764
Author(s): Jos J. Eggermont

Eggermont, Jos J. Representation of spectral and temporal sound features in three cortical fields of the cat. Similarities outweigh differences. J. Neurophysiol. 80: 2743–2764, 1998. This study investigates the degree of similarity of three different auditory cortical areas with respect to the coding of periodic stimuli. Simultaneous single- and multiunit recordings in response to periodic stimuli were made from primary auditory cortex (AI), anterior auditory field (AAF), and secondary auditory cortex (AII) in the cat to address the following questions: Is there, within each cortical area, a difference in the temporal coding of periodic click trains, amplitude-modulated (AM) noise bursts, and AM tone bursts? Is there a difference in this coding between the three cortical fields? Is the coding based on the temporal modulation transfer function (tMTF) the same as that based on the all-order interspike-interval (ISI) histogram? Is the perceptual distinction between rhythm and roughness for AM stimuli related to a temporal versus spatial representation of AM frequency in auditory cortex? Are interarea differences in temporal response properties related to differences in frequency tuning? The results showed that: 1) AM stimuli produce much higher best modulation frequencies (BMFs) and limiting rates than periodic click trains. 2) For periodic click trains and AM noise, the BMFs and limiting rates were not significantly different across the three areas. However, for AM tones the BMFs and limiting rates were about a factor of 2 lower in AAF than in the other areas. 3) The representation of stimulus periodicity in ISIs resulted in significantly lower mean BMFs and limiting rates compared with those estimated from the tMTFs. The difference was relatively small for periodic click trains but quite large for both AM stimuli, especially in AI and AII. 4) Modulation frequencies <20 Hz were represented in the ISIs, suggesting that rhythm is coded in auditory cortex in a temporal fashion. 5) In general, only a modest interdependence of spectral and temporal response properties in AI and AII was found. The BMFs were positively correlated with characteristic frequency in AAF. The limiting rate was positively correlated with the frequency-tuning-curve bandwidth in AI and AII but not in AAF. Only in AAF was a correlation between BMF and minimum latency found. Thus, whereas differences were found in frequency-tuning-curve bandwidth and minimum response latencies among the three areas, the coding of periodic stimuli in these areas was fairly similar, with the exception of the very poor representation of AM tones in AII. This suggests a strong parallel processing organization in auditory cortex.
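The two periodicity measures contrasted in this abstract can be sketched directly: a BMF read off a tMTF, and a dominant interval taken from the all-order ISI histogram, which counts intervals between all spike pairs rather than only adjacent spikes. The Python sketch below is illustrative only; the function names and inputs are ours, not the paper's.

```python
# Sketch of the two periodicity measures compared in the study: the
# best modulation frequency (BMF) read from a temporal modulation
# transfer function (tMTF), and the dominant interval of the all-order
# interspike-interval (ISI) histogram. All inputs are illustrative.

import numpy as np

def bmf_from_tmtf(mod_freqs_hz: np.ndarray, sync_rate: np.ndarray) -> float:
    """Modulation frequency at which synchronized firing is maximal."""
    return float(mod_freqs_hz[np.argmax(sync_rate)])

def all_order_isi_peak(spike_times_s: np.ndarray,
                       max_isi_s: float = 0.1) -> float:
    """Dominant interval across ALL spike pairs (not just adjacent
    spikes), expressed as an equivalent frequency in Hz."""
    diffs = spike_times_s[None, :] - spike_times_s[:, None]
    isis = diffs[(diffs > 0) & (diffs <= max_isi_s)]
    counts, edges = np.histogram(isis, bins=100, range=(0, max_isi_s))
    centers = (edges[:-1] + edges[1:]) / 2
    return float(1.0 / centers[np.argmax(counts)])

# Hypothetical unit: tMTF peaking at 32 Hz ...
freqs = np.array([2, 4, 8, 16, 32, 64, 128])
sync = np.array([2, 5, 9, 14, 20, 11, 3])
print(f"BMF from tMTF: {bmf_from_tmtf(freqs, sync):.0f} Hz")
# ... while regular spiking every 40 ms puts the ISI estimate near 25 Hz,
# mirroring the lower ISI-based values reported in the abstract:
spikes = np.arange(0, 1.0, 0.04)
print(f"ISI-based estimate: {all_order_isi_peak(spikes):.0f} Hz")
```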

