Functional Asymmetry for Auditory Processing in Human Primary Auditory Cortex

2003 ◽  
Vol 23 (37) ◽  
pp. 11516-11522 ◽  
Author(s):  
Joseph T. Devlin ◽  
Josephine Raley ◽  
Elizabeth Tunbridge ◽  
Katherine Lanary ◽  
Anna Floyer-Lea ◽  
...  
2004 ◽  
Vol 100 (3) ◽  
pp. 617-625 ◽  
Author(s):  
Wolfgang Heinke ◽  
Ramona Kenntner ◽  
Thomas C. Gunter ◽  
Daniela Sammler ◽  
Derk Olthoff ◽  
...  

Background: It is an open question whether cognitive processes of auditory perception that are mediated by functionally different cortices exhibit the same sensitivity to sedation. The auditory event-related potentials P1, mismatch negativity (MMN), and early right anterior negativity (ERAN) originate from different cortical areas and reflect different stages of auditory processing. The P1 originates mainly from the primary auditory cortex. The MMN is generated in or in the close vicinity of the primary auditory cortex but is also dependent on frontal sources. The ERAN mainly originates from frontal generators. The purpose of the study was to investigate the effects of increasing propofol sedation on different stages of auditory processing as reflected in the P1, MMN, and ERAN.

Methods: The P1, MMN, and ERAN were recorded preoperatively in 18 patients during four levels of anesthesia adjusted with target-controlled infusion: awake state (target propofol concentration 0.0 microg/ml), light sedation (0.5 microg/ml), deep sedation (1.5 microg/ml), and unconsciousness (2.5-3.0 microg/ml). Simultaneously, depth of propofol anesthesia was assessed using the Bispectral Index.

Results: Propofol sedation resulted in a progressive decrease in amplitudes and increase in latencies, with a similar pattern for the MMN and ERAN. The MMN and ERAN were elicited during sedation but abolished during unconsciousness. In contrast, the amplitude of the P1 was unchanged by sedation but markedly decreased during unconsciousness.

Conclusion: The results indicate differential effects of propofol sedation on cognitive functions that mainly involve the auditory cortices versus cognitive functions that involve the frontal cortices.
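The MMN described above is conventionally isolated as a deviant-minus-standard difference wave from an oddball paradigm. A toy simulation sketches the idea; all amplitudes, latencies, sampling parameters, and noise levels here are illustrative assumptions, not values from this study:

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n_samp = 500, 300                   # 600 ms epochs at 500 Hz (illustrative)
t = np.arange(n_samp) / fs

def simulate_erp(mmn_amp, n_trials=200):
    """Toy epochs: standards and deviants share a P1-like deflection;
    deviants add a negative deflection around 175 ms (MMN latency range)."""
    p1 = 2.0 * np.exp(-0.5 * ((t - 0.05) / 0.01) ** 2)
    mmn = -mmn_amp * np.exp(-0.5 * ((t - 0.175) / 0.025) ** 2)
    standards = p1 + rng.normal(0, 5.0, (n_trials, n_samp))
    deviants = p1 + mmn + rng.normal(0, 5.0, (n_trials, n_samp))
    return standards.mean(0), deviants.mean(0)

std_erp, dev_erp = simulate_erp(mmn_amp=3.0)
difference = dev_erp - std_erp          # MMN difference wave
print(difference.min())                 # peak MMN amplitude (negative)
```

Averaging over trials suppresses the single-trial noise, so the negative MMN deflection survives in the difference wave.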


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Daniela Saderi ◽  
Zachary P Schwartz ◽  
Charles R Heller ◽  
Jacob R Pennington ◽  
Stephen V David

Both generalized arousal and engagement in a specific task influence sensory neural processing. To isolate effects of these state variables in the auditory system, we recorded single-unit activity from primary auditory cortex (A1) and inferior colliculus (IC) of ferrets during a tone detection task, while monitoring arousal via changes in pupil size. We used a generalized linear model to assess the influence of task engagement and pupil size on sound-evoked activity. In both areas, these two variables affected independent neural populations. Pupil size effects were more prominent in IC, while pupil and task engagement effects were equally likely in A1. Task engagement was correlated with larger pupil size; thus, some apparent effects of task engagement should in fact be attributed to fluctuations in pupil size. These results indicate a hierarchy of auditory processing, where generalized arousal enhances activity in the midbrain, and effects specific to task engagement become more prominent in cortex.
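The regression idea behind this analysis can be sketched in a few lines. The following uses an ordinary least-squares linear model as a stand-in for the paper's GLM, with invented coefficients and trial counts; it only illustrates how pupil and engagement contributions are separated by including both as regressors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-unit data: trial-by-trial spike counts modeled as a
# baseline plus independent contributions of pupil size and task engagement.
n_trials = 500
pupil = rng.normal(0.0, 1.0, n_trials)       # z-scored pupil size
engaged = rng.integers(0, 2, n_trials)       # 0 = passive, 1 = task-engaged
rate = 10.0 + 1.5 * pupil + 3.0 * engaged    # assumed true coefficients
spikes = rng.poisson(np.clip(rate, 0, None))

# Design matrix: intercept, pupil regressor, engagement regressor.
X = np.column_stack([np.ones(n_trials), pupil, engaged])
beta, *_ = np.linalg.lstsq(X, spikes, rcond=None)

print(beta)  # estimates approach [10, 1.5, 3] as n_trials grows
```

Because pupil size and engagement enter the model jointly, a unit whose apparent engagement effect is really driven by correlated arousal will load on the pupil regressor instead.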


2021 ◽  
Author(s):  
Pilar Montes-Lourido ◽  
Manaswini Kar ◽  
Stephen V David ◽  
Srivatsun Sadagopan

Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how non-selective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from non-selective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in non-selective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs, a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in three auditory processing stages: the thalamus (vMGB), and thalamorecipient (L4) and superficial layers (L2/3) of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call-selectivity, with about a third of neurons responding to only one or two call types. These A1 L2/3 neurons only responded to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call feature selectivity. Information theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and was spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information, and conveyed high levels of information per spike.
These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that observed cortical specializations for call processing emerge in A1, and set the stage for further mechanistic studies.


2020 ◽  
Author(s):  
Daniela Saderi ◽  
Zachary P. Schwartz ◽  
Charlie R. Heller ◽  
Jacob R. Pennington ◽  
Stephen V. David

The brain’s representation of sound is influenced by multiple aspects of internal behavioral state. Following engagement in an auditory discrimination task, both generalized arousal and task-specific control signals can influence auditory processing. To isolate effects of these state variables on auditory processing, we recorded single-unit activity from primary auditory cortex (A1) and the inferior colliculus (IC) of ferrets as they engaged in a go/no-go tone detection task while simultaneously monitoring arousal via pupillometry. We used a generalized linear model to isolate the contributions of task engagement and arousal to spontaneous and evoked neural activity. Fluctuations in pupil-indexed arousal were correlated with task engagement, but these two variables could be dissociated in most experiments. In both A1 and IC, individual units could be modulated by task and/or arousal, but the two state variables affected independent neural populations. Arousal effects were more prominent in IC, while arousal and engagement effects occurred with about equal frequency in A1. These results indicate that some changes in neural activity attributed to task engagement in previous studies should in fact be attributed to global fluctuations in arousal. Arousal effects also explain some persistent changes in neural activity observed in passive conditions post-behavior. Together, these results indicate a hierarchy in the auditory system, where generalized arousal enhances activity in the midbrain and cortex, while task-specific changes in neural coding become more prominent in cortex.


2012 ◽  
Vol 107 (12) ◽  
pp. 3296-3307 ◽  
Author(s):  
Nadja Schinkel-Bielefeld ◽  
Stephen V. David ◽  
Shihab A. Shamma ◽  
Daniel A. Butts

Intracellular studies have revealed the importance of cotuned excitatory and inhibitory inputs to neurons in auditory cortex, but typical spectrotemporal receptive field models of neuronal processing cannot account for this overlapping tuning. Here, we apply a new nonlinear modeling framework to extracellular data recorded from primary auditory cortex (A1) that enables us to explore how the interplay of excitation and inhibition contributes to the processing of complex natural sounds. The resulting description produces more accurate predictions of observed spike trains than the linear spectrotemporal model, and the properties of excitation and inhibition inferred by the model are furthermore consistent with previous intracellular observations. It can also describe several nonlinear properties of A1 that are not captured by linear models, including intensity tuning and selectivity to sound onsets and offsets. These results thus offer a broader picture of the computational role of excitation and inhibition in A1 and support the hypothesis that their interactions play an important role in the processing of natural auditory stimuli.
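The core idea of the nonlinear framework above, that separately rectified excitatory and inhibitory channels with overlapping tuning can produce effects a single linear filter cannot, can be sketched as a toy linear-nonlinear model. The filters, their cotuning, and the rectifying nonlinearity below are assumptions for illustration, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy spectrogram input: (time bins, frequency channels)
T, F = 2000, 12
stim = rng.normal(0.0, 1.0, (T, F))

# Hypothetical cotuned spectral filters: same best frequency, with
# inhibition slightly broader and weaker in amplitude than excitation.
freqs = np.arange(F)
exc = np.exp(-0.5 * ((freqs - 6) / 1.0) ** 2)
inh = 0.8 * np.exp(-0.5 * ((freqs - 6) / 2.0) ** 2)

def rectify(x):
    return np.maximum(x, 0.0)

# Each pathway is rectified before being combined, so the model is
# nonlinear even though both filters are linear in the stimulus.
drive = rectify(stim @ exc) - rectify(stim @ inh)
rate = rectify(drive)  # output nonlinearity -> predicted firing rate
print(rate.mean())
```

A purely linear spectrotemporal model would collapse the two pathways into one net filter; rectifying each pathway first is what lets overlapping excitation and inhibition interact, e.g. to produce intensity tuning.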


2019 ◽  
Vol 30 (2) ◽  
pp. 618-627 ◽  
Author(s):  
Deborah F Levy ◽  
Stephen M Wilson

Speech perception involves mapping from a continuous and variable acoustic speech signal to discrete, linguistically meaningful units. However, it is unclear where in the auditory processing stream speech sound representations cease to be veridical (faithfully encoding precise acoustic properties) and become categorical (encoding sounds as linguistic categories). In this study, we used functional magnetic resonance imaging and multivariate pattern analysis to determine whether tonotopic primary auditory cortex (PAC), defined as tonotopic voxels falling within Heschl’s gyrus, represents one class of speech sounds—vowels—veridically or categorically. For each of 15 participants, 4 individualized synthetic vowel stimuli were generated such that the vowels were equidistant in acoustic space, yet straddled a categorical boundary (with the first 2 vowels perceived as [i] and the last 2 perceived as [ɪ]). Each participant’s 4 vowels were then presented in a block design with an irrelevant but attention-demanding level change detection task. We found that in PAC bilaterally, neural discrimination between pairs of vowels that crossed the categorical boundary was more accurate than neural discrimination between equivalently spaced vowel pairs that fell within a category. These findings suggest that PAC does not represent vowel sounds veridically, but that encoding of vowels is shaped by linguistically relevant phonemic categories.
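The multivariate pattern analysis logic, comparing decoding accuracy for vowel pairs that cross the category boundary against equally spaced within-category pairs, can be sketched with simulated voxel patterns and a leave-one-out nearest-centroid decoder. The decoder and all data below are stand-ins for illustration, not the study's actual classifier or measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated voxel patterns for 4 vowels; categorical coding is modeled by
# giving vowels 0-1 one shared pattern and vowels 2-3 another.
n_vox, n_rep = 50, 40
cat_a, cat_b = rng.normal(0, 1, n_vox), rng.normal(0, 1, n_vox)
means = [cat_a, cat_a, cat_b, cat_b]
data = {v: means[v] + rng.normal(0, 2.0, (n_rep, n_vox)) for v in range(4)}

def pair_accuracy(v1, v2):
    """Leave-one-out nearest-centroid decoding of a vowel pair."""
    x = np.vstack([data[v1], data[v2]])
    y = np.array([0] * n_rep + [1] * n_rep)
    correct = 0
    for i in range(len(y)):
        mask = np.ones(len(y), bool)
        mask[i] = False
        c0 = x[mask & (y == 0)].mean(0)
        c1 = x[mask & (y == 1)].mean(0)
        pred = int(np.linalg.norm(x[i] - c1) < np.linalg.norm(x[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

within = (pair_accuracy(0, 1) + pair_accuracy(2, 3)) / 2  # same category
across = pair_accuracy(1, 2)                              # crosses boundary
print(within, across)  # categorical coding predicts across > within
```

Under a veridical code all equidistant pairs would decode equally well; the across-boundary advantage is the signature of categorical encoding.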


2014 ◽  
Vol 74 (10) ◽  
pp. 972-986 ◽  
Author(s):  
C.T. Engineer ◽  
T.M. Centanni ◽  
K.W. Im ◽  
M.S. Borland ◽  
N.A. Moreno ◽  
...  

2012 ◽  
Vol 107 (8) ◽  
pp. 2042-2056 ◽  
Author(s):  
Tobias Overath ◽  
Yue Zhang ◽  
Dan H. Sanes ◽  
David Poeppel

Hierarchical models of auditory processing often posit that optimal stimuli, i.e., those eliciting a maximal neural response, will increase in bandwidth and decrease in modulation rate as one ascends the auditory neuraxis. Here, we tested how bandwidth and modulation rate interact at several loci along the human central auditory pathway using functional MRI in a cardiac-gated, sparse acquisition design. Participants listened passively to both narrowband (NB) and broadband (BB) carriers (1/4- or 4-octave pink noise), which were jittered about a mean sinusoidal amplitude modulation rate of 0, 3, 29, or 57 Hz. The jittering was introduced to minimize stimulus-specific adaptation. The results revealed a clear difference between spectral bandwidth and temporal modulation rate: sensitivity to bandwidth (BB > NB) decreased from subcortical structures to nonprimary auditory cortex, whereas sensitivity to slow modulation rates was largest in nonprimary auditory cortex and largely absent in subcortical structures. Furthermore, there was no parametric interaction between bandwidth and modulation rate. These results challenge simple hierarchical models, in that BB stimuli evoked stronger responses in primary auditory cortex (and subcortical structures) rather than nonprimary cortex. Furthermore, the strong preference for slow modulation rates in nonprimary cortex demonstrates the compelling global sensitivity of auditory cortex to modulation rates that are dominant in the principal signals that we process, e.g., speech.
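A rough sketch of generating a rate-jittered sinusoidally amplitude-modulated (SAM) pink-noise stimulus of the kind described. The sample rate, jitter depth, and broadband-only carrier are simplifying assumptions, not the study's exact parameters (which used 1/4- and 4-octave carriers and specific jitter statistics):

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 16000                 # sample rate in Hz (illustrative)
t = np.arange(int(fs * 1.0)) / fs  # 1 s of stimulus

def pink_noise(n):
    """Approximate pink (1/f-power) noise by shaping white noise in the
    frequency domain."""
    spec = np.fft.rfft(rng.normal(0, 1, n))
    f = np.fft.rfftfreq(n, 1 / fs)
    spec[1:] /= np.sqrt(f[1:])   # 1/f power -> 1/sqrt(f) amplitude
    x = np.fft.irfft(spec, n)
    return x / np.abs(x).max()

def jittered_sam(carrier, mean_rate, jitter=0.2):
    """Sinusoidal amplitude modulation whose instantaneous rate is
    jittered about mean_rate, to minimize stimulus-specific adaptation."""
    rate = mean_rate * (1 + jitter * rng.uniform(-1, 1, len(carrier)))
    phase = 2 * np.pi * np.cumsum(rate) / fs
    envelope = 0.5 * (1 + np.sin(phase))   # modulation depth 100%
    return carrier * envelope

stim = jittered_sam(pink_noise(len(t)), mean_rate=29.0)
print(stim.shape)
```

Integrating a jittered instantaneous rate into a phase (rather than picking a fixed frequency) is one simple way to keep the modulation centered on the mean rate while avoiding a strictly periodic envelope.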


PLoS Biology ◽  
2021 ◽  
Vol 19 (6) ◽  
pp. e3001299
Author(s):  
Pilar Montes-Lourido ◽  
Manaswini Kar ◽  
Stephen V. David ◽  
Srivatsun Sadagopan

Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how nonselective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from nonselective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in nonselective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs (GPs), a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in 3 auditory processing stages—the thalamus (ventral medial geniculate body (vMGB)), and thalamorecipient (L4) and superficial layers (L2/3) of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity, with about a third of neurons responding to only 1 or 2 call types. These A1 L2/3 neurons only responded to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call feature selectivity. Information theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and was spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike.
These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that observed cortical specializations for call processing emerge in A1 and set the stage for further mechanistic studies.
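The contrast between sustained, L4-like responses and brief, L2/3-like bursts carrying high information per spike can be illustrated with a standard rate-profile estimate of bits per spike (a Brenner-style formula, used here for illustration and not necessarily the paper's exact analysis):

```python
import numpy as np

def info_per_spike(rate):
    """Bits per spike from a time-binned mean firing-rate profile:
    the time average of (r/rbar) * log2(r/rbar), zero bins excluded."""
    rbar = rate.mean()
    p = rate / rbar
    return np.where(p > 0, p * np.log2(np.maximum(p, 1e-12)), 0.0).mean()

# Two toy responses with the same mean rate but different profiles:
sustained = np.full(100, 10.0)   # responds throughout the call (L4-like)
burst = np.zeros(100)
burst[:5] = 200.0                # brief, reliable burst (L2/3-like)

print(info_per_spike(sustained), info_per_spike(burst))
```

A flat profile yields zero bits per spike (spike timing says nothing about the stimulus epoch), whereas concentrating the same number of spikes into a brief window yields log2 of the concentration factor.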


2020 ◽  
Vol 30 (8) ◽  
pp. 4481-4495
Author(s):  
H Azimi ◽  
A-L Klaassen ◽  
K Thomas ◽  
M A Harvey ◽  
G Rainer

Many studies have implicated the basal forebrain (BF) as a potent regulator of sensory encoding even at the earliest stages of cortical processing. The source of this regulation involves the well-documented corticopetal cholinergic projections from BF to primary cortical areas. However, the BF also projects to subcortical structures, including the thalamic reticular nucleus (TRN), which has abundant reciprocal connections with sensory thalamus. Here we presented naturalistic auditory stimuli to the anesthetized rat while making simultaneous single-unit recordings from the ventral medial geniculate nucleus (MGN) and primary auditory cortex (A1) during electrical stimulation of the BF. As in primary visual cortex, we find that BF stimulation increases the trial-to-trial reliability of A1 neurons, and we relate these results to changes in the response properties of MGN neurons. We discuss several lines of evidence that implicate the BF-to-thalamus pathway in the manifestation of BF-induced changes to cortical sensory processing, and support our conclusions with supplementary TRN recordings as well as studies in awake animals showing a strong relationship between endogenous BF activity and A1 reliability. Our findings suggest that the BF subcortical projections that modulate MGN play an important role in auditory processing.
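One common way to quantify the trial-to-trial reliability increase described here is the mean pairwise correlation of single-trial responses; the toy example below assumes this metric and invented response statistics purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def trial_reliability(responses):
    """Mean pairwise correlation across trials (rows = trials,
    columns = time bins); one standard reliability measure."""
    n = len(responses)
    cc = np.corrcoef(responses)
    return cc[np.triu_indices(n, 1)].mean()

# Same underlying stimulus-locked signal, different trial noise levels:
signal = np.sin(np.linspace(0, 4 * np.pi, 200))
baseline = signal + rng.normal(0, 2.0, (30, 200))   # noisy trials
stimulated = signal + rng.normal(0, 0.5, (30, 200)) # less trial noise

print(trial_reliability(baseline), trial_reliability(stimulated))
```

Reducing trial-to-trial noise relative to the stimulus-locked signal raises the pairwise correlation, which is the pattern attributed here to BF stimulation.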

